id: int64 (39 – 79M)
url: string (lengths 32 – 168)
text: string (lengths 7 – 145k)
source: string (lengths 2 – 105)
categories: list (lengths 1 – 6)
token_count: int64 (3 – 32.2k)
subcategories: list (lengths 0 – 27)
54,750,175
https://en.wikipedia.org/wiki/Hygrocybe%20bolensis
Hygrocybe bolensis is a mushroom of the waxcap genus Hygrocybe. It is generally found growing in soil in moist, shady conditions. It was described in 2000 by the mycologist Anthony M. Young. References Fungi described in 2000 Fungi of Australia bolensis Fungus species
Hygrocybe bolensis
[ "Biology" ]
65
[ "Fungi", "Fungus species" ]
54,762,521
https://en.wikipedia.org/wiki/Host-directed%20therapeutics
Host-directed therapeutics, also called host-targeted therapeutics, act via a host-mediated response to pathogens rather than acting directly on the pathogen, as traditional antibiotics do. They can change the local environment in which the pathogen exists to make it less favorable for the pathogen to live and/or grow. With these therapies, pathogen killing, e.g., bactericidal effects, will likely only occur when they are co-delivered with a traditional agent that acts directly on the pathogen, such as an antibiotic, antifungal, or antiparasitic agent. Several antiviral agents are host-directed therapeutics, and simply slow the progression of the virus rather than kill it. Host-directed therapeutics may limit pathogen proliferation, e.g., have bacteriostatic effects. Certain agents can also reduce bacterial load by enhancing host cell responses even in the absence of traditional antimicrobial agents. Types Immunomodulatory Intracellular pathogens often reside in immune cells such as macrophages. These pathogens can be obligate or facultative intracellular pathogens. Changing the innate immune response of these host cells can alter the pathogen's ability to live inside the cell. Many of these immunomodulatory host-directed therapies are adjuvants or pathogen-associated molecular patterns. They can act through Toll-like receptors (TLRs), NOD-like receptors (NLRs), C-type lectin receptors (CLRs), the mannose receptor (MR), dendritic cell-specific intercellular adhesion molecule-3 (ICAM-3)-grabbing non-integrin (DC-SIGN), complement receptors, Fc receptors, and DNA sensors (e.g., STING). Epithelial cells also host pathogens, such as Salmonella enterica. These immunomodulatory agents can also alter epithelial cell environments, since epithelial cells also have a role in innate signalling. Enhanced host cell function Autophagy modulators are one way to enhance host cell functions. 
Pathogens like Mycobacterium tuberculosis (MTB) are degraded in the autophagosome during an effective host response that clears the bacteria. Because bacteria and other pathogens like MTB can subvert cellular responses such as autophagy, they can increase their survival in the body. By reactivating effective autophagy processes, the pathogen can be cleared. Examples of this have been shown with MTB and Listeria monocytogenes. OSU-03012 is thought to modulate autophagy in its effect on Salmonella enterica and Francisella tularensis. Pathology modification Modifying lung and macrophage pathology has been shown to have a role in host-directed therapies for MTB. References Cell biology Cellular processes Connective tissue cells Human cells Immune system Immunology Lymphatic system Medicinal chemistry Programmed cell death
Host-directed therapeutics
[ "Chemistry", "Biology" ]
615
[ "Cell biology", "Immune system", "Signal transduction", "Senescence", "Organ systems", "Immunology", "Cellular processes", "nan", "Medicinal chemistry", "Biochemistry", "Programmed cell death" ]
51,986,159
https://en.wikipedia.org/wiki/Friedmann%E2%80%93Einstein%20universe
The Friedmann–Einstein universe is a model of the universe published by Albert Einstein in 1931. The model is of historic significance as the first scientific publication in which Einstein embraced the possibility of a cosmos of time-varying radius. Description Interpreting Edwin Hubble's discovery of a linear relation between the redshifts of the galaxies and their radial distance as evidence for an expanding universe, Einstein abandoned his earlier static model of the universe and embraced the dynamic cosmology of Alexander Friedmann. Removing the cosmological constant term from the Friedmann equations on the grounds that it was both unsatisfactory and unnecessary, Einstein arrived at a model of a universe that expands and then contracts, a model that was later denoted the Friedmann–Einstein model of the universe. In the model, Einstein derived simple expressions relating the density of matter, the radius of the universe and the timespan of the expansion to the Hubble constant. With the use of the contemporaneous value of 500 km·s⁻¹·Mpc⁻¹ for the Hubble constant, he calculated values of 10⁻²⁶ g·cm⁻³, 10⁸ light-years and 10¹⁰ years for the density of matter, the radius of the universe and the timespan of the expansion respectively. It has recently been shown that these calculations contain a slight systematic error. Einstein's blackboard In May 1931, Einstein chose the Friedmann–Einstein universe as the topic of his second Rhodes lecture at Oxford University. A blackboard used by Einstein during the lecture, now known as Einstein's Blackboard, has been preserved at the Museum of the History of Science, Oxford. It has been suggested that the source of the numerical errors in the Friedmann–Einstein model can be discerned on Einstein's blackboard. See also Einstein–de Sitter universe References Friedmann-Einstein Universe
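The order of magnitude of these quantities can be checked with the standard matter-only (Λ = 0) Friedmann relations; this is a hedged sketch using the modern critical-density formula and the Hubble time 1/H₀, not necessarily Einstein's own 1931 expressions, with H₀ = 500 km·s⁻¹·Mpc⁻¹ as quoted above:

```python
import math

# Rough scales of the Friedmann-Einstein model, sketched with the
# standard (Lambda = 0, matter-only) Friedmann relations rather than
# Einstein's own 1931 expressions.  H0 = 500 km/s/Mpc is the
# contemporaneous Hubble-constant value quoted in the article.
G = 6.674e-8        # gravitational constant, cm^3 g^-1 s^-2
MPC_CM = 3.086e24   # one megaparsec in centimetres
YEAR_S = 3.156e7    # one year in seconds

H0 = 500 * 1e5 / MPC_CM                     # 500 km/s/Mpc -> s^-1
rho_crit = 3 * H0**2 / (8 * math.pi * G)    # critical density, g/cm^3
hubble_time_yr = 1 / H0 / YEAR_S            # Hubble time 1/H0, years

print(f"H0            = {H0:.3e} s^-1")
print(f"rho_crit      = {rho_crit:.3e} g/cm^3")
print(f"Hubble time   = {hubble_time_yr:.3e} yr")
```

With H₀ = 500, this gives a density of a few ×10⁻²⁸ g·cm⁻³ and a Hubble time of roughly 2×10⁹ years; the gap relative to Einstein's published 10⁻²⁶ g·cm⁻³ and 10¹⁰ years is consistent with the article's remark that the 1931 calculations contain a systematic error.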
Friedmann–Einstein universe
[ "Physics" ]
372
[ "Theoretical physics", "Theoretical physics stubs" ]
51,989,620
https://en.wikipedia.org/wiki/E%20band%20%28NATO%29
The NATO E band is a designation given to the radio frequencies from 2000 to 3000 MHz (equivalent to wavelengths between 15 and 10 cm) during the Cold War period. Since 1992, detailed frequency allocations, allotments and assignments have been in line with the NATO Joint Civil/Military Frequency Agreement (NJFA). However, in order to generically identify military radio spectrum requirements, e.g. for crisis-management planning, training, electronic-warfare activities, radar or military operations, the NATO band system is often used. References Radio spectrum Military equipment of NATO
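The quoted wavelength limits follow directly from λ = c/f; a minimal check (the function name is our own) for the 2000–3000 MHz band edges given in the article:

```python
# Frequency <-> wavelength check for the NATO E band (2000-3000 MHz),
# using lambda = c / f.  Band edges are from the article; the helper
# name is illustrative, not from any standard library.
C = 299_792_458.0  # speed of light, m/s

def wavelength_cm(freq_mhz: float) -> float:
    """Free-space wavelength in centimetres for a frequency in MHz."""
    return C / (freq_mhz * 1e6) * 100

print(f"{wavelength_cm(2000):.1f} cm")  # ~15 cm at the lower band edge
print(f"{wavelength_cm(3000):.1f} cm")  # ~10 cm at the upper band edge
```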
E band (NATO)
[ "Physics" ]
114
[ "Radio spectrum", "Spectrum (physical sciences)", "Electromagnetic spectrum" ]
51,990,283
https://en.wikipedia.org/wiki/F%20band%20%28waveguide%29
The waveguide F band is the range of radio frequencies from 90 GHz to 140 GHz in the electromagnetic spectrum, corresponding to the recommended frequency band of operation of WR8 waveguides. These frequencies are equivalent to wavelengths between 3.33 mm and 2.14 mm. The F band is in the EHF range of the radio spectrum. References Radio spectrum
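The recommended band is tied to the waveguide's TE₁₀ cutoff, f_c = c/(2a). A sketch under the assumption (standard for WR-8, but not stated in the article) that the broad-wall width is a = 0.080 in:

```python
# TE10 cutoff frequency for a WR-8 rectangular waveguide: f_c = c / (2a).
# The broad-wall width a = 0.080 in (2.032 mm) is the usual WR-8
# dimension; it is an assumption here, not a figure from the article.
C = 299_792_458.0       # speed of light, m/s
a = 0.080 * 25.4e-3     # broad-wall width, m

f_cutoff_ghz = C / (2 * a) / 1e9
print(f"TE10 cutoff ~ {f_cutoff_ghz:.1f} GHz")  # ~73.8 GHz
```

The recommended 90–140 GHz band then sits at roughly 1.2–1.9 times the cutoff, the usual rule of thumb for single-mode rectangular waveguide operation.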
F band (waveguide)
[ "Physics" ]
74
[ "Radio spectrum", "Spectrum (physical sciences)", "Electromagnetic spectrum" ]
51,992,495
https://en.wikipedia.org/wiki/Asgardia
Asgardia, also known as the Space Kingdom of Asgardia and Asgardia the Space Nation, is a "virtual nation" formed by a group of people who have launched a satellite into Earth orbit. They refer to themselves as "Asgardians" and have given their satellite the name Asgardia-1. They have declared sovereignty over the space occupied by and contained within Asgardia-1. The Asgardians have adopted a constitution, and they intend to access outer space free of the control of existing nations and to establish a permanent settlement on the Moon by 2043. Igor Ashurbeyli, the founder of the Asgardia Independent Research Center, proposed the establishment of Asgardia on 12 October 2016. The Constitution of the Space Kingdom of Asgardia was adopted on 18 June 2017 and became effective on 9 September 2017. Asgardia's administrative center is located in Vienna, Austria. The Cygnus spacecraft that carried Asgardia-1 into space released Asgardia-1 and two other satellites on 12 November 2017. The Space Kingdom of Asgardia has claimed that it is now "the first nation to have all of its territory in space." Legal scholars doubt that Asgardia-1 can be regarded as sovereign territory, and Asgardia has not yet attained the goal of being recognised as a nation state. Etymology Asgardia is taken from the name of one of the Nine Worlds in Norse religion: Asgard. Home to the Æsir tribe of gods, Asgard is derived from Old Norse áss, "god", and garðr, "enclosure"; from the Indo-European roots ansu-, "spirit, demon" (see the cognates ahura and asura) and gher-, "grasp, enclose" (see the cognates garden and yard), essentially meaning "garden of the gods." History Asgardia Independent Research Center The Asgardia Independent Research Center (AIRC), formerly the Aerospace International Research Center, was founded by Igor Ashurbeyli in 2013. In 2014, the AIRC began publication of an international space journal, ROOM, of which Ashurbeyli is the editor-in-chief. 
On February 5, 2016, Ashurbeyli was awarded the UNESCO Medal for contributions to the development of nanoscience and nanotechnologies during a ceremony held at UNESCO headquarters in Paris. AIRC AAS is the only institute in Austria whose activity is fully dedicated to research on the Solar System, extraterrestrial life and the Earth using space technology and satellite techniques. Since 2013, the AIRC staff has constructed, developed and prepared for launch over 30 instruments and participated in experiments in 15 space missions, for example ESA's Mars Express, Rosetta (a comet mission), Venus Express and BepiColombo (a Mercury mission), and CNES's DEMETER and TARANIS missions. AIRC has also collaborated with NASA on the IBEX mission. In 2015, the AIRC established a close collaboration with Asgardia. Founding On 12 October 2016, Ashurbeyli announced in a press conference in Paris, France, "the birth of the new space nation Asgardia." The ultimate aim of the project is to create a new nation that allows access to outer space free of the control of existing nations. The current space-law framework, the Outer Space Treaty, requires governments to authorise and supervise all space activities, including the activities of non-governmental entities such as commercial and non-profit organisations; by attempting to create a nation, those behind Asgardia hope to avoid the tight restrictions that the current system imposes. It officially calls itself the "Space Kingdom of Asgardia." "Asgardia" was chosen as a reference to Asgard, one of the nine worlds of Norse mythology, the world inhabited by the gods. People were invited to register for citizenship in 2016, with the aim of Asgardia then applying to the United Nations for recognition as a nation state. In less than two days, there were over 100,000 applications; within three weeks, there were 500,000. 
After tougher verification requirements were introduced, this declined, and stood at around 210,000 in June 2017. There is no intention to actually move these members into space. Asgardia intends to apply for membership of the UN. As of March 2019, Asgardia says that it has more than 290,000 citizens and more than 1,040,000 followers around the world. Governance The Constitution of Asgardia divides governance of Asgardia into three branches: (1) a legislative branch named the "Parliament," (2) an executive branch named the "Government," and (3) a judicial branch named the "Court." Parliament The Parliament is composed of 150 nonpartisan members, each referred to as a "Member of Parliament" (MP). The Members of Parliament elect one Member to the office of "Chairman of the Parliament." The Members of Parliament also appoint the "Chairman of the Government." The Parliament has 12 permanent committees; the Chairman of the Parliament of Asgardia is Lembit Öpik. Executive branch The Head of Nation is the most senior official of the executive branch (i.e., the Government). The Head of Nation is elected to a 5-year term of office. The Head of Nation may dissolve the Parliament and may then order the holding of parliamentary elections. The Head of Nation may initiate legislative proposals and may veto acts adopted by the Parliament. The Head of Nation may issue decrees that must be obeyed by governmental bodies and by the citizens of Asgardia. 
The Head of Nation is Igor Ashurbeyli. The Chairman of the Government supervises 12 Ministers. Each Minister supervises the operation of one Government Ministry. Each of the permanent committees of Parliament monitors the operation of one Government Ministry. The Parliament may invite Ministers to attend meetings of the Parliament. Judicial branch The judicial branch includes a "Supreme Justice," who supervises the operation of four judicial panels: (1) a "constitutional" panel, (2) a "civil" panel, (3) an "administrative" panel, and (4) a "criminal" panel. The Supreme Justice is appointed by the Head of Nation. The "Justices" who serve on the judicial panels are appointed by the Parliament. Asgardia's Supreme Justice is Zhao Yun. Zhao, head of the Department of Law at The University of Hong Kong, was appointed as Asgardia's Supreme Justice on 24 June 2018 during the first parliamentary session in Vienna, where he was introduced to the elected Members of Parliament. Mayoral elections The mayoral elections took place between 1 August and 9 September 2018. Based on the results of the first stage of the mayoral elections of Asgardia, mayors of 44 cities took office from 12 October 2018. From 12 October 2018, the Head of Nation directed that elections of mayors continue until the Parliament passes the Bill "On Mayors of Asgardia." Until the Parliament has passed that Bill, elected mayors will report to the Head of Nation of Asgardia. Key people Head of Nation — Igor Ashurbeyli Chairman of Parliament — Lembit Öpik Head of the Government — Lena De Winne Supreme Justice — Zhao Yun Space activity Asgardia intends to launch a series of satellites into Earth orbit. Its first satellite was successfully launched by Orbital ATK on 12 November 2017 as part of an International Space Station resupply mission. 
It was a two-unit CubeSat, manufactured and deployed into orbit by NanoRacks, and has been named Asgardia-1. The overall goal of the mission was to demonstrate the long-term storage of data on a solid-state storage device operating in low Earth orbit. The spacecraft carried a 512-gigabyte solid-state storage device. The data stored in this device was to be periodically checked for data integrity and function. Before the launch, the data storage device was loaded with items such as family photos supplied by the first 1,500,000 members of Asgardia. After the spacecraft reached orbit, data could be uploaded or downloaded using the Globalstar satellite network. Asgardia-1 was boosted to space and then deployed by US companies on a NASA-funded mission, so the satellite falls under US jurisdiction. Asgardia intends to partner with a non-signatory to the Outer Space Treaty (OST), perhaps an African state such as Ethiopia or Kenya, in the hopes of circumventing the OST's restriction on nation-states claiming territory in outer space. The satellite was expected to have a lifetime of 5 years before its orbit decayed and it burned up on reentry. On 12 September 2022, Asgardia-1 reentered the atmosphere. A continuously updated map showing the location of Asgardia-1 in its orbit was hosted by NearSpace Launch, Inc. Asgardia-1 (NORAD satellite identification number 43049) is also tracked by Satflare. Often described as a billionaire, Ashurbeyli has said that he is currently solely responsible for funding Asgardia, and that members will not be funding the planned first satellite launch. Although the cost has not been made publicly available, NanoRacks has said that similar projects cost $700,000. The project intends to move to crowdfunding to finance itself. Sa'id Mosteshar, of the London Institute of Space Policy and Law, says this suggests that Asgardia lacks a credible business plan. A company, Asgardia AG, has been incorporated, and members can buy shares in it. 
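The periodic integrity check described for the on-board store can be sketched in a few lines; this is a purely hypothetical illustration (all names are our own, and Asgardia's actual flight software is not public): record a hash of each payload at load time, then re-hash later and compare.

```python
import hashlib

# Hypothetical sketch of a periodic data-integrity check of the kind
# described for Asgardia-1's solid-state store: hash each payload at
# load time, then re-hash later and flag any mismatch.
def fingerprint(data: bytes) -> str:
    """SHA-256 hex digest of a payload."""
    return hashlib.sha256(data).hexdigest()

# Illustrative payloads; real contents (e.g. member photos) are unknown.
payloads = {"family_photo_001.jpg": b"\x89example image bytes"}
manifest = {name: fingerprint(blob) for name, blob in payloads.items()}

def verify(payloads: dict, manifest: dict) -> list:
    """Return names of payloads whose current hash no longer matches."""
    return [n for n, blob in payloads.items()
            if fingerprint(blob) != manifest[n]]

print(verify(payloads, manifest))  # [] while the data is intact
```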
Asgardia wants to enable its founders' companies to use Asgardia's satellite network for their own services and business activities. These are to be settled via the cryptocurrency Solar and the reserve currency Lunar. Eventually, Asgardia hopes to have a colony in orbit. This will be expensive: the International Space Station cost $100bn to build, and flights to it cost over $40m per launch. Asgardia has been compared to the troubled Mars One project, which aims to establish a permanent colony on Mars, although Asgardia's organisers point out that setting up a small nation in orbit will be a lot easier than colonising distant Mars. Other proposed goals for the future include shielding the Earth from asteroids and coronal mass ejections, and a Moon base. Legal status Historical There has been at least one previous attempt to set up an independent nation in space. The Nation of Celestial Space, also known as Celestia, was formed in 1949 by James Mangan and claimed all of space. He banned atmospheric nuclear testing and issued protests to the major powers at their encroachment on his territory, but was ignored by both the powers and the UN. However, modern communications mean that Asgardia has a better ability to organise its claim and perhaps raise funds for the satellite that would give it a physical presence in outer space. Recognition and territorial claims Both UN General Assembly Resolution 1962 (XVIII) and the Outer Space Treaty (OST) of 1967 have established all of outer space as an international commons by describing it as the "province of all mankind" and, as a fundamental principle of space law, declaring that space, including the Moon and other celestial bodies, is not subject to any national sovereignty claim. Article VI of the Outer Space Treaty vests the responsibility for activities in space in States Parties, regardless of whether they are carried out by governments or non-governmental entities. 
Article VIII stipulates that the State Party to the Treaty that launches a space object shall retain jurisdiction and control over that object. According to Sa'id Mosteshar of the London Institute of Space Policy and Law: "The Outer Space Treaty... accepted by everybody says very clearly that no part of outer space can be appropriated by any state." Without self-governing territory in space where citizens are present, Mosteshar suggested that the prospect that any country would recognise Asgardia was slim. Ram Jakhu, the director of McGill University's Institute of Air and Space Law and Asgardia's legal expert, believes that Asgardia will be able to fulfil three of the four elements that the UN requires when considering whether an entity is a state: citizens, a government, and territory, the last being an inhabited spacecraft. In that situation, Jakhu considers that fulfilling the fourth element, gaining recognition by the UN member states, will be achievable, and Asgardia will then be able to apply for UN membership. The Security Council would then have to assess the application, which would also require approval from two-thirds of the members of the General Assembly. Joanne Gabrynowicz, an expert in space law and a professor at the Beijing Institute of Technology's School of Law, believes that Asgardia will have trouble attaining recognition as a nation. She says there are a "number of entities on Earth whose status as an independent nation have been a matter of dispute for a long time. It is reasonable to expect that the status of an unpopulated object that is not on Earth will be disputed." Christopher Newman, an expert in space law at the UK's University of Sunderland, highlights that Asgardia is trying to achieve a "complete re-visitation of the current space-law framework," anticipating that the project will face significant obstacles in getting UN recognition and dealing with liability issues. 
The Outer Space Treaty requires the country that sends a mission into space to be responsible for the mission, including any damage it might cause. Data security As Asgardia is involved in the storing of private data, there could be legal and ethical issues. For the moment, as the Asgardian satellite is being deployed to orbit by US companies, it will fall under US jurisdiction and data stored on the satellite will be subject to US privacy laws. Economy The ideological component of Asgardia's economy is based on two pillars. The first: in Asgardia, citizens must become owners of the monetary system. The government is simply a middleman, broker and guarantor of monetary transactions. The second: in Asgardia, every citizen must be a participant in the distribution of the Nation's profit. 'Residents' are required to pay a 100 euro or 110 US dollar fee. Legal currency The Head of Nation charged his Administration to hold a contest on the main national Earth currencies in order to determine the initial rate of the Solar, the cryptocurrency that will be used by Asgardians. He also charged the Government to introduce a Bill on the National Currency of Asgardia to Parliament. The second Digital Parliamentary Session (the third Parliamentary session) of Asgardia, which took place on 10–12 January 2019, approved the Act of National Currency and Basic Principles of the Economic and Financial System of Asgardia. Parliament voted in favour of tasking the Government with drafting legislation on the economic system and the national currency of Asgardia by the next Parliamentary session. The financial component of Asgardia's economy is based upon its two national currencies. First, the Solar. Because the sun shines for all on Earth, the Solar is to become a universal payment currency, convertible on exchanges not just into the hard currencies of earthly nations but also into legitimate cryptocurrencies. 
Second, the Lunar, which will be an exclusive currency just for the citizens of Asgardia. The Lunar will be an internal financial and monetary asset that confirms the citizenship of Asgardia. Like any asset, it is subject to exchange, sale, loan, gifting, inheritance and more. It is also listed on the exchanges. In January 2019, Asgardia chose its basket of currencies by vote. Using the results of this vote, the Ministry of Finance and its counterpart, the parliamentary Finance Committee, will analyse and examine how the Solar may be freely exchanged against those currencies in open markets and at what future exchange rates. The following 12 currencies have been selected: US Dollar; Euro; British Pound; Japanese Yen; Canadian Dollar; Swiss Franc; Hong Kong Dollar; Mexican Peso; Australian Dollar; Singapore Dollar; Norwegian Krone; Swedish Krona. Economic forum On 26–28 October 2018, the First Economic Forum of Asgardia was held in Nice, France. The Forum was attended by representatives from the professional community, including economists, finance professionals, and specialists in the development of currency systems, cryptocurrencies and investment tools from Austria, Belgium, Denmark, India, Germany, the Netherlands, Russia, South Africa, Turkey, the United States, the UK and other countries. The speakers presented models of Asgardia's financial system and economy, monetary-system models, and issues involved in creating a balanced financial and economic system for Asgardia. A memorandum giving a general overview and outlining the next steps in developing Asgardia's economic system was adopted at the Forum. Among other things, it was decided to present Asgardia's two-currency model at the World Economic Forum in Davos in January 2019 and to develop draft legislation on the national currencies of Asgardia for introduction to the Parliament of Asgardia. 
On 22–25 January 2019, the Asgardian delegation attended the World Economic Forum in Davos, Switzerland. The Asgardian representatives participated in two sessions, one economic and one cultural, at the Caspian Week Conference 2019. The Caspian Week Conference is a meeting of global leaders, visionaries and experts within the Davos Forum. The Conference was held for the third time since 2017. References External links Space advocacy organizations Space colonization Organizations established in 2016 2017 in outer space
Asgardia
[ "Astronomy" ]
3,747
[ "Space advocacy organizations", "Astronomy organizations" ]
67,572,697
https://en.wikipedia.org/wiki/Anke%20Weidenkaff
Anke Weidenkaff (born December 27, 1966, in Hanover, Germany) is a German-Swiss chemist and materials scientist. Since 2018, she has been head of the Materials & Resources Group at the Faculty of Materials Science at Technical University Darmstadt and director of the Fraunhofer Research Institution for Materials Recycling and Resource Strategies (IWKS) in Hanau (Hesse) and Alzenau (Bavaria). Life Weidenkaff was born in Hanover, Germany, and studied chemistry at the University of Hamburg. She received her PhD in 2000 from ETH Zurich in the Department of Chemistry. In 2006, she received the Venia Legendi for Solid State Chemistry and Materials Science from the University of Augsburg and became a section head at the Swiss Federal Laboratories for Materials Science and Technology (Empa) and an associate professor at the University of Bern. From 2013 to 2018, she was director of the Institute of Materials Science at the University of Stuttgart, where she chaired the Department of Chemical Materials Synthesis. Since October 1, 2018, Weidenkaff has been director of the Fraunhofer Research Institution for Materials Recycling and Resource Strategies. Weidenkaff is also a professor at Technical University Darmstadt in the field of materials science and resource management. From 2016 to 2019, she was president of the European Thermoelectric Society (ETS), of which she had been a board member since 2007. She is an elected member of the European Materials Research Society's (E-MRS) Executive Committee and was chair of the 2019 E-MRS Spring Meeting. Since 2020, Anke Weidenkaff has been a member of the German Advisory Council on Global Change (WBGU). Anke Weidenkaff was elected as a member of the German National Academy of Sciences Leopoldina and the German Academy of Science and Engineering in 2023. 
Research Weidenkaff's main areas of research and expertise are materials science and resource strategies, including the development, synthesis chemistry and characterization of substitute materials for energy conversion and storage. Building on scientific knowledge of solid-state chemistry, her current work focuses on materials science, specifically the development of regenerative, sustainable materials and next-generation process technologies for fast, efficiently closed materials cycles. Anke Weidenkaff and her team are currently working on technologies for the production of (green) hydrogen, including photoelectrochemical water splitting, the production of carbon nanotubes using microwave plasma synthesis for carbon storage, and sustainable perovskite materials. She is also involved in the development of thermoelectrics, electroceramics and ceramic membranes. Together with the Energy Materials Department of Fraunhofer IWKS, she conducts research on sustainable materials and recycling technologies for batteries and fuel cells. Another focus of her work is "Green ICT", the development of sustainable materials and processes for information and communication technology. International recognition and activities 2008: Visiting professor, Case Western Reserve University (CWRU) and visiting scientist, NASA Glenn Research Center, Cleveland, USA 2011: Kavli Foundation Lectureship Award 2012–2013: Editor-in-Chief and Member of the Editorial Board of “Energy Quarterly”; Member of the Advisory Board of the MRS Book Series on Energy and Sustainability 2015–2017: Member of the Board of Directors, Materials Research Society (MRS) 2016–2019: President of the European Thermoelectric Society (ETS) since 2020: Member of the German Advisory Council on Global Change (WBGU) 2022: Karl W. 
Böer Renewable Energy Mid-Career Award since 2023: Member of the German National Academy of Sciences Leopoldina since 2023: Member of acatech, the German National Academy of Science and Engineering. References External links Women chemists Materials scientists and engineers Fraunhofer Society Academic staff of Technische Universität Darmstadt Members of the German National Academy of Sciences Leopoldina 1966 births Living people University of Hamburg alumni ETH Zurich alumni
Anke Weidenkaff
[ "Materials_science", "Engineering" ]
800
[ "Materials scientists and engineers", "Materials science" ]
70,490,281
https://en.wikipedia.org/wiki/Moldable%20wood
Moldable wood is a strong and flexible cellulose-based material. Moldable wood can be folded into different shapes without breaking or snapping. The patented synthesis is based on the deconstruction and softening of the wood's lignin, then re-swelling the material in a rapid "water-shock" process that produces a wrinkled cell-wall structure. The result of this unique structure is a flexible wood material that can be molded or folded, with the final shape locked in place by simple air-drying. This discovery broadens the potential applications of wood as a sustainable structural material. The research, a collaborative effort between the University of Maryland, Yale University, Ohio State University, the USDA Forest Service, the University of Bristol, the University of North Texas, ETH Zurich, and the Center for Materials Innovation, was published on the cover of Science in October 2021. References Materials science Solid mechanics Fracture mechanics
Moldable wood
[ "Physics", "Materials_science", "Engineering" ]
187
[ "Structural engineering", "Materials science stubs", "Solid mechanics", "Applied and interdisciplinary physics", "Fracture mechanics", "Materials science", "Mechanics", "nan", "Materials degradation" ]
47,787,936
https://en.wikipedia.org/wiki/Schema%20for%20horizontal%20dials
A schema for horizontal dials is a set of instructions used to construct horizontal sundials using compass-and-straightedge construction techniques, which were widely used in Europe from the late fifteenth century to the late nineteenth century. The common horizontal sundial is a geometric projection of an equatorial sundial onto a horizontal plane. The special properties of the polar-pointing gnomon (axial gnomon) were first known to the Moorish astronomer Abdul Hassan Ali in the early thirteenth century, and this led the way to the dial plates with which we are familiar, dial plates where the style and hour lines have a common root. Through the centuries, artisans have used different methods to mark up the hour lines of sundials using the methods that were familiar to them; in addition, the topic has fascinated mathematicians and become a topic of study. Graphical projection was once commonly taught, though it has been superseded by trigonometry, logarithms, slide rules and computers, which made arithmetical calculations increasingly trivial. Graphical projection was once the mainstream method for laying out a sundial but has been sidelined and is now only of academic interest. The first known document in English describing a schema for graphical projection was published in Scotland in 1440, leading to a series of distinct schemata for horizontal dials, each with characteristics that suited the target latitude and construction method of the time. Context The art of sundial design is to produce a dial that accurately displays local time. Sundial designers have also been fascinated by the mathematics of the dial and possible new ways of displaying the information. Modern dialling started in the tenth century, when Arab astronomers made the great discovery that a gnomon parallel to the Earth's axis will produce sundials whose hour lines show equal or legal hours on any day of the year: the dial of Ibn al-Shatir in the Umayyad Mosque in Damascus is the oldest dial of this type. 
Dials of this type appeared in Austria and Germany in the 1440s. A dial plate can be laid out by a pragmatic approach: observing and marking a shadow at regular intervals throughout the day, on each day of the year. If the latitude is known, the dial plate can be laid out using geometrical construction techniques which rely on projective geometry, or by calculation from the known formulas using trigonometric tables, logarithms, slide rules or, more recently, computers or mobile phones. Linear algebra has provided a useful language to describe the transformations. A sundial schema uses a compass and a straight edge first to derive the essential angles for that latitude, and then to use these to draw the hour lines on the dial plate. In modern terminology, this means that graphical techniques were used to derive sin φ and, from it, the hour-line angle θ given by tan θ = sin φ · tan t, where t is the hour angle (15° for each hour from noon). Basic calculation Using a large sheet of paper: Starting at the bottom, a horizontal line is drawn, and a vertical one up the centre. Where they cross becomes the origin O, the foot of the gnomon. A horizontal line is drawn which fixes the size of the dial. Where it crosses the centre line is an important construction point F. A construction line is drawn upwards from O at the angle of latitude. Using a square, a line is drawn from F to the construction line so that they cross at right angles. That point, E, is an important construction point. To be precise, it is the line FE that is important, as its length, taking OF as the unit, gives sin φ. Using compasses or dividers, the length FE is copied upwards along the centre line from F. The new construction point is called G. The construction lines and FE are erased. Such geometric constructions were well known and remained part of the high school (UK grammar school) curriculum until the New Maths revolution in the 1970s. The schema shown above, used by Dürer in 1525 (and derived from an earlier work of 1440), is still used today.
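The construction above encodes the standard horizontal-dial relation tan θ = sin φ · tan t, where t is the hour angle (15° for each hour from noon). As a quick modern illustration, not part of any historical schema, the hour-line angles can be computed directly; the function name and the sample latitude are arbitrary choices:

```python
import math

def hour_line_angle(latitude_deg, hours_from_noon):
    """Angle of an hour line from the noon line on a horizontal dial,
    from tan(theta) = sin(latitude) * tan(15 degrees per hour)."""
    phi = math.radians(latitude_deg)
    t = math.radians(15.0 * hours_from_noon)
    return math.degrees(math.atan(math.sin(phi) * math.tan(t)))

# Hour-line angles for 1 to 5 hours after noon at latitude 52 N
angles = [round(hour_line_angle(52.0, h), 1) for h in range(1, 6)]
```

The morning lines are mirror images of the afternoon ones, which is why the geometric schemas only ever construct one side of the dial and reflect it.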
The simpler schemas were more suitable for dials designed for the lower latitudes, requiring only a narrow sheet of paper for the construction, than for those intended for the higher latitudes. This prompted the quest for other constructions. Horizontal dials The first part of the process is common to many methods. It establishes a point on the north–south line at a distance representing sin φ from the line through F. Early Scottish method (1440) Dürer (1525) Rohr (1965) Start with the basic method shown above. From G a series of lines, 15° apart, are drawn, long enough that they cross the line through F. These mark the hour points 1, 2, 3, 4, 5 and 7, 8, 9, 10, 11. The centre of the dial is at the bottom, point O. The line drawn from each of these hour points to O will be an hour line on the finished dial. The significant problem is the width of paper needed in the higher latitudes. Benedetti (1574) Giambattista Benedetti, an impoverished nobleman, worked as a mathematician at the court of Savoy. His book describing this method, De gnomonum umbrarumque solarium usu, was published in 1574. It describes a method for displaying the legal hours, that is, equal hours as we use today, while most people still used unequal hours, which divided the hours of daylight into 12 equal parts whose length changed as the year progressed. Benedetti's method divides the quadrant into 15° segments. Two constructions are made: a parallel horizontal line that defines the tan t distances, and a gnomonic polar line GT which represents sin φ. Draw a quadrant GRB, with 15° segments. GR is horizontal. A parallel horizontal line is drawn from PE, and ticks are made where it bisects the 15° rays. GX is the latitude. T is the crossing point with PE. GTE is the gnomonic triangle. The length GT is copied to the bottom of E, giving the point F. The hour lines are drawn from F, and the dial is complete. Benedetti included instructions for drawing a point gnomon so unequal hours could be plotted.
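Benedetti's "legal" hours are the equal hours used today, while the older unequal hours divide the actual daylight into twelve parts whose length varies with the season. A rough sketch of that difference, using the standard sunrise half-arc relation cos H0 = −tan φ · tan δ (my own illustration, not taken from Benedetti):

```python
import math

def unequal_hour_minutes(latitude_deg, declination_deg):
    """Length in minutes of one unequal (daylight/12) hour, using the
    sunrise half-arc H0 from cos(H0) = -tan(phi) * tan(delta)."""
    phi = math.radians(latitude_deg)
    delta = math.radians(declination_deg)
    H0 = math.degrees(math.acos(-math.tan(phi) * math.tan(delta)))
    daylight_minutes = 2.0 * H0 / 15.0 * 60.0  # 15 degrees of hour angle = 1 hour
    return daylight_minutes / 12.0

# At the equinox (declination 0) an unequal hour is exactly 60 minutes;
# at the summer solstice (declination ~23.44 degrees) it is longer.
equinox = unequal_hour_minutes(45.0, 0.0)
solstice = unequal_hour_minutes(45.0, 23.44)
```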
Clavius method (1586) (Fabica et usus instrumenti ad horologiorum descriptionem.) Rome, Italy. The Clavius method looks at a quarter of the dial. It views the horizontal plane and the plane perpendicular to the polar axis as two rectangles hinged around the common top edge of both dials. The polar axis will be at φ degrees to the horizontal, and the hour lines will be equispaced at 15° on the polar plane, which is an equatorial dial. Hour points on the polar plane connect to the matching points on the horizontal plane, and the horizontal hour lines are plotted to the origin. Draw the gnomonic triangle, lying on its hypotenuse. On the small side, draw an (equatorial) square, with 15° hour markings. The dial plate is constructed with compasses, taking its sizes from the triangle. The hour lines 12, 3, and 6 are known. The hour lines 1 and 2 are taken from the side of the square. A diagonal is taken from 12 to 6, and lines parallel to this are drawn through 1 and 2, giving 5 and 4. The morning dial is a reflection of this. Stirrup's method (1652) From G a series of lines, 15° apart, are drawn, long enough that they cross the line through F. These mark the hour points 9, 10, 11, 12, 1, 2, 3. The centre of the dial is at the bottom, point O. The line drawn from each of these hour points to O will be an hour line on the finished dial. Bettini method (1660) The Jesuit Mario Bettini penned a method which was posthumously published in the book Recreationum Mathematicarum Apiaria Novissima (1660). Draw the gnomonic triangle with the hypotenuse against the meridian line, and φ at the bottom, C. Call the other point M, and the right angle G. A horizontal line is drawn through M; this is the equinoctial. A circle centred on M with radius MG is drawn. G2 and G3 are the intersections of the circle and the meridian. In the top quadrants, points are marked every 30°. Two are named P, Q. Construction lines are drawn from G2 and G3 through P and Q; the intersections with the equinoctial are marked.
To finish, the hour lines are drawn through these points from C, and the dial is squared off. Leybourn (1669) William Leybourn published his "Art of Dialling" in 1669, and with it a six-stage method. His description relies heavily on the term line of chords, for which a modern diallist substitutes a protractor. The line of chords was a scale found on the sector, which was used in conjunction with a set of dividers or compasses. It was still used by navigators up to the end of the 19th century. Draw a circle, and its two cardinal diameters: E–W, and S–N (top to bottom). O is their crossing point, or origin. Using a scale of chords or a protractor, lay off two lines, "0a" that is 52° from OS, and "0b" that is 52° from OW (they will be at right angles). The points "a" and "b" are important. With a straight edge, draw a line connecting E with "a"; it cuts SN (the meridian line) at P, which is called the pole of the world. Now connect E with "b"; the point where it cuts SN is called AE. This point is important as it is where the meridian crosses the equinoctial circle. The points E, AE, and W lie on the equinoctial circle. The next task is to use this information to locate the centre and to draw that circle. Use a construction line to join AE and W. At its centre point, raise a line at right angles. Where it cuts SN (the meridian) will be C, the centre of the equinoctial circle. Use C to draw an arc from E to W; it will pass through AE. There is now a semicircle passing through E and W, and the equinoctial arc passing through E and W. Divide the semicircle into 12 equal parts, i.e. 15° angles, and mark each with a construction point. A ruler joins O with the points on the semicircle. These lines cut the equinoctial arc, creating a series of unequally spaced markers. A ruler laid from P (the pole of the world) takes a line from each of these markers back over the semicircle. Where it cuts the semicircle will be an "hour point"; these hour points are unequally spaced.
The hour lines are drawn from each of these "hour points" to O, the origin. The origin is the foot of the style, which is cut at 52°. Ozanam's method (1673) Mayall (1938) This method requires a far smaller piece of paper, a great advantage for the higher latitudes. From G a series of lines, 15° apart, are drawn, long enough that they cross the line through F. These mark the hour points 9, 10, 11, 12, 1, 2, 3, which lie at distances representing sin φ · tan t from F. The centre of the dial is at the bottom, point O. The line drawn from each of these hour points to O will be an hour line on the finished dial. The lines through 9 and 3 are extended to the WE line, and a line is dropped orthogonally from 9 and from 3 to the WE line; call the crossing points W' and E'. From W' and E' two more lines are drawn 15° apart; these cut the verticals, creating the hour points 7, 8 and 4, 5. Lines taken from O to these hour points are the hour lines on the final dial. Encyclopedia method (1771) This method uses the properties of chords to establish a distance in the top quadrant, and then transfers this distance into the bottom quadrant so that sin t · sin φ is established. Again, a transfer of this measure to the chords in the top quadrant. The final lines establish the relation tan θ = sin φ · tan t. This is then transferred by symmetry to all quadrants. It was used in the Encyclopædia Britannica from the First Edition (1771) to the Sixth Edition (1823). The gnomon is drawn first against the north–south line. In doing so, a diameter at φ degrees to the vertical is drawn; its reflection will also be needed. The circumference is marked off at 15° intervals in the top quadrants. Chords parallel to the horizontal are drawn (the length of these chords will be sin t, where t is the hour angle). The measurement of each chord is transferred to form a scale along the lower radiuses. When joined, these points form a series of parallel lines that are sin t · sin φ in length. These measurements are transferred up to the chord. The final hour lines are drawn from the origins through these crossing points.
de Celles (1760) (1790) Waugh method (1973) The Dom François Bedos de Celles method (1760), otherwise known as the Waugh method (1973). From G a series of lines, 15° apart, are drawn, long enough that they cross the line through F. These mark the hour points 9, 10, 11, 12, 1, 2, 3, taking just three hours either side of noon; they represent the distances sin φ · tan t. The centre of the dial is at the bottom, point O. The line drawn from each of these hour points to O will be an hour line on the finished dial. If the paper is large enough, the method above works from 7 until 12, and 12 until 5, and the values before and after 6 are found through symmetry. However, there is another way of marking up 7 and 8, and 4 and 5. Call the point where 3 crosses the line R, and drop a line at right angles to the base line. Call that point W. Use a construction line to join W and F. Waugh calls the crossing points with the hour lines K, L, M. Using compasses or dividers, add two more points to this line, N and P, so that the distances MN = ML, and MP = MK. The missing hour lines are drawn from O through N and through P. The construction lines are erased. Nicholson's method (1825) This method first appeared in Peter Nicholson's A Popular Course of Pure and Mixed Mathematics in 1825. It was copied by School World in June 1903, and then in Kenneth Lynch's Sundial and Spheres (1971). It starts by drawing the well-known triangle, and takes the vertices to draw two concentric circles of radii OB and AB, where OB = AB · sin φ. The 15° lines are drawn, intersecting these circles. Lines are taken horizontally and vertically from these circles, and their intersection point (OB sin t, AB cos t) is on the hour line. That is, tan κ = OB sin t / (AB cos t), which resolves to tan κ = sin φ · tan t. Draw the NS line, and the EW line crossing at the origin O. At a convenient point in the first quadrant, join the axes with a line set at the target angle. This forms the basic triangle OAB. Set the compasses at length OB and inscribe a circle.
Set the compasses on AB and inscribe a concentric circle. On both of these circles mark out the 15° angles. Taking the lines vertically from the inner circle, and horizontally from the outer circle, mark each of the intersections. These are on the hour lines. Connect the intersection points to the origin. Foster Serle Dialling Scales (1638) A right angle is drawn on the dial face and the latitude scale is laid against the x-axis. The target latitude point is marked across onto the dial face. The hour scale is placed from this point to the noon line (conventionally, the zero point is on the noon line). Each of the hour points is copied over to the dial face, and this procedure is repeated, giving the hours on both sides of noon. A straight edge is used to connect these points to the origin, thus drawing the hour lines for that location. A vertical line from the target latitude point and a horizontal line through the noon point will bisect at the three-hour (9 am–3 pm) marker. The style will be at the same angle as the latitude. Saphea (As-Saphiah) This was an early and convenient method to use if one had access to an astrolabe, as many astrologers and mathematicians of the time did. The method involved copying the projections of the celestial sphere onto a plane surface. A vertical line was drawn, with a line at the angle of the latitude drawn from the intersection of the vertical with the celestial sphere. See also London dial Schema for vertical declining dials Notes References Citations Sources Sundials Clocks Horology Geometry Projective geometry Compass and straightedge constructions
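Nicholson's construction, described above, lends itself to a numerical check. Assuming the two radii satisfy OB = AB · sin φ (consistent with the triangle OAB and the stated result), the intersection (OB sin t, AB cos t) falls exactly on the hour line with tan κ = sin φ · tan t. The sketch below is my own illustration, not part of Nicholson's text:

```python
import math

def nicholson_point(phi_deg, t_deg, AB=1.0):
    """Intersection used in Nicholson's construction: x from the inner
    circle (radius OB = AB*sin(phi)), y from the outer circle (radius AB)."""
    phi = math.radians(phi_deg)
    t = math.radians(t_deg)
    OB = AB * math.sin(phi)
    return OB * math.sin(t), AB * math.cos(t)

# kappa is measured from the noon (y) axis; it should equal
# atan(sin(phi) * tan(t)) for any hour angle t.
x, y = nicholson_point(52.0, 30.0)
kappa = math.degrees(math.atan2(x, y))
expected = math.degrees(math.atan(math.sin(math.radians(52.0)) * math.tan(math.radians(30.0))))
```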
Schema for horizontal dials
[ "Physics", "Mathematics", "Technology", "Engineering" ]
3,508
[ "Machines", "Physical quantities", "Time", "Horology", "Euclidean plane geometry", "Clocks", "Measuring instruments", "Physical systems", "Geometry", "Straightedge and compass constructions", "Spacetime", "Planes (geometry)" ]
77,694,206
https://en.wikipedia.org/wiki/Ammonium%20hexafluorovanadate
Ammonium hexafluorovanadate is an inorganic chemical compound with the chemical formula . Synthesis The compound can be prepared by fusion of ammonium hydrogen fluoride (ammonium bifluoride) with vanadium trioxide. Physical properties Ammonium hexafluorovanadate forms a powder. It is toxic. Chemical properties The compound decomposes to vanadium pentoxide if heated in open air: Uses Ammonium hexafluorovanadate is typically used as a catalyst at temperatures below 400 °C. References Fluoro complexes Vanadates Ammonium compounds Fluorometallates Hexafluorides
Ammonium hexafluorovanadate
[ "Chemistry" ]
147
[ "Ammonium compounds", "Salts" ]
77,694,514
https://en.wikipedia.org/wiki/Ammonium%20hexafluorochromate
Ammonium hexafluorochromate is an inorganic chemical compound with the chemical formula . Physical properties Ammonium hexafluorochromate forms crystals of the cubic system, space group F43m. Chemical properties When heated, ammonium hexafluorochromate decomposes directly to pure chromium(III) fluoride. References Fluoro complexes Chromates Ammonium compounds Fluorometallates Hexafluorides
Ammonium hexafluorochromate
[ "Chemistry" ]
99
[ "Chromates", "Ammonium compounds", "Oxidizing agents", "Salts" ]
77,694,615
https://en.wikipedia.org/wiki/Isobutyric%20anhydride
Isobutyric anhydride is an organic compound with the formula . It is an acyclic carboxylic anhydride of isobutyric acid. It is classified as an organic acid anhydride, being derived from the dehydration of isobutyric acid. It is a colorless liquid with a strong, pungent odor. Isobutyric anhydride is a reagent in the production of the isobutyrate ester of cyclohexanone oxime. Applications Isobutyric anhydride is used as an acylating agent in organic synthesis. Its primary application is in the production of esters, such as the isobutyrate ester of cyclohexanone oxime. Isobutyric anhydride is used in the synthesis of various dyes. It is also used in the production of cellulose derivatives, such as cellulose isobutyrate and cellulose acetate isobutyrate. Another application of isobutyric anhydride is in the synthesis of various chemical derivatives. For example, it is used to produce 4-O-isobutyryl derivatives of monosaccharides. References Carboxylic anhydrides Reagents for organic chemistry
Isobutyric anhydride
[ "Chemistry" ]
263
[ "Reagents for organic chemistry" ]
77,696,593
https://en.wikipedia.org/wiki/Association%20for%20Plant%20Breeding%20for%20the%20Benefit%20of%20Society
The Association for Plant Breeding for the Benefit of Society (APBREBES) is an international non-governmental organization founded in 2009 as a network to advocate on issues related to plant breeders' rights, peasants' and farmers' rights, food sovereignty, and the sustainable management of agricultural biodiversity. APBREBES has the status of observer to the International Union for the Protection of New Varieties of Plants (UPOV). Background In 2009, seven NGOs joined to create APBREBES: the Center for International Environmental Law, Community Technology Development Trust, Development Fund (Norway), Local Initiatives for Biodiversity, Research and Development, Public Eye, Southeast Asia Regional Initiative for Community Empowerment, and Third World Network. The association's main focuses are the International Treaty on Plant Genetic Resources for Food and Agriculture (ITPGRFA) and the Convention on Biological Diversity. The network is also an important critic of the implementation of the UPOV system of plant breeders' rights. APBREBES emphasises equitable access to plant genetic resources and works to ensure that legal frameworks respect human rights and environmental sustainability. The association is active mostly at UPOV, although it also occasionally works at the national or regional level, such as in Africa and elsewhere. In 2015, APBREBES developed guidelines for alternatives to the UPOV system for developing countries' plant variety protection laws. References Bioethics Botany Plant breeding organizations Plant genetics Biological patent law Intellectual property organizations Intergovernmental organizations established by treaty Organizations established in 2009 Organisations based in Geneva Seed associations Peasant food Food sovereignty Food security International law organizations
Association for Plant Breeding for the Benefit of Society
[ "Chemistry", "Technology", "Biology" ]
313
[ "Bioethics", "Biotechnology law", "Molecular biology", "Plants", "Plant genetics", "Biological patent law", "Botany", "Ethics of science and technology", "Plant breeding" ]
77,703,177
https://en.wikipedia.org/wiki/Joining%20technology
Joining technology is used in any type of mechanical joint, which is the arrangement formed by two or more elements: typically, two physical parts and a joining element. Mechanical joining systems make it possible to form an assembly of several pieces from the individual parts and the corresponding joining elements. There are fixed assemblies and removable assemblies. Most common utensils (tools, furniture, weapons, clothing, footwear, vehicles, ...) are made up of assemblies of parts. The study of mechanical joints is essential to ensure the proper functioning of such assemblies. Types of unions Metallic materials Riveted joint Bolted joint Pin joint Folded joints: Sheet metal folding joint Welded joints Soldering Brazing Spot welding Wood and "nailable" materials Wood The joints between pieces of wood (natural or processed), between materials that behave similarly for joining purposes (for example, plastic foam boards), and between combined materials can take many forms. If the parts to be joined include (in addition to wood) metals, ceramic materials or polymers, the joints can be more elaborate. Joints Joints of two pieces of wood. A mortise determines the shape of the ends of the two pieces of wood to be joined. Some traditional joints are listed below: dovetail joint pocket-hole joinery Biscuit joint dowel (carpentry) tongue and groove Butt joint; e.g. traditional violins Beveled joint; e.g. two pieces of plywood Joining elements fastener nails copper bronze, brass, aluminium and others iron screws Self-tapping screw Bolt (fastener) staples threaded inserts glues and adhesives female and male (forced assembly) Tools with wooden handle Often the handle is force-fitted and secured with a wedge or similar.
hammers axes scythe chisel plane adzes Others barrels (wooden staves and metal hoops) Polymers Glues and adhesives Flexible materials knots sewn boats ropes Self unions Some manufactured items are made from a raw material using self unions, that is, joints made without any additional joining material. basketry reed mats fabric felt knitted fabrics Braided leather For example: in a braided leather whip, all joints are made without any kind of sewing thread or adhesive. Fishing nets Metallic nets Historical examples Neolithic The replacement of cut stone tools by polished stone tools is not the most important innovation, although it is the one that gives the period its name. The diversification of tasks that needed to be done (cutting down trees, sowing seeds, harvesting cereals, milling the grain...) explains why the first farmers had to create new specific tools for each function. Most utensils were made of flint with a wooden handle; others were made of bone and animal horn. They made pottery to store food, fabric for clothing with wool and linen, musical instruments... Bronze Age Iron Age Phoenicians and Carthaginians The mortise and tenon coupling, with a double mortise and loose tenon, was used to join the planks of the ancient Greek ships. This assembly was fixed with a wooden peg on each side. The construction system of the large ships of antiquity (the joining of the hull planks with pegged tenons) was of Phoenician origin. The Romans called these "Phoenician joints" ("coagmenta punicana", in Latin plural).
Ancient Egypt The images show three material assemblies representing three mechanical joints and the corresponding joining elements. The first example, a solar boat, recalls the sewn joints of the wooden pieces that make up the boat's hull. In this particular case the joints were reinforced with mortise and tenon fittings. The second example is that of the wheels of a war chariot. The hub of a wheel was formed by the union of the six "vertices" of six pieces of wood, each bent at an angle, so that each spoke was formed by the union of two arms of contiguous angular pieces. The third example is based on the funerary mask of Tutankhamun and shows a kind of soft soldering of metals. Ancient Greece Classical Greek culture offers many examples of assemblies made up of pieces mechanically joined together. The following assemblies are presented in no particular order: a hoplite spear, a hoplite shield, a mechanical system for chariot racing, the Antikythera mechanism, and war machines in general. In figure 1 a Greek spear can be seen, made up of three parts: the tip (of bronze or steel), the shaft (of ash or a similar wood) and the butt-spike (of bronze or steel). This assembly involves two unions. Viking Era The ships of the Vikings, the drakkars, had (almost all) clinker-built hulls. The carvel system of juxtaposed planks, laid edge to edge, was the most popular on the Mediterranean coast; boats of many traditional Mediterranean types were built to this arrangement. The clinker method (in which each plank overlaps the one below it) was typical of the Atlantic coasts; an example would be the ships of the Vikings, the drakkars. The method of sewing the planks was followed in various parts of the world, with examples in the Nordic countries and on the coasts of the Indian Ocean. The union of two planks in a drakkar was secured by means of iron rivets (nails with a domed head on the outside and a clenched point on the inside). Watertightness was obtained with moss or wool impregnated with resin or pitch.
Patents Since about the fifteenth century, joining technologies have been the subject of patents and similar instruments. Here follows a small, random sample, arranged chronologically. The listed patents include assembly tools for mounting or tightening fasteners. 1891 The Swedish company Bahco attributes an improved design of the adjustable wrench, in 1891 or 1892, to the Swedish inventor Johan Petter Johansson, who received a patent in 1892. 1909 Allen screws. 1944 Blind rivets. 1981 Pozidriv screws. References External links Konstruktionsatlas (Maschinenbau) Mechanics Soldering
Joining technology
[ "Physics", "Engineering" ]
1,329
[ "Mechanics", "Mechanical engineering" ]
77,703,621
https://en.wikipedia.org/wiki/Triprene
Triprene is an insecticide that is no longer in use. It is an insect growth regulator introduced by Zoecon Corporation (later part of Sandoz AG) under the "Altorick" trademark, registered in 1974 and not renewed, expiring in 1980. The EPA records no registration, now or in the past. Triprene is nontoxic to mammals, non-carcinogenic, not a human endocrine disruptor, and not neurotoxic. It may be moderately toxic to fish. Triprene is a juvenile hormone mimic. It disrupts insects' development by endocrine disruption, causing incomplete pupation and sterile adult insects. Effectiveness Triprene was tested against the similar compounds kinoprene and hydroprene. Kinoprene was the most effective against the longtailed mealybug and the solanum mealybug; hydroprene and triprene both needed multiple applications. All three controlled coffee brown scale. References External links Insecticides Ethers Thioesters
Triprene
[ "Chemistry" ]
207
[ "Organic compounds", "Thioesters", "Functional groups", "Ethers" ]
77,710,834
https://en.wikipedia.org/wiki/Transition%20metal%20perchlorate%20complexes
Transition metal perchlorate complexes are coordination complexes with one or more perchlorate ligands. Perchlorate can bind to metals through one, two, three, or all four oxygen atoms. Usually, however, perchlorate is a counterion, not a ligand. Homoleptic complexes Homoleptic complexes, i.e. complexes where all the ligands are the same (in this case perchlorate), are of fundamental interest because of their simple stoichiometries. Several anhydrous metal diperchlorate complexes are known, but most are not molecular (and hence not complexes). For example, many compounds with the formula M(ClO4)2 are coordination polymers (M = Mn, Fe, Co, Ni, Cu). An exception to this pattern is palladium(II) perchlorate, which is a square planar complex consisting of a pair of bidentate perchlorate ligands. Furthermore, anhydrous copper(II) perchlorate is sublimable, which implies the existence of molecular Cu(ClO4)2. Titanium(IV) perchlorate and zirconium(IV) perchlorate are molecular, featuring four bidentate perchlorate ligands. They are volatile. Mixed ligand complexes More common than homoleptic complexes are those with two or more types of ligands. A classic case is the dicationic complex pentaamminecobalt(III) perchlorate, which had resisted formation by conventional substitution reactions. It was prepared by oxidation of the azide complex: Another mixed-ligand complex is the perchlorate complex of the ferric derivative of octaethylporphyrin. Perchlorate as a counterion Being the conjugate base of the strongly acidic perchloric acid, perchlorate is very weakly basic. It is more commonly encountered as a counterion in coordination chemistry. Illustrative of its low basicity, water outcompetes perchlorate as a ligand for metal ions, as indicated by the multitude of aquo complexes with noncoordinated perchlorate.
Ferrous perchlorate, cobalt(II) perchlorate, chromium(III) perchlorate, manganese(II) perchlorate, nickel(II) perchlorate, and copper(II) perchlorate are commonly encountered as their hexaaquo complexes. Synthesis The preparation of perchlorate complexes can be challenging because perchlorate is a weakly coordinating anion. Chlorine trioxide is an important precursor to anhydrous perchlorate complexes. It serves as a source of the perchlorate ligand. It reacts with vanadium pentoxide () to give and . Hydrated mercury and cadmium perchlorates can be dehydrated with chlorine trioxide, affording the anhydrous compounds. In some cases, chlorine trioxide serves both as an oxidant and as a dehydrating agent: Silver perchlorate, which has some solubility in noncoordinating solvents, reacts with some metal chlorides to give the corresponding perchlorate complex. Reactions Anhydrous perchlorate complexes are susceptible to hydrolysis: Upon heating, perchlorate complexes yield oxides, evolving chlorine oxides in the process. For example, thermolysis of titanium perchlorate gives TiO2, ClO2, and O2. The titanyl species TiO(ClO4)2 is an intermediate in this decomposition: Ti(ClO4)4 → TiO2 + 4 ClO2 + 3 O2 ΔH = Safety Perchlorate complexes and the reagents used to prepare them are often dangerously explosive, intrinsically and especially in contact with organic compounds. References Ligands Coordination chemistry Perchlorates
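The thermolysis equation above can be checked by simple atom bookkeeping; the small sketch below (my own illustration, not from the source) tallies each element on both sides:

```python
# Element counts for Ti(ClO4)4 -> TiO2 + 4 ClO2 + 3 O2
lhs = {"Ti": 1, "Cl": 4, "O": 4 * 4}              # one Ti and four ClO4 groups
rhs = {"Ti": 1, "Cl": 4, "O": 2 + 4 * 2 + 3 * 2}  # TiO2, 4 ClO2 and 3 O2
balanced = lhs == rhs  # 16 oxygen atoms on each side
```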
Transition metal perchlorate complexes
[ "Chemistry" ]
766
[ "Ligands", "Coordination chemistry", "Perchlorates", "Salts" ]
64,598,656
https://en.wikipedia.org/wiki/Fabunan%20Antiviral%20Injection
The Fabunan Antiviral Injection (FAI) is a patented medicine administered to patients by the US-based Filipino doctors Ruben and Willie Fabunan, who claimed that it can treat dengue fever, chikungunya, dog bite, snakebite, and HIV/AIDS. Formulation Fabunan contains procaine hydrochloride, a water-soluble ester anesthetic, and dexamethasone sodium phosphate, a corticosteroid with well-known anti-inflammatory and immunosuppressant properties. The solution is intended to be administered as an intramuscular injection. Validity of claims The patent application cites six case studies for conditions such as dengue, dengue hemorrhagic fever and AIDS, all conducted at the Fabunan Medical Clinic in Burgos. To date, no registered clinical trials of the Fabunan Antiviral Injection have been performed to validate the Fabunans' claims. COVID-19 claims Claims promoted on social media that it can cure COVID-19 are not supported by the Philippine government, which issued a cease-and-desist order to the Fabunan Medical Clinic in Zambales, prompting the clinic to stop its operations on April 2. On April 15, 2020, the fact-checking website Rappler warned against false claims on YouTube and Facebook that the so-called treatment had been approved, and pointed out that on April 8, 2020, the FDA had warned the public against the use of drugs or vaccines not yet certified to treat COVID-19, particularly the Fabunan Antiviral Injection. Similarly, claims spread widely in YouTube videos in June 2020 that Fabunan had been approved in Indonesia have been demonstrated to be false. See also List of unproven methods against COVID-19 References Antiviral drugs Experimental antiviral drugs Communication of falsehoods COVID-19 pandemic in the Philippines Combination antiviral drugs COVID-19 drug development
Fabunan Antiviral Injection
[ "Chemistry", "Biology" ]
412
[ "Pharmacology", "Antiviral drugs", "Drug discovery", "Medicinal chemistry stubs", "COVID-19 drug development", "Pharmacology stubs", "Biocides" ]
64,601,019
https://en.wikipedia.org/wiki/Cyclopentadienylvanadium%20tetracarbonyl
Cyclopentadienylvanadium tetracarbonyl is the organovanadium compound with the formula (C5H5)V(CO)4. An orange, diamagnetic solid, it is the principal cyclopentadienyl carbonyl of vanadium. It can be prepared by heating a solution of vanadocene under high pressure of carbon monoxide. As confirmed by X-ray crystallography, the coordination sphere of vanadium consists of η5-cyclopentadienyl and four carbonyl ligands. The molecule is a four-legged piano stool complex. The compound is soluble in common organic solvents. The compound has no commercial applications. Reactions Reduction with sodium amalgam gives the dianion of the tricarbonyl: CpV(CO)4 + 2 Na → Na2CpV(CO)3 + CO Protonation of this salt gives Cp2V2(CO)5. Heating a mixture of cycloheptatriene and cyclopentadienylvanadium tetracarbonyl gives (cycloheptatrienyl)(cyclopentadienyl)vanadium ("trovacene"). References Cyclopentadienyl complexes Carbonyl complexes Organovanadium compounds Half sandwich compounds
Cyclopentadienylvanadium tetracarbonyl
[ "Chemistry" ]
270
[ "Organometallic chemistry", "Half sandwich compounds", "Cyclopentadienyl complexes" ]
64,601,712
https://en.wikipedia.org/wiki/NAALADL2
N-Acetylated Alpha-Linked Acidic Dipeptidase Like 2 (NAALADL2) is a protein encoded by the gene NAALADL2 in humans. NAALADL2 shares 25%–26% sequence identity and 45% sequence similarity with the glutamate carboxypeptidase II family, which includes the prostate cancer marker PSMA (FOLH1/NAALAD1). The NAALADL2 gene is a giant gene spanning 1.37 Mb, which is approximately 49 times larger than the average gene size of 28 kb. Gene length is correlated with the number of transcript variants of a gene; as such, NAALADL2 undergoes extensive alternative splicing and has 12 splice variants as defined by Ensembl. Function The function of NAALADL2 is currently unknown. NAALADL2 shows significant homology to N-acetylated alpha-linked acidic dipeptidase and transferrin receptors. While sharing some homology with the M28B metallopeptidase family, NAALADL2 does not possess the favoured amino acids at certain key positions that are highly conserved and important for metallopeptidase function, which may imply that it is catalytically inactive. Clinical significance NAALADL2 has been shown to be disrupted by a Cornelia De Lange-associated translocation breakpoint at 3q26.3. The rs17531088 SNP in NAALADL2 was shown to be associated with Kawasaki disease in a large GWAS comprising two independent cohorts totalling 893 KD cases plus population and family controls. Cancer NAALADL2 has been shown to have a role in prostate cancer. NAALADL2 protein expression is associated with prostate tumour stage and grade, with mRNA expression predicting poor survival following radical prostatectomy in a small cohort. Overexpression of NAALADL2 in cell lines subsequently altered binding to extracellular matrix (ECM) components and enhanced the invasive capacity of prostate cancer cells. When NAALADL2 expression was artificially increased in cell lines, genes involved in the cell cycle, cell adhesion, epithelial to mesenchymal transition and cytoskeletal remodelling were altered. 
These results suggest NAALADL2 may act to drive aggressive prostate cancer. A genome-wide association study (GWAS) of 12,518 prostate cancer cases found a SNP, rs78943174, within the 3q26.31 (NAALADL2) locus associated with high Gleason sum score. A second study of SNPs occurring within common transcription factor binding sites identified the SNP rs10936845 within a GATA2 motif. This SNP increased NAALADL2 expression in prostate cancer patients, with increased expression also predicting biochemical recurrence. In prostate cancer, somatic copy-number gains in NAALADL2 are present in around 16% of patients with localised disease, increasing to 30% of Gleason grade 5 disease and 50% of T stage 4 disease, co-occurring with the adjacent oncogene TBL1XR1. The frequency of CNA gains in NAALADL2 associates with a number of clinical hallmarks of aggressive prostate cancer, including Gleason grade, tumour stage, positive surgical margins and cancer which has spread to the lymph nodes. The frequency of copy-number gains in this genetic region also increases in castrate-resistant and neuroendocrine prostate cancer. The region surrounding NAALADL2 is rich in oncogenes. Copy-number gains in NAALADL2 often co-occur with neighbouring oncogenes including BCL6, ATR and PI3K family members. Copy-number gains at the DNA level associate with mRNA expression changes in more than 450 known oncogenes, suggesting this region may be important in driving aggressive prostate cancer. A study of metastatic castrate-resistant prostate cancer (mCRPC) has found expression of the antisense transcript of NAALADL2 (NAALADL2-AS2) to be more than 2-fold higher in patients with mCRPC compared with healthy volunteers. Patients with higher NAALADL2-AS2 expression had an improved response to enzalutamide compared to those with lower expression. 
In breast cancer, multicellular tumor spheroids (MTS) are 3D cell cultures which acquire differentiated cell-cell junctions and a defined microenvironment, differentially expressing a number of adhesion molecules such as EPCAM, E-cadherin, integrins and syndecans when compared to 2D monocultures. NAALADL2 has been shown to be differentially expressed in MTS when compared to 2D cultures. These results support a role of NAALADL2 in cell-cell interactions and agree with evidence in prostate cancer that NAALADL2 affects cell-ECM interactions. SNPs in NAALADL2 have also been identified in cancer-risk GWASs for breast cancer and lung cancer. Fragile site It has been shown that the gene encoding NAALADL2 is located within a fragile site, a genomic locus prone to breakage and subsequent repair. In cancer, the fragile site located within NAALADL2 has recently been shown to be the fifth most altered of all fragile sites. Therefore, it has been suggested that the copy-number gains in NAALADL2 and gains in surrounding oncogenes such as GATA2, PIK3CB, ATR, SMC4, TBL1XR1, SOX2 and MUC4 likely arise due to breakage and attempted genomic repair in this region. Upon a break in this fragile site, through a process known as fork stalling and template switching (FoSTeS), extra copies of the genes in the region surrounding the break may be duplicated. Extra copies (copy-number gains) of NAALADL2 and the genes which surround it have been shown to increase the mRNA expression of these genes, leading to further dysregulation and activation of cancer-associated pathways involved in growth and proliferation. References Genes Prostate cancer Proteins
NAALADL2
[ "Chemistry" ]
1,281
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
64,607,842
https://en.wikipedia.org/wiki/YZ%20Reticuli
YZ Reticuli, also known as Nova Reticuli 2020, was a naked-eye nova in the constellation Reticulum discovered on July 15, 2020. Previously it was known as a VY Sculptoris-type object with the designation MGAB-V207. VY Sculptoris type The variability of the object was first discovered by an amateur astronomer, Gabriel Murawski, and reported on August 6, 2019 with the name MGAB-V207. Archive photometry data from the Catalina Real-time Transient Survey and ASAS-SN showed nova-like (NL) brightness variations between magnitudes 15.8 and 17.0, exhibiting a deep dimming event in late 2006. The spectrum shows a hot subdwarf (sdB) or a white dwarf origin, which is consistent with VY Scl-type objects. Nova eruption On July 15, 2020 Robert H. McNaught discovered a bright transient (magnitude 5.3) coincident with the position of MGAB-V207, and it was spectroscopically confirmed by the Southern African Large Telescope (SALT) as a classical nova on July 16. The spectrum includes Balmer, O and Fe II emission lines with P Cygni profiles. Spectrum analysis from observations by the Advanced Technology Telescope revealed a similarity to Nova Sagittarii 1991, three days after maximum brightness. Pre-discovery images showed that the brightness peak happened on July 9, 2020 at magnitude 3.7. In the days after the discovery, the nova faded by 0.2–0.3 magnitudes per day. This is the third case in which an already-known cataclysmic variable has undergone a classical nova eruption, following V407 Cygni and V392 Persei. The orbital period of YZ Reticuli is 0.1324539 days (3 hours, 10 minutes, and 44 seconds), but in the months following the eruption, the lightcurve also oscillated with periods of 0.1384 and 0.1339 days. These are likely related to the accretion disk and represent a similar phenomenon to superhumps. 
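The quoted orbital-period conversion (0.1324539 days into hours, minutes and seconds) can be checked with a short script; the helper name below is a hypothetical example, not from the source:

```python
def days_to_hms(days):
    """Convert a period in days to (hours, minutes, seconds), rounded to the nearest second."""
    total_seconds = round(days * 86400)  # 86400 seconds per day
    hours, remainder = divmod(total_seconds, 3600)
    minutes, seconds = divmod(remainder, 60)
    return hours, minutes, seconds

print(days_to_hms(0.1324539))  # (3, 10, 44), matching the figure quoted above
```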
See also List of novae in the Milky Way galaxy References External links VY Sculptoris type discovery details, July 15, 2020 Nova Reticuli 2020 bursts into the southern skies - Astronomy.com, July 17, 2020 Bright Nova Reticuli 2020 - blog by Ernesto Guido, July 17, 2020 Reticulum Astronomical objects discovered in 2020 Novae J03582954-5446411 Discoveries by Robert H. McNaught Reticuli, YZ
YZ Reticuli
[ "Astronomy" ]
533
[ "Novae", "Astronomical events", "Reticulum", "Constellations" ]
58,028,396
https://en.wikipedia.org/wiki/Space%20climate
Space climate is the long-term variation in solar activity within the heliosphere, including the solar wind, the interplanetary magnetic field (IMF), and their effects in the near-Earth environment, including the magnetosphere of Earth and the ionosphere, the upper and lower atmosphere, climate, and other related systems. The scientific study of space climate is an interdisciplinary field of space physics, solar physics, heliophysics, and geophysics. It is thus conceptually related to terrestrial climatology, and its effects on the atmosphere of Earth are considered in climate science. Background Space climatology considers long-term (longer than the latitudinally variable 27-day solar rotation period, through the 11-year solar cycle and beyond, up to and exceeding millennia) variability of solar indices, cosmic rays, and heliospheric parameters, and the induced geomagnetic, ionospheric, atmospheric, and climate effects. It studies the mechanisms and physical processes responsible for their variability in the past, with projections into the future. It is a broader and more general concept than space weather, to which it is related as conventional climate is to weather. In addition to real-time solar observations, the field of research also covers analysis of historical space climate data. This has included analysis and reconstruction that has allowed solar wind and heliospheric magnetic field strengths to be determined back to 1611. The importance of space climate research has been recognized, in particular, by NASA, which launched a special space mission, the Deep Space Climate Observatory (DSCOVR), dedicated to the monitoring of space climate. New results, ideas and discoveries in the field of space climate are published in a focused peer-reviewed research journal, the Journal of Space Weather and Space Climate (JSWSC). Since 2013, research awards and medals in space weather and space climate have been awarded annually at the European Space Weather Week. 
Another recent space observatory platform is the Solar Radiation and Climate Experiment (SORCE). Space climate research has three main aims: to better understand the long-term solar variability, including also the observed extremes and features of this variability in the solar wind and in the heliospheric magnetic field to better understand the physical relationships between the Sun, the heliosphere, and various related proxies (geomagnetic fields, cosmic rays, etc.) to better understand the long-term effect of solar variability on the near-Earth environment, including the different atmospheric layers, and ultimately on Earth's global climate History In the early 2000s, when the concept of space weather became common, a small initiative group, led by Kalevi Mursula and Ilya G. Usoskin at the University of Oulu in Finland, realized that the physical drivers of solar variability and its terrestrial effects can be better understood with a more general and broader view. The concept of space climate was developed, and the corresponding research community formed, which presently includes a few hundred active members around the world. In particular, a series of International Space Climate Symposia (biennial since 2004) was organized, with the inaugural symposium being held in Oulu (Finland) in 2004, followed by those in Romania (2006), Finland (2009), India (2011), Finland (2013), Finland (2016), Canada (2019), Poland (2022) and Japan (2024); topical space climate sessions are also regularly held at the General Assemblies of the Committee on Space Research and Earth Science. Dissemination Research results related to space climate are published in a range of peer-reviewed journals, such as Astronomy & Astrophysics, Journal of Geophysical Research, Geophysical Research Letters, Solar Physics (journal), and Advances in Space Research. 
See also Aeronomy Planetary science, atmospheric sciences, and atmospheric physics Solar activity and climate Solar irradiance Space weather Space weathering Stellar astronomy References External links Journal of Space Weather and Space Climate (JSWSC) United Nations Office for Outer Space Affairs' Space and Climate Change program Space weather Space physics Space science Geophysics Climate Solar System Sun Climatology
Space climate
[ "Physics", "Astronomy" ]
827
[ "Applied and interdisciplinary physics", "Outer space", "Space science", "Geophysics", "Solar System", "Space physics" ]
58,030,745
https://en.wikipedia.org/wiki/Ionic%20Coulomb%20blockade
Ionic Coulomb blockade (ICB) is an electrostatic phenomenon predicted by M. Krems and Massimiliano Di Ventra (UC San Diego) that appears in ionic transport through mesoscopic electro-diffusive systems (artificial nanopores and biological ion channels) and manifests itself as oscillatory dependences of the conductance on the fixed charge Qf in the pore, on the external voltage V, or on the bulk concentration C. ICB represents an ion-related counterpart of the better-known electronic Coulomb blockade (ECB) that is observed in quantum dots. Both ICB and ECB arise from quantisation of the electric charge and from an electrostatic exclusion principle, and they share in common a number of effects and underlying physical mechanisms. ICB provides some specific effects related to the existence of ions of different charge q = ze (different in both sign and value), where the integer z is the ion valence and e is the elementary charge, in contrast to the single-valence electrons of ECB. ICB effects appear in tiny pores whose self-capacitance Cs is so small that the charging energy of a single ion, Uc = q^2/(2Cs), becomes large compared to the thermal energy per particle (Uc ≫ kBT). In such cases there is strong quantisation of the energy spectrum inside the pore, and the system may either be "blockaded" against the transportation of ions or, in the opposite extreme, it may show resonant barrier-less conduction, depending on the free energy bias coming from Qf, V, or C. The ICB model claims that Qf is a primary determinant of conduction and selectivity for particular ions, and the predicted oscillations in conductance and an associated Coulomb staircase of channel occupancy vs Qf are expected to be strong effects in the cases of divalent ions (z = 2) or trivalent ions (z = 3). Some effects, now recognised as belonging to ICB, were discovered and considered earlier in precursor papers on electrostatics-governed conduction mechanisms in channels and nanopores. 
The manifestations of ICB have been observed in water-filled sub-nanometre pores through a 2D MoS2 monolayer, have been revealed by Brownian dynamics (BD) simulations of calcium conductance bands in narrow channels, and account for a diversity of effects seen in biological ion channels. ICB predictions have also been confirmed by a mutation study of divalent blockade in the NaChBac bacterial channel. Model Generic electrostatic model of channel/nanopore ICB effects may be derived on the basis of a simplified electrostatics/Brownian dynamics model of a nanopore or of the selectivity filter of an ion channel. The model represents the channel/pore as a charged hole through a water-filled protein hub embedded in the membrane. Its fixed charge Qf is considered as a uniform, centrally placed, rigid ring (Fig.1). The channel is assumed to have a length of the order of a nanometre and a sub-nanometre radius, allowing for the single-file movement of partially hydrated ions. The model represents the water and protein as continuous media with dielectric constants εw and εp respectively. The mobile ions are described as discrete entities of valence z and finite radius, moving stochastically through the pore, governed by the self-consistently coupled Poisson electrostatic equation and Langevin stochastic equation. The model is applicable to both cationic and anionic biological ion channels and to artificial nanopores. Electrostatics The mobile ion is assumed to be partially hydrated (typically retaining its first hydration shell) and carrying charge q = ze, where e is the elementary charge (e.g. z = 2 for the Ca2+ ion). The model allows one to derive the pore and ion parameters satisfying the barrier-less permeation conditions, and to do so from basic electrostatics taking account of charge quantisation. 
The potential energy Un of a channel/pore containing n ions can be decomposed into an electrostatic energy, a dehydration energy, and an ion-ion local interaction energy. The basic ICB model makes the simplifying approximation that the electrostatic term dominates, whence Un = Qn^2/(2Cs), where Qn = zen + Qf is the net charge of the pore when it contains n identical ions of valence z (the sign of the moving ions being opposite to that of the fixed charge Qf), and Cs represents the electrostatic self-capacitance of the pore, proportional to the electric permittivity of the vacuum ε0. Resonant barrier-less conduction Thermodynamics and statistical mechanics describe systems that have variable numbers of particles via the chemical potential μ, defined as the Gibbs free energy per particle, μ = ∂G/∂n, where G is the Gibbs free energy for the system of n particles. In thermal and particle equilibrium with bulk reservoirs, the entire system has a common value of chemical potential (the Fermi level in other contexts). The free energy needed for the entry of a new ion to the channel is defined by the excess chemical potential μex, which (ignoring an entropy term) can be written as μex = Uc − Ua, where Uc is the charging energy (self-energy barrier) of an incoming ion and Ua is its affinity (i.e. energy of attraction to the binding site). The difference in energy between Uc and Ua (Fig.2) defines the ionic energy level separation (Coulomb gap) and gives rise to most of the observed ICB effects. In selective ion channels, the favoured ionic species passes through the channel almost at the rate of free diffusion, despite the strong affinity to the binding site. This conductivity-selectivity paradox has been explained as being a consequence of selective barrier-less conduction. In the ICB model, this occurs when Uc is almost exactly balanced by Ua (μex ≈ 0), which happens for a particular value of the fixed charge Qf (Fig.2). This resonant value of Qf depends on the ionic valence and radius (implicitly, via the radius-dependent dehydration energy), thereby providing a basis for selectivity. 
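The simplified electrostatics above lends itself to a toy numerical illustration. The sketch below assumes the basic ICB form Un = (zen + Qf)^2/(2Cs) described in this section, with an arbitrary illustrative value for the self-capacitance Cs; it is a hand-rolled example, not the full Brownian dynamics model:

```python
E_CHARGE = 1.602176634e-19  # elementary charge e, in coulombs
CS = 1e-19                  # illustrative pore self-capacitance Cs, in farads (assumed value)
Z = 2                       # ion valence, e.g. z = 2 for Ca2+

def pore_energy(n, qf):
    """Electrostatic energy U_n = (z*e*n + qf)**2 / (2*Cs) of a pore holding n ions."""
    return (Z * E_CHARGE * n + qf) ** 2 / (2 * CS)

def ground_state(qf, n_max=10):
    """Occupancy n that minimises the pore energy at a given fixed charge qf."""
    return min(range(n_max), key=lambda n: pore_energy(n, qf))

# Neutralisation (blockade) point: qf = -z*e*n gives zero energy at occupancy n.
# Resonance point: qf = -z*e*(n + 1/2) makes occupancies n and n + 1 degenerate,
# so conduction is barrier-less; the oscillation period in qf is z*e.
```

For example, at qf = −2ze the ground state holds two ions with zero electrostatic energy (a blockade point), while at qf = −2.5ze the two- and three-ion states are degenerate (a resonance point).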
Oscillations of conductance The ICB model explicitly predicts an oscillatory dependence of conduction on the fixed charge Qf, with two interlaced sets of singularities associated with a sequentially increasing number of ions in the channel (Fig.3A). Electrostatic blockade points correspond to minima in the ground state energy of the pore (Fig.3C). These points (Qf = −zen) are equivalent to neutralisation points, where the net charge Qn = 0. Resonant conduction points correspond to the barrier-less condition Qf = −ze(n + 1/2), at which the states with n and n + 1 ions in the pore are degenerate; i.e. the period of the conductance oscillations in Qf is ze. For divalent ions (z = 2) in a typical ion channel geometry, Uc ≫ kBT, and ICB becomes strong. Consequently, plots of the BD-simulated Ca2+ current vs Qf exhibit multi-ion conduction bands - strong Coulomb blockade oscillations between minima and maxima (Fig.3A). The point Qf = 0 corresponds to an uncharged pore with n = 0. Such pores are blockaded for ions of either sign. Coulomb staircase The ICB oscillations in conductance correspond to a Coulomb staircase in the pore occupancy n, with transition regions corresponding to the resonant points and saturation regions corresponding to the blockade points (Fig.3B). The shape of the staircase is described by the Fermi-Dirac (FD) distribution, similarly to the Coulomb staircases of quantum dots. Thus, for the n → n + 1 transition, the occupancy follows an FD function of the excess chemical potential μex for the particular ion, weighted by an equivalent bulk occupancy related to pore volume. The saturated FD statistics of occupancy is equivalent to the Langmuir isotherm or to Michaelis–Menten kinetics. It is this bulk-occupancy factor that gives rise to the concentration-related shift in the staircase seen in Fig.3B. Shift of singular points Addition of the partial excess chemical potentials coming from different sources (including dehydration, local binding, volume exclusion etc.) to the ICB barrier-less condition leads to a corresponding shift in the ICB resonant points, described by a "shift equation": i.e. 
the additional energy contributions lead to shifts in the resonant barrier-less point. The more important of these shifts (excess potentials) are: a concentration-related shift, arising from the bulk entropy; a dehydration-related shift, arising from the partial dehydration penalty; and a local binding-related shift, coming from the energy of local binding and surface effects. In artificial nanopores Sub-nm MoS2 pores Following its prediction based on analytic theory and molecular dynamics simulations, experimental evidence for ICB emerged from experiments on a monolayer of MoS2 pierced by a single sub-nanometre nanopore. Highly non-Ohmic conduction was observed between aqueous ionic solutions on either side of the membrane. In particular, for low voltages across the membrane, the current remained close to zero, but it rose abruptly when a threshold voltage was exceeded. This was interpreted as complete ionic Coulomb blockade of current in the (uncharged) nanopore due to the large potential barrier at low voltages. But the application of larger voltages pulled the barrier down, producing accessible states into which transitions could occur, thus leading to conduction. In biological ion channels The realisation that ICB could occur in biological ion channels accounted for several experimentally observed features of selectivity, including: Valence selectivity Valence selectivity is the channel's ability to discriminate between ions of different valence z, wherein e.g. a calcium channel favours Ca2+ ions over Na+ ions by a factor of up to 1000×. Valence selectivity has been attributed variously to pure electrostatics, or to a charge space competition mechanism, or to a snug fit of the ion to ligands, or to quantised dehydration. In the ICB model, valence selectivity arises from electrostatics, namely from the z-dependence of the value of Qf needed to provide for barrier-less conduction. 
Correspondingly, the ICB model provides explanations of why site-directed mutations that alter Qf can destroy the channel by blockading it, or can alter its selectivity from favouring divalent ions to favouring monovalent ions, or vice versa. Divalent blockade Divalent (e.g. Ca2+) blockade of monovalent (e.g. Na+) currents is observed in some types of ion channels. Namely, Na+ ions in a pure sodium solution pass unimpeded through a calcium channel, but are blocked by tiny (nM) extracellular concentrations of Ca2+ ions. ICB provides a transparent explanation of both the phenomenon itself and of the Langmuir-isotherm shape of the current-attenuation curve, deriving them from the strong Ca2+ affinity and an FD distribution of Ca2+ ions. Conversely, the appearance of divalent blockade presents strong evidence in favour of ICB. Similarly, ICB can account for the analogous divalent-anion blockade that has been observed in biological chloride (Cl−)-selective channels. Special features Comparisons between ICB and ECB ICB and ECB should be considered as two versions of the same fundamental electrostatic phenomenon. Both ICB and ECB are based on charge quantisation and on the finite single-particle charging energy Uc, resulting in close similarity of the governing equations and manifestations of these closely related phenomena. Nonetheless, there are important distinctions between ICB and ECB; their similarities and differences are summarised in Table 1. Particular cases Coulomb blockade can also appear in superconductors; in such a case the free charge carriers are Cooper pairs (charge 2e). In addition, Pauli spin blockade represents a special kind of Coulomb blockade, connected with the Pauli exclusion principle. Quantum analogies Despite appearing in completely classical systems, ICB exhibits some phenomena reminiscent of quantum mechanics (QM). 
They arise because the charge/entity discreteness of the ions leads to quantisation of the energy spectrum and hence to the QM-analogies: Noise-driven diffusive motion provides for escape over barriers, comparable to QM-tunnelling in ECB. The particular FD shape of the occupancy vs Qf plays a significant role in the ICB explanation of the divalent blockade phenomenon. The appearance of an FD distribution in the diffusion of classical particles obeying an exclusion principle has been demonstrated rigorously. See also Coulomb blockade Ion channel Brownian dynamics Nanopore Binding selectivity Fermi–Dirac statistics Electrostatics Quantisation of charge Elementary charge References Nanoelectronics Quantum electronics Mesoscopic physics
Ionic Coulomb blockade
[ "Physics", "Materials_science" ]
2,566
[ "Quantum electronics", "Quantum mechanics", "Condensed matter physics", "Nanoelectronics", "Nanotechnology", "Mesoscopic physics" ]
78,979,264
https://en.wikipedia.org/wiki/List%20of%20cultured%20meat%20companies
This is a list of companies involved in the sale and development of cultured meat, along with information about them. Because the commercial production of cultured meat is as of the 2020s still a developing industry, with unprecedented technological challenges and breakthroughs or failures, the progress of pioneers and early start-ups has received much attention in the media and the scientific community. The number of cultured meat companies increased from about 10 start-ups in 2016 to "98 cultured meat companies engaged in culture-related meat production" in December 2022. In addition to these companies, non-profit organizations such as New Harvest, the Good Food Institute, ProVeg International and the Cellular Agriculture Society advocate for, fund and research cultured meat. Cultured meat companies Note: dates in italics refer to projected dates of achievement in the future; they may shift. Pilot plants Note: data in italics refer to unfinished projects or projected capacities in the future; they may shift. See also List of vegetarian and vegan companies References Lists of companies
List of cultured meat companies
[ "Engineering", "Biology" ]
207
[ "Biological engineering", "Cellular agriculture" ]
74,744,150
https://en.wikipedia.org/wiki/Layers%20of%20protection%20analysis
Layers of protection analysis (LOPA) is a technique for evaluating the hazards, risks and layers of protection associated with a system, such as a chemical process plant. In terms of complexity and rigour, LOPA lies between qualitative techniques such as hazard and operability studies (HAZOP) and quantitative techniques such as fault trees and event trees. LOPA is used to identify scenarios that present the greatest risk and assists in considering how that risk could be reduced. Introduction LOPA is a risk assessment technique that uses rules to evaluate the frequency of an initiating event, the independent protection layers (IPL), and the consequences of the event. LOPA aims to identify the countermeasures available against the potential consequences of a risk. An IPL is a device, system or action that prevents a scenario from escalating. The effectiveness of an IPL is quantified by its probability of failure on demand (PFD), in the range 0 to 1. An IPL must be independent of the other protective layers and its functionality must be capable of validation. LOPA was developed in the 1990s in the chemical process industry but has found wider application. In functional safety, LOPA is often used to allocate a safety integrity level to instrumented protective functions. When this occurs in the context of the analysis of process plants, LOPA generally leverages the results of a preceding HAZOP. LOPA is complementary to HAZOP and can generate a second in-depth analysis of a scenario, which can be used to challenge the HAZOP findings in terms of failure events and safeguards. Layers of protection in process plants Safety protection systems for a process plant typically comprise eight layers. LOPA is used to determine how a process deviation can lead to a hazardous event if not interrupted by an IPL. The LOPA procedure LOPA is a risk assessment undertaken on a 'one cause–one consequence' pair. 
The steps of a LOPA risk assessment are: Identify the consequences, using a risk matrix Define the risk tolerance criteria (RTC), based on the tolerable/intolerable regions on the risk matrix Define the relevant accident scenario, e.g. mechanical or human failure Determine the initiating event frequency, again using the risk matrix Identify the conditions and estimate the probability of failure on demand (PFD) Estimate the frequency of unmitigated consequences Identify the IPLs and estimate the PFD for each one Determine the frequency of mitigated consequences Evaluate the need for additional IPLs. Other uses Although the LOPA methodology started in the process industry, the technique can be used in other fields, including: General design Management of change Facilities siting risk Mechanical integrity programs Incident investigations Screening tool for Quantified Risk Assessment (QRA) See also Hazard and operability study Hazard analysis Fault tree analysis Risk assessment References Hazard analysis Process safety Safety engineering
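The core LOPA arithmetic (determining the frequency of the mitigated consequence from the initiating event frequency and the IPL PFDs) can be sketched in a few lines; the helper and the numbers below are illustrative assumptions, not values from any standard:

```python
from math import prod  # Python 3.8+

def mitigated_frequency(f_initiating, pfds):
    """Frequency of the mitigated consequence: initiating event frequency
    (events/year) multiplied by the probability of failure on demand (PFD)
    of each independent protection layer."""
    return f_initiating * prod(pfds)

# Illustrative numbers only: an initiating event at 0.1/yr passing through
# three IPLs with PFDs of 0.1, 0.01 and 0.1.
f_mitigated = mitigated_frequency(0.1, [0.1, 0.01, 0.1])
print(f_mitigated)  # ~1e-5 events/year, to be compared against the risk tolerance criteria
```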
Layers of protection analysis
[ "Chemistry", "Engineering" ]
585
[ "Systems engineering", "Safety engineering", "Hazard analysis", "Process safety", "Chemical process engineering" ]
53,480,889
https://en.wikipedia.org/wiki/May%E2%80%93Gr%C3%BCnwald%20stain
May–Grünwald stain is used for the staining of slides obtained by fine-needle aspiration in a histopathology lab for the diagnosis of tumorous cells. Sometimes, it is combined with Giemsa staining, yielding Pappenheim staining (May-Grünwald-Giemsa staining). References Histopathology Staining Romanowsky stains
May–Grünwald stain
[ "Chemistry", "Biology" ]
82
[ "Staining", "Microbiology techniques", "Microscopy", "Cell imaging", "Histopathology" ]
53,483,035
https://en.wikipedia.org/wiki/Ply%20%28layer%29
A ply is a layer of material which has been combined with other layers in order to provide strength. The number of layers is indicated by prefixing a number, for example 4-ply, indicating material composed of 4 layers. Etymology The word "ply" derives from the French verb plier, "to fold", from the Latin verb plico, from the ancient Greek verb πλέκω. Examples Yarn, where plying is a spinning technique to combine several fibres. Vehicle tires Plywood Toilet paper References Structural analysis Structural engineering
Ply (layer)
[ "Engineering" ]
114
[ "Structural engineering", "Structural analysis", "Construction", "Civil engineering", "Mechanical engineering", "Aerospace engineering" ]
53,489,192
https://en.wikipedia.org/wiki/Euxinia
Euxinia or euxinic conditions occur when water is both anoxic and sulfidic. This means that there is no oxygen (O2) and a raised level of free hydrogen sulfide (H2S). Euxinic bodies of water are frequently strongly stratified; have an oxic, highly productive, thin surface layer; and have anoxic, sulfidic bottom water. The word "euxinia" is derived from the Greek name for the Black Sea (Εὔξεινος Πόντος (Euxeinos Pontos)) which translates to "hospitable sea". Euxinic deep water is a key component of the Canfield ocean, a model of oceans during part of the Proterozoic eon (a part specifically known as the Boring Billion) proposed by Donald Canfield, an American geologist, in 1998. There is still debate within the scientific community on both the duration and frequency of euxinic conditions in the ancient oceans. Euxinia is relatively rare in modern bodies of water, but does still happen in places like the Black Sea and certain fjords. Background Euxinia most frequently occurred in the Earth's ancient oceans, but its distribution and frequency of occurrence are still under debate. The original model was that it was quite constant for approximately a billion years. Some meta-analyses have questioned how persistent euxinic conditions were based on relatively small black shale deposits in a period when the ocean should have theoretically been preserving more organic matter. Before the Great Oxygenation Event happened approximately 2.3 billion years ago, there was little free oxygen in either the atmosphere or the ocean. It was originally thought that the ocean accumulated oxygen soon after the atmosphere did, but this idea was challenged by Canfield in 1998 when he proposed that instead of the deep ocean becoming oxidizing, it became sulfidic. This hypothesis is partially based on the disappearance of banded iron formations from the geological records 1.8 billion years ago. 
Canfield argued that although enough oxygen entered the atmosphere to erode sulfides in continental rocks, there was not enough oxygen to mix into the deep ocean. This would result in an anoxic deep ocean with an increased flux of sulfur from the continents. The sulfur would strip iron ions from the sea water, resulting in iron sulfide (pyrite), a portion of which was eventually buried. When sulfide became the major oceanic reductant instead of iron, the deep water became euxinic. This has become what is known as the Canfield ocean, a model backed by the increase in presence of δ34S in sedimentary pyrite and the discovery of evidence of the first sulfate evaporites. Anoxia and sulfidic conditions often occur together. In anoxic conditions, anaerobic sulfate-reducing bacteria convert sulfate into sulfide, creating sulfidic conditions. The emergence of this metabolic pathway was very important in the pre-oxygenated oceans because adaptations to otherwise uninhabitable or "toxic" environments like this may have played a role in the diversification of early eukaryotes and protozoa in the pre-Phanerozoic. Euxinia still occurs occasionally today, mostly in meromictic lakes and silled basins such as the Black Sea and some fjords. It is rare in modern times; less than 0.5% of today's sea floor is euxinic. Causes The basic requirements for the formation of euxinic conditions are the absence of oxygen (O₂) and the presence of sulfate ions (SO₄²⁻), organic matter (CH₂O), and bacteria capable of reducing sulfate to hydrogen sulfide (H₂S). The bacteria utilize the redox potential of sulfate as an oxidant and organic matter as a reductant to generate chemical energy through cellular respiration. The chemical species of interest can be represented via the reaction: 2CH₂O + SO₄²⁻ → H₂S + 2HCO₃⁻ In the reaction above, the sulfur has been reduced to form the byproduct hydrogen sulfide, the characteristic compound present in water under euxinic conditions.
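The stoichiometry of this reaction can be sanity-checked with a short element-balance sketch; only the atom counts of each species are assumed, nothing beyond the equation above:

```python
from collections import Counter

# Atom counts for each species in: 2 CH2O + SO4(2-) -> H2S + 2 HCO3(-)
CH2O = Counter({"C": 1, "H": 2, "O": 1})   # organic matter
SO4  = Counter({"S": 1, "O": 4})           # sulfate
H2S  = Counter({"H": 2, "S": 1})           # hydrogen sulfide
HCO3 = Counter({"H": 1, "C": 1, "O": 3})   # bicarbonate

def tally(*terms):
    """Sum (coefficient, species) pairs into one overall atom count."""
    total = Counter()
    for coeff, species in terms:
        for element, n in species.items():
            total[element] += coeff * n
    return total

reactants = tally((2, CH2O), (1, SO4))
products  = tally((1, H2S), (2, HCO3))
assert reactants == products  # both sides: C2 H4 O6 S1
# Charge also balances: one 2- sulfate in, two 1- bicarbonates out.
```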
Although sulfate reduction occurs in waters throughout the world, most modern-day aquatic habitats are oxygenated due to photosynthetic production of oxygen and gas exchange between the atmosphere and surface water. Sulfate reduction in these environments is often limited to occurring in seabed sediments that have a strong redox gradient and become anoxic at some depth below the sediment-water interface. In the ocean the rate of these reactions is not limited by sulfate, which has been present in large quantities throughout the oceans for the past 2.1 billion years. The Great Oxygenation Event increased atmospheric oxygen concentrations such that oxidative weathering of sulfides became a major source of sulfate to the ocean. Despite plentiful sulfate ions being present in solution, they are not preferentially used by most bacteria. The reduction of sulfate does not give as much energy to an organism as reduction of oxygen or nitrate, so the concentrations of these other elements must be nearly zero for sulfate-reducing bacteria to out-compete aerobic and denitrifying bacteria. In most modern settings these conditions only occur in a small portion of sediments, resulting in insufficient concentrations of hydrogen sulfide to form euxinic waters. Conditions required for the formation of persistent euxinia include anoxic waters, high nutrient levels, and a stratified water column. These conditions are not all-inclusive and are based largely on modern observations of euxinia. Conditions leading up to and triggering large-scale euxinic events, such as the Canfield ocean, are likely the result of multiple interlinking factors, many of which have been inferred through studies of the geologic record at relevant locations. The formation of stratified anoxic waters with high nutrient levels is influenced by a variety of global and local-scale phenomena such as the presence of nutrient traps and a warming climate. 
Nutrient traps In order for euxinic conditions to persist, a positive feedback loop must perpetuate organic matter export to bottom waters and reduction of sulfate under anoxic conditions. Organic matter export is driven by high levels of primary production in the photic zone, supported by a continual supply of nutrients to the oxic surface waters. A natural source of nutrients, such as phosphate (PO₄³⁻), comes from weathering of rocks and subsequent transport of these dissolved nutrients via rivers. In a nutrient trap, increased input of phosphate from rivers, high rates of recycling of phosphate from sediments, and slow vertical mixing in the water column allow for euxinic conditions to persist. Geography The arrangement of the continents has changed over time due to plate tectonics, resulting in the bathymetry of ocean basins also changing over time. The shape and size of the basins influence the circulation patterns and concentration of nutrients within them. Numerical models simulating past arrangements of continents have shown that nutrient traps can form in certain scenarios, increasing local concentrations of phosphate and setting up potential euxinic conditions. On a smaller scale, silled basins often act as nutrient traps due to their estuarine circulation. Estuarine circulation occurs where surface water is replenished from river input and precipitation, causing an outflow of surface waters from the basin, while deep water flows into the basin over the sill. This type of circulation allows for anoxic, high-nutrient bottom water to develop within the basin. Stratification Stratified waters, in combination with slow vertical mixing, are essential to maintaining euxinic conditions. Stratification occurs when two or more water masses with different densities occupy the same basin. While the less dense surface water can exchange gas with the oxygen-rich atmosphere, the denser bottom waters maintain low oxygen content.
In the modern oceans, thermohaline circulation and upwelling prevent the oceans from maintaining anoxic bottom waters. In a silled basin, the stable stratified layers only allow surface water to flow out of the basin while the deep water remains anoxic and relatively unmixed. During an intrusion of dense saltwater, however, the nutrient-rich bottom water upwells, causing increased productivity at the surface, further enhancing the nutrient trap due to biological pumping. Rising sea level can exacerbate this process by increasing the amount of deep water entering a silled basin and enhancing estuarine circulation. Warming climate A warming climate increases surface water temperatures, which affects multiple aspects of euxinic water formation. As waters warm, the solubility of oxygen decreases, allowing for deep anoxic waters to form more readily. Additionally, the warmer water causes increased respiration of organic matter, leading to further oxygen depletion. Higher temperatures enhance the hydrologic cycle, increasing evaporation from bodies of water, resulting in increased precipitation. This causes higher rates of weathering of rocks and therefore higher nutrient concentrations in river outflows. The nutrients allow for more productivity, resulting in more marine snow and subsequently lower oxygen in deep waters due to increased respiration. Volcanism has also been proposed as a factor in creating euxinic conditions. The carbon dioxide (CO2) released during volcanic outgassing causes global warming, which has cascading effects on the formation of euxinic conditions. Evidence for euxinic events Black shale Black shales are organic-rich, microlaminated sedimentary rocks often associated with bottom water anoxia. This is because anoxia slows the degradation of organic matter, allowing for greater burial in the sediments.
Other evidence for anoxic burial of black shale includes the lack of bioturbation, meaning that there were no organisms burrowing into the sediment because there was no oxygen for respiration. There must also be a source of organic matter for burial, generally from production near the oxic surface. Many papers discussing ancient euxinic events use the presence of black shale as a preliminary proxy for anoxic bottom waters, but their presence does not in and of itself indicate euxinia or even strong anoxia. Generally, geochemical testing is needed to provide better evidence for conditions. Geochemistry Some researchers study the occurrence of euxinia in ancient oceans because it was more prevalent then than it is today. Since ancient oceans cannot be directly observed, scientists use geology and chemistry to find evidence in sedimentary rock created under euxinic conditions. Some of these techniques come from studying modern examples of euxinia, while others are derived from geochemistry. Though modern euxinic environments have geochemical properties in common with ancient euxinic oceans, the physical processes causing euxinia most likely vary between the two. Isotopes Stable isotope ratios can be used to infer the environmental conditions during the formation of sedimentary rock. Using stoichiometry and knowledge of redox pathways, paleogeologists can use isotope ratios of elements to determine the chemical composition of the water and sediments when burial occurred. Sulfur isotopes are frequently used to look for evidence of ancient euxinia. Low δ34S in black shales and sedimentary rocks provides positive evidence for euxinic formation conditions. The pyrite (FeS2) in euxinic basins typically has higher concentrations of light sulfur isotopes than pyrite in the modern ocean. The reduction of sulfate to sulfide favors the lighter sulfur isotope (32S), so the resulting sulfide becomes depleted in the heavier isotope (34S).
This lighter sulfide then bonds with Fe²⁺ to form FeS2, which is then partially preserved in the sediments. In most modern systems, sulfate eventually becomes limiting, and the sulfur isotope ratios of both sulfate and sulfide (preserved as FeS2) become equal. Molybdenum (Mo), the most common transition metal ion in modern seawater, is also used to look for evidence for euxinia. Weathering of rocks provides an input of MoO₄²⁻ into oceans. Under oxic conditions, MoO₄²⁻ is very unreactive, but in modern euxinic environments such as the Black Sea, molybdenum precipitates out as oxythiomolybdate (MoO₄₋ₓSₓ²⁻). The isotope ratio for molybdenum (δ97/95Mo) in euxinic sediments appears to be higher than in oxic conditions. Additionally, the concentration of molybdenum is frequently correlated with the concentration of organic matter in euxinic sediments. The use of Mo to indicate euxinia is still under debate. Trace-element enrichment Under euxinic conditions, some trace elements such as Mo, U, V, Cd, Cu, Tl, Ni, Sb, and Zn become insoluble. This means that euxinic sediments would contain more of the solid form of these elements than the background seawater. For example, molybdenum and other trace metals become insoluble in anoxic and sulfidic conditions, so over time the seawater becomes depleted of trace metals under conditions of persistent euxinia, and preserved sediments are relatively enriched with molybdenum and other trace elements. Organic biomarkers Bacteria such as green sulfur bacteria and purple sulfur bacteria, which exist where the photic zone overlaps with euxinic water masses, leave pigments behind in sediments. These pigments can be used to identify past euxinic conditions. The pigments used to identify the past presence of green sulfur bacteria are chlorobactane and isorenieratene. The pigment used to identify the past presence of purple sulfur bacteria is okenane.
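The δ34S values discussed in the isotope section above are per-mil deviations of a sample's 34S/32S ratio from a reference standard; a minimal sketch (the Vienna Canyon Diablo Troilite ratio below is a commonly cited value, supplied here as an assumption rather than taken from the text):

```python
VCDT = 0.0441626  # 34S/32S of the Vienna Canyon Diablo Troilite standard (commonly cited)

def delta34S(sample_ratio, standard_ratio=VCDT):
    """Per-mil deviation of a sample's 34S/32S ratio from the standard."""
    return (sample_ratio / standard_ratio - 1.0) * 1000.0

# Sulfate reduction favors 32S, so biogenic sulfide (and the pyrite formed
# from it) carries a lower 34S/32S ratio than seawater sulfate: delta34S < 0.
print(delta34S(VCDT))         # the standard itself sits at 0 by definition
print(delta34S(0.0428) < 0)   # a 32S-enriched sulfide plots negative
```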
Iron geochemistry Pyrite (FeS2) is a mineral formed by the reaction of hydrogen sulfide (H2S) and bioreactive iron (Fe²⁺). In oxic bottom waters, pyrite can only form in sediments where H2S is present. However, in iron-rich euxinic environments, pyrite formation can occur at higher rates in both the water column and in sediments due to higher concentrations of H2S. Therefore, the presence of euxinic conditions can be inferred by the ratio of pyrite-bound iron to the total iron in sediments. High ratios of pyrite-bound iron can be used as an indicator of past euxinic conditions. Similarly, if >45% of the bioreactive iron in sediments is pyrite-bound, then anoxic or euxinic conditions can be inferred. While useful, these methods do not provide definitive proof of euxinia because not all euxinic waters have the same concentrations of bioreactive iron available. These relationships have been found to be present in the modern euxinic Black Sea. Euxinic events in Earth's history Proterozoic The Proterozoic is the transition era between anoxic and oxygenated oceans. The classic model is that the end of the banded iron formations (BIFs) was due to the injection of oxygen into the deep ocean, a lag of approximately 0.6 billion years behind the Great Oxygenation Event. Canfield, however, argued that anoxia lasted much longer, and the end of the banded iron formations was due to the introduction of sulfide. Supporting Canfield's original hypothesis, 1.84-billion-year-old sedimentary records have been found in the Animikie Group in Canada that exhibit close to full pyritization on top of the last of the banded iron formations, showing evidence of a transition to euxinic conditions in that basin. In order for full pyritization to happen, nearly all of the sulfate in the water was reduced to sulfide, which stripped the iron from the water, forming pyrite. Because this basin was open to the ocean, deep euxinia was interpreted as being a widespread phenomenon.
This euxinia is hypothesized to have lasted until about 0.8 billion years ago, making basin bottom euxinia a potentially widespread feature throughout the Boring Billion. Further evidence for euxinia was discovered in the McArthur Basin in Australia, where similar iron chemistry was found. The degree of pyritization and the δ34S were both high, supporting the presence of anoxia and sulfide, as well as the depletion of sulfate. A different study found biomarkers for green sulfur bacteria and purple sulfur bacteria in the same area, providing further evidence for the reduction of sulfate to hydrogen sulfide. Molybdenum isotopes have been used to examine the distribution of euxinia in the Proterozoic eon, and suggest that perhaps euxinia was not as widespread as Canfield initially postulated. Bottom waters may have been more widely suboxic than anoxic, and there could have been negative feedback between euxinia and the high levels of surface primary production needed to sustain euxinic conditions. Further work has suggested that from 700 million years ago (late Proterozoic) and onward, the deep oceans may have actually been anoxic and iron rich with conditions similar to those during the formation of BIFs. Phanerozoic There is evidence for multiple euxinic events during the Phanerozoic. It is most likely that euxinia was periodic during the Paleozoic and Mesozoic, but geologic data is too sparse to draw any large scale conclusions. In this eon, there is some evidence that euxinic events are potentially linked with mass extinction events including the Late Devonian and Permian–Triassic. Paleozoic The periodic presence of euxinic conditions in the Lower Cambrian has been supported by evidence found on the Yangtze platform in South China. Sulfur isotopes during the transition from Proterozoic to Phanerozoic give evidence for widespread euxinia, perhaps lasting throughout the Cambrian period. 
Towards the end of the Lower Cambrian, the euxinic chemocline grew deeper until euxinia was present only in the sediments, and once sulfate became limiting, conditions became anoxic instead of euxinic. Some areas eventually became oxic, while others eventually returned to euxinia for some time. Geological records from the Paleozoic in the Selwyn Basin in Northern Canada have also shown evidence for episodic stratification and mixing, where, using δ34S, it was determined that hydrogen sulfide was more prevalent than sulfate. Although this was not originally attributed to euxinia, further studies found that seawater at that time likely had low concentrations of sulfate, meaning that the sulfur in the water was primarily in the form of sulfide. This, combined with organic-rich black shale, provides strong evidence for euxinia. There is similar evidence in the black shales of mid-continent North America from the Devonian and early Mississippian periods. Isorenieratene, a pigment known as a proxy for an anoxic photic zone, has been found in the geological record in Illinois and Michigan. Although present, these events were probably ephemeral and did not last for longer periods of time. Similar periodic evidence of euxinia can also be found in the Sunbury shales of Kentucky. Evidence for euxinia has also been tied to the Kellwasser events of the Late Devonian Extinction event. Euxinia in basinal waters in what is now central Europe (Germany, Poland, and France) persisted for part of the late Devonian, and may have spread up into shallow waters, contributing to the extinction event. There was perhaps a period of oxygenation of bottom waters during the Carboniferous, most likely between the Late Devonian Extinction and the Permian-Triassic Extinction, at which point euxinia would be very rare in the paleo oceans. The Permian–Triassic extinction event may also have some ties to euxinia, with hypercapnia and hydrogen sulfide toxicity killing off many species.
A biomarker for anaerobic photosynthesis by green sulfur bacteria has been found spanning from the Permian to the early Triassic in sedimentary rock in both Australia and China, meaning that euxinic conditions extended up quite shallow in the water column, contributing to the extinctions and perhaps even slowing the recovery. It is uncertain, however, just how widespread photic zone euxinia was during this period. Modelers have hypothesized that due to environmental conditions, anoxia and sulfide may have been brought up from a deep, vast euxinic reservoir in upwelling areas, but stable, gyre-like areas remained oxic. Mesozoic The Mesozoic is well known for its distinct Ocean Anoxic Events (OAEs), which resulted in the burial of layers of black shale. Although these OAEs are not stand-alone evidence for euxinia, many do contain biomarkers that support euxinic formation. Again, evidence is not universal. OAEs may have spurred the spread of existing euxinia, especially in upwelling regions or semi-restricted basins, but photic zone euxinia did not happen everywhere. Cenozoic Few episodes of euxinia are evident in the sedimentary record during the Cenozoic. Since the end of the Cretaceous OAEs, it is most likely that the oceanic bottom waters have stayed oxic. Modern euxinia Euxinic conditions have nearly vanished from Earth's open-ocean environments, but a few small-scale examples still exist today. Many of these locations share common biogeochemical characteristics. For example, low rates of overturning and vertical mixing of the total water column are common in euxinic bodies of water. Small surface-area-to-depth ratios allow multiple stable layers to form while limiting wind-driven overturning and thermohaline circulation. Furthermore, restricted mixing enhances stratified layers of high nutrient density, which are reinforced by biological recycling.
Within the chemocline, highly specialized organisms such as green sulfur bacteria take advantage of the strong redox potential gradient and minimal sunlight. The Black Sea The Black Sea is a commonly used modern model for understanding biogeochemical processes that occur under euxinic conditions. It is thought to represent the conditions of Earth's proto-oceans and thus assists in the interpretation of oceanic proxies. Redox reactions occur in Black Sea sediments to depths of tens of meters, compared to single centimeters in the open ocean. This unique feature is important for understanding the behavior of the redox cascade under euxinic conditions. The only connection between the open ocean and the Black Sea is the Bosphorus Strait, through which dense Mediterranean waters are imported. In addition, numerous rivers, such as the Danube, Don, Dnieper, and Dniester, drain fresh water into the Black Sea; this fresh water floats on top of the denser Mediterranean water, causing a strong, stratified water column. This stratification is maintained by a strong pycnocline, which restricts ventilation of deep waters and results in an intermediate layer called the chemocline, a sharp boundary separating oxic surface waters from anoxic bottom waters usually between 50m and 100m depth, with interannual variation attributed to large-scale changes in temperature. Well-mixed, oxic conditions exist above the chemocline and sulfidic conditions are dominant below. Surface oxygen and deep water sulfide do not overlap via vertical mixing, but horizontal entrainment of oxygenated waters and vertical mixing of oxidized manganese into sulfidic waters may occur near the Bosphorus Strait inlet. Manganese and iron oxides likely oxidize hydrogen sulfide near the chemocline, resulting in the decrease in H2S concentrations as one approaches the chemocline from below. Meromictic lakes Meromictic lakes are poorly mixed and anoxic bodies of water with strong vertical stratification.
While meromictic lakes are frequently categorized as bodies of water with the potential for euxinic conditions, many do not exhibit euxinia. Meromictic lakes are infamous for limnic eruptions. These events usually coincide with nearby tectonic or volcanic activity that disturbs the otherwise stable stratification of meromictic lakes. This can result in the release of immense concentrations of stored toxic gases from the anoxic bottom waters, such as CO2 and H2S, especially from euxinic meromictic lakes. In high enough concentration, these limnic eruptions can be deadly to humans and animals, as in the Lake Nyos disaster of 1986. North Sea fjords Some fjords develop euxinia if the connection to the open ocean is constricted, similar to the case of the Black Sea. This constriction prohibits relatively dense, oxygen-rich oceanic water from mixing with the bottom water of the fjord, which leads to stable stratified layers in the fjord. Low-salinity meltwater forms a lens of fresh, low-density water on top of a denser mass of bottom water. Ground sources of sulfur are also an important cause of euxinia in fjords. Framvaren Fjord This fjord began as a glacial lake that was separated from the open ocean (the North Sea) when it was lifted during glacial rebound. A shallow channel (2m deep) was dug in 1850, providing a marginal connection to the North Sea. A strong pycnocline separates fresh surface water from dense, saline bottom water, and this pycnocline reduces mixing between the layers. Anoxic conditions persist below the chemocline at 20m, and the fjord has the highest levels of hydrogen sulfide in the anoxic marine world. Like the Black Sea, vertical overlap of oxygen and sulfur is limited, but the decline of H2S approaching the chemocline from below is indicative of oxidation of H2S, which has been attributed to manganese and iron oxides, photo-autotrophic bacteria, and entrainment of oxygen horizontally from the boundaries of the fjord.
These oxidation processes are similar to those present in the Black Sea. Two strong seawater intrusion events have occurred through the channel in recent history (1902 and 1942). Seawater intrusions to fjords force dense, salty, oxygen-rich water into the typically anoxic, sulfidic bottom waters of euxinic fjords. These events result in a temporary disturbance to the chemocline, raising the depth at which H2S is detected. The breakdown of the chemocline causes H2S to react with dissolved oxygen in a redox reaction. This decreases the concentration of dissolved oxygen in the biologically active photic zone, which can result in basin-scale fish die-offs. The 1942 event, in particular, was strong enough to chemically reduce the vast majority of oxygen and elevate the chemocline to the air-water interface. This caused a temporary state of total anoxia in the fjord, and resulted in dramatic fish mortality. Mariager Fjord This fjord is marked by a highly mobile chemocline with a depth that is thought to be related to temperature effects. Local reports of a strong rotten-egg smell (the smell of sulfur) during numerous summers around the fjord provide evidence that, like the Framvaren fjord, the chemocline has breached the surface of the fjord at least five times in the last century. Sediment export during these events increased the concentrations of dissolved phosphates, inorganic bioavailable nitrogen, and other nutrients, resulting in a harmful algal bloom. Cariaco Basin The Cariaco Basin in Venezuela has been used to study the cycle of organic material in euxinic marine environments. An increase in productivity coincident with post-glacial nutrient loading probably caused a transition from oxic to anoxic and subsequently euxinic conditions around 14.5 thousand years ago. High productivity at the surface produces a rain of particulate organic matter to the subsurface where anoxic, sulfidic conditions persist.
The organic matter in this region is oxidized with sulfate, producing reduced sulfur (H2S) as a waste product. Free sulfur exists deep in the water column and up to 6m in depth in the sediment. See also Anoxic event Canfield ocean Redox Boring Billion References Environmental science Environmental chemistry Oceanography Chemical oceanography Bioindicators Aquatic ecology Water quality indicators
Euxinia
[ "Physics", "Chemistry", "Biology", "Environmental_science" ]
5,822
[ "Hydrology", "Bioindicators", "Applied and interdisciplinary physics", "Oceanography", "Environmental chemistry", "Water pollution", "Chemical oceanography", "Water quality indicators", "Ecosystems", "nan", "Aquatic ecology" ]
59,659,569
https://en.wikipedia.org/wiki/Time%20in%20Jamaica
Jamaica Time (JAM) is the official time in Jamaica. It is five hours behind Coordinated Universal Time (UTC−05:00). Jamaica has only one time zone and does not observe daylight saving time. During winter, Jamaican Time is equivalent to North American Eastern Standard Time, whereas in the summer it is equivalent to Central Daylight Time. IANA time zone database In the IANA time zone database Jamaica has the following time zone: America/Jamaica (JM) References External links Time in Jamaica Geography of Jamaica
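The behavior described above (a fixed UTC−05:00 offset with no daylight saving) can be checked directly against the IANA entry; a minimal sketch assuming Python 3.9+ with zone data available:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

jamaica = ZoneInfo("America/Jamaica")

# No daylight saving: the offset is UTC-05:00 in both winter and summer.
winter = datetime(2023, 1, 15, 12, 0, tzinfo=jamaica)
summer = datetime(2023, 7, 15, 12, 0, tzinfo=jamaica)
assert winter.utcoffset() == summer.utcoffset() == timedelta(hours=-5)
print(winter.tzname())  # EST, matching North American Eastern Standard Time
```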
Time in Jamaica
[ "Physics" ]
106
[ "Spacetime", "Physical quantities", "Time", "Time by country" ]
59,660,748
https://en.wikipedia.org/wiki/C2HNO2
The molecular formula C2HNO2 (molar mass: 71.03 g/mol, exact mass: 71.0007 u) may refer to: Carbonocyanidic acid Formyl cyanate (Hydroxyimino)ethenone HONCCO 2-Nitrosoethenone ONC(H)CO Oximide
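The molar mass quoted above can be recomputed from standard atomic weights; a small sketch (the rounded IUPAC weights below are assumptions supplied for the check):

```python
# Standard atomic weights, g/mol (IUPAC values, rounded to three decimals)
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def molar_mass(formula):
    """Molar mass in g/mol from an {element: count} mapping."""
    return sum(ATOMIC_WEIGHT[el] * n for el, n in formula.items())

c2hno2 = {"C": 2, "H": 1, "N": 1, "O": 2}
print(molar_mass(c2hno2))  # close to the 71.03 g/mol quoted above
```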
C2HNO2
[ "Chemistry" ]
86
[ "Isomerism", "Set index articles on molecular formulas" ]
59,660,759
https://en.wikipedia.org/wiki/Oximide
Oximide is an unstable chemical compound, the cyclic imide of oxalic acid. Other names are the systematic name 2,3-aziridinedione and oxalimide. The chemical formula is C2HNO2. Its core is a three-membered heterocycle, aziridine. Production In 1886, Ost and Mente claimed to produce oximide by the reaction of oxamic acid with phosphorus pentachloride (PCl5). However, a product with a six-membered ring, tetraketopiperazine, may have been produced instead. Later attempts to reproduce the production of oximide by this method failed. The first successful synthesis of oximide was by Hiromu Aoyama, Masami Sakamoto, and Yoshimori Omote in 1980. Properties Aziridine-2,3-dione has an infrared absorption band at 1954 cm−1. Related The term "oximide" has also been used for oximes. Derivatives of oximide exist where the hydrogen atom is substituted by other organic groups such as methyl or phenyl. When 4-methyl-1,2,4-triazolinedione is irradiated by ultraviolet light with wavelength 335 nm in a noble gas matrix, some methylaziridine-2,3-dione is made (along with isocyanates, carbon monoxide and dinitrogen). Similarly, 4-phenyl-1,2,4-triazolinedione irradiated by ultraviolet light with wavelength 310 nm makes some phenylaziridine-2,3-dione. Shorter-wavelength ultraviolet light decomposes these compounds to isocyanates (-NCO). Another method to produce oximide derivatives is by the photolysis of substituted diphenylmaleylimide ozonide at liquid nitrogen temperature (77K) in a potassium bromide matrix. Derivatives made this way are methyl, isopropyl and phenylethyl-aziridine-2,3-dione. These compounds are unstable at higher temperatures, and when heated, decompose to carbon monoxide and isocyanates. References Heterocyclic compounds with 1 ring Diketones Imides Nitrogen heterocycles Three-membered rings Substances discovered in the 1980s
Oximide
[ "Chemistry" ]
488
[ "Imides", "Functional groups" ]
59,661,114
https://en.wikipedia.org/wiki/Tetraketopiperazine
Tetraketopiperazine is a chemical compound whose molecule contains a six-membered heterocyclic ring with two nitrogen atoms. Each carbon is doubly bonded to oxygen. Production Reacting sodium oxamate (the sodium salt of oxamic acid) with hydrochloric acid yields some tetraketopiperazine. A higher yield results from reacting ethyl oxalate with sodium ethoxide. Yet another way to make tetraketopiperazine is the condensation of oxamide with ethyl oxalate in the presence of sodium ethoxide. Excessive nitration of 2,6-diaminopyrazine also yields tetraketopiperazine. Reactions The nitrogen atoms in tetraketopiperazine are slightly acidic, losing their hydrogen atoms as ions. Salts of tetraketopiperazine exist. Tetraketopiperazine reacts with sodium bicarbonate to yield a monosodium salt. A disodium salt results from reaction with sodium hydroxide or sodium alkoxide. These are likely to be tautomeric, with a hydrogen moving to an oxygen atom. Potassium salts also exist. A monosilver salt can be made from a silver compound and a dissolved tetraketopiperazine potassium salt. Ammonia and mercury salts of tetraketopiperazine also exist. Tetraketopiperazine also can form a monohydrazone. Reduction of tetraketopiperazine yields triketopiperazine and then 2,5-diketopiperazine. Glyoxalic acid and oxamide are side products. Properties When heated, tetraketopiperazine does not melt, but turns black at 250°C. Tetraketopiperazine is slightly soluble in water and more so in boiling acetic acid. The solid form has monoclinic prismatic crystals. The first pKa is 4.8 and the second (pKa2) is 8.2. References Piperazines Imides
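The two dissociation constants above (pKa 4.8 and pKa2 8.2) determine the protonation state at a given pH; the following is a standard diprotic Henderson-Hasselbalch speciation sketch, with the model supplied here rather than taken from the source:

```python
def diprotic_fractions(pH, pKa1=4.8, pKa2=8.2):
    """Equilibrium fractions of H2A, HA-, and A2- for a diprotic acid."""
    Ka1, Ka2 = 10.0 ** -pKa1, 10.0 ** -pKa2
    h = 10.0 ** -pH
    denom = h * h + Ka1 * h + Ka1 * Ka2
    return h * h / denom, Ka1 * h / denom, Ka1 * Ka2 / denom

# Between the two pKa values the singly deprotonated form dominates,
# consistent with a weak base like bicarbonate removing only one proton.
h2a, ha, a2 = diprotic_fractions(6.5)
assert ha > h2a and ha > a2
```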
Tetraketopiperazine
[ "Chemistry" ]
419
[ "Imides", "Functional groups" ]
66,108,522
https://en.wikipedia.org/wiki/The%20Geometry%20of%20Numbers
The Geometry of Numbers is a book on the geometry of numbers, an area of mathematics in which the geometry of lattices, repeating sets of points in the plane or higher dimensions, is used to derive results in number theory. It was written by Carl D. Olds, Anneli Cahn Lax, and Giuliana Davidoff, and published by the Mathematical Association of America in 2000 as volume 41 of their Anneli Lax New Mathematical Library book series. Authorship and publication history The Geometry of Numbers is based on a book manuscript that Carl D. Olds, a New Zealand-born mathematician working in California at San Jose State University, was still writing when he died in 1979. Anneli Cahn Lax, the editor of the New Mathematical Library of the Mathematical Association of America, took up the task of editing it, but it remained unfinished when she died in 1999. Finally, Giuliana Davidoff took over the project, and saw it through to publication in 2000. Topics The Geometry of Numbers is relatively short, and is divided into two parts. The first part applies number theory to the geometry of lattices, and the second applies results on lattices to number theory. Topics in the first part include the relation between the maximum distance between parallel lines that are not separated by any point of a lattice and the slope of the lines, Pick's theorem relating the area of a lattice polygon to the number of lattice points it contains, and the Gauss circle problem of counting lattice points in a circle centered at the origin of the plane. The second part begins with Minkowski's theorem, that centrally symmetric convex sets of large enough area (or volume in higher dimensions) necessarily contain a nonzero lattice point. It applies this to Diophantine approximation, the problem of accurately approximating one or more irrational numbers by rational numbers. 
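Pick's theorem mentioned above relates a simple lattice polygon's area A to its interior lattice points I and boundary lattice points B by A = I + B/2 - 1; a quick numerical check using the shoelace formula and gcd-based edge counts:

```python
from math import gcd

def polygon_area(vertices):
    """Area of a simple polygon via the shoelace formula."""
    n = len(vertices)
    twice = sum(vertices[i][0] * vertices[(i + 1) % n][1]
                - vertices[(i + 1) % n][0] * vertices[i][1] for i in range(n))
    return abs(twice) / 2.0

def boundary_points(vertices):
    """Lattice points on the polygon's edges: gcd(|dx|, |dy|) per edge."""
    n = len(vertices)
    return sum(gcd(abs(vertices[(i + 1) % n][0] - vertices[i][0]),
                   abs(vertices[(i + 1) % n][1] - vertices[i][1]))
               for i in range(n))

# Pick's theorem rearranged: interior points I = A - B/2 + 1.
square = [(0, 0), (3, 0), (3, 3), (0, 3)]
A, B = polygon_area(square), boundary_points(square)
assert (A, B, A - B / 2 + 1) == (9.0, 12, 4.0)  # 4 interior points
```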
After another chapter on the linear transformations of lattices, the book studies the problem of finding the smallest nonzero values of quadratic forms, and Lagrange's four-square theorem, the theorem that every non-negative integer can be represented as a sum of four squares of integers. The final two chapters concern Blichfeldt's theorem, that bounded planar regions with area A can be translated to cover at least ⌈A⌉ lattice points, and additional results in Diophantine approximation. The chapters on Minkowski's theorem and Blichfeldt's theorem, particularly, have been called the "foundation stones" of the book by reviewer Philip J. Davis. An appendix by Peter Lax concerns the Gaussian integers. A second appendix concerns lattice-based methods for packing problems including circle packing and, in higher dimensions, sphere packing. The book closes with biographies of Hermann Minkowski and Hans Frederick Blichfeldt. Audience and reception The Geometry of Numbers is intended for secondary-school and undergraduate mathematics students, although it may be too advanced for the secondary-school students; it contains exercises making it suitable for classroom use. It has been described as "expository", "self-contained", and "readable". However, reviewer Henry Cohn notes several copyediting oversights, complains about its selection of topics, in which "curiosities are placed on an equal footing with deep results", and misses certain well-known examples which were not included. Despite this, he recommends the book to readers who are not yet ready for more advanced treatments of this material and wish to see "some beautiful mathematics". References Mathematics books 2000 non-fiction books Geometry of numbers
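Lagrange's four-square theorem mentioned above can likewise be verified computationally. The following minimal brute-force sketch (my own illustration, not from the book) finds one decomposition for any non-negative integer:

```python
def four_squares(n):
    """Return (a, b, c, d) with a <= b <= c <= d and a^2+b^2+c^2+d^2 == n."""
    m = int(n ** 0.5)
    for a in range(m + 1):
        for b in range(a, m + 1):
            for c in range(b, m + 1):
                rest = n - a * a - b * b - c * c
                if rest < c * c:
                    break  # d >= c would be impossible; larger c only worsens it
                d = int(rest ** 0.5)
                if d * d == rest:
                    return (a, b, c, d)
    return None  # never reached for n >= 0, by Lagrange's theorem

print(four_squares(7))  # (1, 1, 1, 2)
print(four_squares(310))
```

The exhaustive search always terminates with a solution, which is exactly what the theorem guarantees.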
The Geometry of Numbers
[ "Mathematics" ]
718
[ "Geometry of numbers", "Number theory" ]
66,109,937
https://en.wikipedia.org/wiki/Borate%20phosphate
Borate phosphates are mixed anion compounds containing separate borate and phosphate anions. They are distinct from the borophosphates where the borate is linked to a phosphate via a common oxygen atom. The borate phosphates have a higher ratio of cations to number of borates and phosphates, as compared to the borophosphates. There are also organic esters of both borate and phosphate, e.g. NADH-borate. Production In the high temperature method, ingredients are heated together at atmospheric pressure. Products are anhydrous, and production of borophosphates is likely. The boron flux method involves dissolving ingredients such as an ammonium phosphate and metal carbonate in an excess of molten boric acid. Use Borate phosphates are of research interest for their optical, electrooptical or magnetic properties. List References Borates Phosphates Mixed anion compounds
Borate phosphate
[ "Physics", "Chemistry" ]
190
[ "Matter", "Mixed anion compounds", "Salts", "Phosphates", "Ions" ]
66,110,715
https://en.wikipedia.org/wiki/Slot-die%20coating
Slot-die coating is a coating technique for the application of solution, slurry, hot-melt, or extruded thin films onto typically flat substrates such as glass, metal, paper, fabric, plastic, or metal foils. The process was first developed for the industrial production of photographic papers in the 1950s. It has since become relevant in numerous commercial processes and nanomaterials related research fields. Slot-die coating produces thin films via solution processing. The desired coating material is typically dissolved or suspended into a precursor solution or slurry (sometimes referred to as "ink") and delivered onto the surface of the substrate through a precise coating head known as a slot-die. The slot-die has a high aspect ratio outlet controlling the final delivery of the coating liquid onto the substrate. This results in the continuous production of a wide layer of coated material on the substrate, with adjustable width depending on the dimensions of the slot-die outlet. By closely controlling the rate of solution deposition and the relative speed of the substrate, slot-die coating affords thin material coatings with easily controllable thicknesses in the range of 10 nanometers to hundreds of micrometers after evaporation of the precursor solvent. Commonly cited benefits of the slot-die coating process include its pre-metered thickness control, non-contact coating mechanism, high material efficiency, scalability of coating areas and throughput speeds, and roll-to-roll compatibility. The process also allows for a wide working range of layer thickness and precursor solution properties such as material choice, viscosity, and solids content. Commonly cited drawbacks of the slot-die coating process include its comparatively high complexity of apparatus and process optimization relative to similar coating techniques such as blade coating and spin coating. Furthermore, slot-die coating falls into the category of coating processes rather than printing processes. 
It is therefore better suited for coating of uniform, thin material layers rather than printing or consecutive buildup of complex images and patterns. Coating apparatus Typical components Slot-die coating equipment is available in a variety of configurations and form factors. However, the vast majority of slot-die processes are driven by a similar set of common core components. These include: A fluid reservoir to store the main supply of coating fluid for the system A pump to drive the coating fluid through the system A slot-die to distribute the coating fluid across the desired coating width before coating onto the substrate A substrate mounting system to support the substrate in a controlled manner as it moves through the system A coating motion system to drive the relative speed of the slot-die and substrate in a controlled manner during coating Depending on the complexity of the coating apparatus, a slot-die coating system may include additional modules for e.g. precise positioning of the slot-die over the substrate, particulate filtering of the coating solution, pre-treatment of the substrate (e.g. cleaning and surface energy modification), and post-processing steps (e.g. drying, curing, calendering, printing, slitting, etc.). Industrial coating systems Slot-die coating was originally developed for industrial use and remains primarily applied in production-scale settings. This is due to its potential for large-scale production of high-value thin films and coatings at a low operating cost via roll-to-roll and sheet-to-sheet line integration. Such roll-to-roll and sheet-to-sheet coating systems are similar in their intent for large-scale production, but are distinguished from each other by the physical rigidity of the substrates they handle. Roll-to-roll systems are designed to coat and handle flexible substrate rolls such as paper, fabric, plastic or metal foils. 
Conversely, sheet-to-sheet systems are designed to coat and handle rigid substrate sheets such as glass, metal, or plexiglass. Combinations of these systems, such as roll-to-sheet lines, are also possible. Both industrial roll-to-roll and sheet-to-sheet systems typically feature slot-dies in the range of 300 to 1000 mm in coating width, though slot-dies up to 4000 mm wide have been reported. Commercial slot-die systems are claimed to operate at speeds up to several hundred square meters per minute, with roll-to-roll systems typically offering higher throughput due to decreased complexity of substrate handling. Such large-scale coating systems can be driven by a variety of industrial pumping solutions including gear pumps, progressive cavity pumps, pressure pots, and diaphragm pumps depending on process requirements. Roll-to-roll lines To handle flexible substrates, roll-to-roll lines typically use a series of rollers to continually drive the substrate through the various stations of the process line. The bare substrate originates at an "unwind" roll at the start of the line and is collected at a "rewind" roll at the end. Hence, the substrate is often referred to as a "web" as it winds its way through the process line from start to finish. When a substrate roll has been fully processed, it is collected from the rewind roll, allowing for a new, bare substrate roll to be mounted onto the unwind roller to begin the process again. Slot-die coating often comprises just a single step of an overall roll-to-roll process. The slot-die is typically mounted in a fixed position on the roll-to-roll line, dispensing coating fluid onto the web in a continuous or patch-based manner as the substrate passes by. Because the substrate web spans all stations of the roll-to-roll line simultaneously, the individual processes at these stations are highly coupled and must be optimized to work in tandem with each other at the same web speed. 
Sheet-to-sheet lines The rigid substrates employed in sheet-to-sheet systems are not compatible with the roll-to-roll processing method. Sheet-to-sheet systems rely instead on a rack-based system to transport individual sheets between the various stations of a process line, where transfer between stations may occur in a manual or automated manner. Sheet-to-sheet lines are therefore more analogous to a series of semi-coupled batch operations rather than a single continuous process. This allows for easier optimization of individual unit operations at the expense of potentially increased handling complexity and reduced throughput. Furthermore, the need to start and stop the slot-die coating process for each substrate sheet places higher tolerance requirements on the leading and trailing edge uniformity of the slot-die step. In sheet-to-sheet lines, the slot-die may be fixed in place as the substrate passes underneath on a moving support bed (sometimes referred to as a "chuck"). Alternatively, the slot-die may move during coating while the substrate remains fixed in place. Lab-scale development tools Miniaturized slot-die tools have become increasingly available to support the development of new roll-to-roll compatible processes prior to the requirement of full pilot- and production-scale equipment. These tools feature similar core components and functionality as compared to larger slot-die coating lines, but are designed to integrate into pre-production research environments. This is typically achieved by e.g. accepting standard A4 sized substrate sheets rather than full substrate rolls, using syringe pumps rather than industrial pumping solutions, and relying upon hot-plate heating rather than large industrial drying ovens, which can otherwise reach lengths of several meters to provide suitable residence times for drying. 
Because the slot-die coating process can be readily scaled between large and small areas by adjusting the size of the slot-die and throughput speed, processes developed on lab-scale tools are considered to be reasonably scalable to industrial roll-to-roll and sheet-to-sheet coating lines. This has led to significant interest in slot-die coating as a method of scaling new thin film materials and devices, particularly in the sphere of thin film solar cell research for e.g. perovskite and organic photovoltaics. Common coating modalities Slot-die hardware can be applied in several distinct coating modalities, depending on the requirements of a given process. These include: Proximity coating, in which the substrate is supported by a hard surface (e.g. a precision backing roll or moving support bed) and the slot-die is held at a relatively small coating gap (typically 25 μm to several mm away from the substrate, depending on the wet thickness of the coated layer). Curtain coating, in which the substrate is supported by a hard surface (e.g. a precision backing roll or moving support bed) and the slot-die is held at a much larger coating gap, enabling much higher coating speeds as long as a suitable Weber number is achieved. Tensioned web over slot-die coating, in which the substrate web is suspended between two idle rollers placed on opposite sides of the slot-die. The web is then pressed against the lips of the slot-die such that the slot-die itself applies tension to the web. When fluid is pumped through the slot-die onto the substrate, the fluid lubricates the slot-die-substrate interface, preventing the slot-die from scratching the substrate during coating. The dynamics of proximity coating have been extensively studied and applied over a wide range of scales and applications. Furthermore, the concepts governing proximity coating are relevant in understanding the behavior of other coating modalities. 
Proximity coating is therefore considered to be the default configuration for the purposes of this introductory article, though curtain coating and tensioned web over slot-die configurations remain highly relevant in industrial manufacturing. Key process parameters Film thickness control Slot-die coating is a non-contact coating method, in which the slot-die is typically held over the substrate at a height several times higher than the target wet film thickness. The coating fluid transfers from the slot-die to the substrate via a fluid bridge that spans the air gap between the slot-die lips and substrate surface. This fluid bridge is commonly referred to as the coating meniscus or coating bead. The thickness of the resulting wet coated layer is controlled by tuning the ratio between the applied volumetric pump rate and areal coating rate. Unlike in self-metered coating methods such as blade- and bar coating, the slot-die does not influence the thickness of the wet coated layer via any form of destructive physical contact or scraping. The height of the slot-die therefore does not determine the thickness of the wet coated layer. The height of the slot-die is instead significant in determining the quality of the coated film, as it controls the distance that must be spanned by the meniscus to maintain a stable coating process. Slot-die coating operates via a pre-metered liquid coating mechanism. The thickness of the wet coated layer (t_wet) is therefore significantly determined by the width of coating (w), the volumetric pump rate (Q), and the coating speed, or relative speed between the slot-die and the substrate during coating (v), according to t_wet = Q / (w · v). Increasing the pump rate increases the thickness of the wet layer, while increasing the coating speed or coating width decreases the wet layer thickness. The coating width is typically a fixed value for a given slot-die process. 
Hence, pump rate and coating speed can be used to calculate, control, and adjust the wet film thickness in a highly predictable manner. However, deviation from this idealized relationship can occur in practice due to non-ideal behavior of materials and process components; for example when using highly viscoelastic fluids, or a sub-optimal process setup where fluid creeps up the slot-die component rather than transferring fully to the substrate. The final thickness of the dry layer after solvent evaporation (t_dry) is further determined by the solids concentration of the precursor solution (c) and the volumetric density of the coated material in its final form (ρ), via t_dry = t_wet · c / ρ, where t_wet is the wet layer thickness. Increasing the solids content of the precursor solution increases the thickness of the dry layer, while using a more dense material results in a thinner dry layer for a given concentration. Film quality control As with all solution processed coating methods, the final quality of a thin film produced via slot-die coating depends on a wide array of parameters both intrinsic and external to the slot-die itself. These parameters can be broadly categorized into: Coating window effects, determining the stability of fluid transfer between the slot-die and substrate in an ideal slot-die process isolated from external imperfections Downstream process effects, determining the behavior of the coating fluid on the substrate surface after exiting the slot-die component External effects, determining the degree to which the coating apparatus is capable of delivering the ideal coating process characterized by the pre-metered slot-die coating mechanism and the coating window of a given process Coating window parameters Under ideal conditions, the potential to achieve a defect-free film via slot-die is entirely governed by the coating window of a given process. The coating window is a multivariable map of key process parameters, describing the range over which they can be applied together to achieve a defect-free film. 
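The pre-metered relations described above are simple enough to sketch numerically. The following Python snippet, using hypothetical process values of my choosing (not from any specific machine), applies the wet-thickness relation t_wet = Q / (w · v) and the dry-thickness relation t_dry = t_wet · c / ρ:

```python
def wet_thickness_um(pump_rate_mm3_s, width_mm, speed_mm_s):
    """Pre-metered wet film thickness in micrometers: t_wet = Q / (w * v)."""
    return pump_rate_mm3_s / (width_mm * speed_mm_s) * 1000.0

def dry_thickness_um(wet_um, solids_g_cm3, density_g_cm3):
    """Dry thickness after solvent evaporation: t_dry = t_wet * c / rho."""
    return wet_um * solids_g_cm3 / density_g_cm3

# Hypothetical example: 5 mm^3/s through a 50 mm wide die at 10 mm/s
t_wet = wet_thickness_um(5.0, 50.0, 10.0)   # 10 um wet film
t_dry = dry_thickness_um(t_wet, 0.1, 1.0)   # 1 um dry film at 10% solids loading
print(t_wet, t_dry)
```

Doubling the pump rate doubles both thicknesses, while doubling the coating speed halves them, matching the qualitative behavior described in the text.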
Understanding the coating window behavior of a typical slot-die process enables operators to observe defects in a slot-die coated layer and intuitively determine a course of action for defect resolution. The key process parameters used to define the coating window typically include: The ratio of slot-die height to wet film thickness (G / t_wet) The volumetric pump rate (Q) The coating speed, or relative speed of the substrate (v) The capillary number of the coating liquid (Ca) The difference in pressure across the upstream and downstream faces of the meniscus (ΔP) The coating window can be visualized by plotting two such key parameters against each other while assuming the others to remain constant. In an initial simple representation, the coating window can be described by plotting the relationship between viable pump rates and coating speeds for a given process. Excessive pumping or insufficient coating speeds result in spilling defects, where coating liquid spreads outside of the desired coating area, while coating too quickly or pumping insufficiently results in breakup defects of the meniscus. The pump rate and coating speed can therefore be adjusted to directly compensate for these defects, though changing these parameters also affects wet film thickness via the pre-metered coating mechanism. Implicit in this relationship is the effect of the slot-die height parameter, as this affects the distance over which the meniscus must be stretched while remaining stable during coating. Raising the slot-die higher can thus counteract spilling defects by stretching the meniscus further, while lowering the slot-die can counteract streaking and breakup defects by reducing the gap that the meniscus must breach. Other helpful coating window plots to consider include the relationship between fluid capillary number and slot-die height, as well as the relationship between pressure across the meniscus and slot-die height. 
The former is particularly relevant when considering changes in fluid viscosity and surface tension (i.e. the effect of coating various materials with significantly different rheology), while the latter is relevant in the context of applying a vacuum box at the upstream face of the meniscus to stabilize the meniscus against breakup. Downstream process effects In reality, the final quality of a slot-die coated film is heavily influenced by a variety of factors beyond the parameter boundaries of the ideal coating window. Surface energy effects and drying effects are examples of common downstream effects with a significant influence on final film morphology. Sub-optimal matching of surface energy between the substrate and coating fluid can cause dewetting of the liquid film after it has been applied to the substrate, resulting in pinholes or beading of the coated layer. Sub-optimal drying processes are also often noted to influence film morphology, resulting in increased thickness at the edge of a film caused by the coffee ring effect. Surface energy and downstream processing must therefore be carefully optimized to maintain the integrity of the slot-die coated layer as it moves through the system, until the final thin film product can be collected. External effects Slot-die coating is a highly mechanical process in which uniformity of motion and high hardware tolerances are critical to achieving uniform coatings. Mechanical imperfections such as jittery motion in the pump and coating motion systems, poor parallelism between the slot-die and substrate, and external vibrations in the environment can all lead to undesired variations in film thickness and quality. Slot-die coating apparatus and its environment must therefore be suitably specified to meet the needs of a given process and avoid hardware- and environment-derived defects in the coated film. 
Applications Industrial applications Slot-die coating was originally developed for the commercial production of photographic films and papers. In the past several decades it has become a critical process in the production of adhesive films, flexible packaging, transdermal and oral pharmaceutical patches, LCD panels, multi-layer ceramic capacitors, lithium-ion batteries and more. Research applications With growing interest in the potential of nanomaterials and functional thin film devices, slot-die coating has become increasingly applied in the sphere of materials research. This is primarily attributed to the flexibility, predictability and high repeatability of the process, as well as its scalability and origin as a proven industrial technique. Slot-die coating has been most notably employed in research related to flexible, printed, and organic electronics, but remains relevant in any field where scalable thin film production is required. Examples of research enabled by slot-die coating include: Thin film solar cells, to produce electron transport layers, hole transport layers, photoactive layers, and passivating layers in perovskite, organic, quantum dot and multi-junction photovoltaic devices Solid state and next-gen batteries, to produce electrodes, solid electrolytes, ion selective membranes, protective coatings, and interface modification coatings Fuel cells and water electrolysis, to produce electrolytes and electrode catalyst coatings Flexible touch-sensitive surfaces, to produce transparent conductive films OLED devices, to produce electron transport layers, hole transport layers, and electroactive layers Printed diagnostics and molecular sensors, to produce active layers and ion selective membranes Microfluidics and lab-on-a-chip devices, to produce hydrophobic/hydrophilic surface coatings for enhanced liquid flow Water purification, to produce nanofiltration membranes Biobased and biodegradable packaging, to produce multilayer barrier foils from 
sustainable materials References Materials science Coatings
Slot-die coating
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
3,718
[ "Coatings", "Applied and interdisciplinary physics", "Materials science", "nan" ]
66,111,842
https://en.wikipedia.org/wiki/Themis%20programme
The Themis programme is an ongoing European Space Agency programme, carried out by prime contractor ArianeGroup, aiming to develop a prototype reusable rocket first stage and to conduct demonstration flights. The prototype rocket will also be called Themis. Context Themis is expected to provide valuable information on the economic value of reusability for the European government space program and develop technologies for potential use on future European launch vehicles. Themis will be powered by ESA's Prometheus rocket engine. Two possible landing sites have been mentioned in discussions surrounding the project: The former Diamant launch complex, which will be used for the flight testing phase; The Ariane 5 launch complex, which will become available after the transition from the Ariane 5 to Ariane 6. The estimated program timeline is as follows: 2020: Basic stage testing, composed of tank filling and ground support equipment tests. 2021: Prometheus engine testing 2022: Low-altitude hop tests (short flights up from and down to the launch site) 2023: Initial flight test 2023–2024: Loop tests (repeated flights of the reusable demonstration vehicle) 2025: Full flight envelope test Suborbital flight tests were slated to begin as early as 2023 at Europe's Spaceport in Kourou, French Guiana, but have been delayed. Eventually, lessons learned with Themis' development will pave the way for developing the European reusable launcher Ariane Next, which should first fly in the 2030s. History On 15 December 2020, ESA signed a contract worth €33 million with ArianeGroup in France for the "Themis Initial Phase". This first phase of the Themis programme involves development of the flight vehicle technologies and test bench and static fire demonstrations in Vernon, France. It also includes the preparation of the ground segment at the Esrange Space Center in Kiruna, Sweden, for the first hop tests and any associated flight vehicle modifications. 
On 22 June 2023, the first hot-fire test of the Prometheus engine, as a part of the Themis first stage demonstrator, was successfully conducted in Vernon, France. Landing leg testing began in July 2024. See also SpaceX reusable launch system development program References External links ESA Themis website Space programs European space programmes Spaceflight technology Reusable launch systems Partially reusable space launch vehicles Space launch vehicles of Europe
Themis programme
[ "Engineering" ]
493
[ "Space programs", "European space programmes" ]
66,113,656
https://en.wikipedia.org/wiki/Branched%20flow
Branched flow refers to a phenomenon in wave dynamics, that produces a tree-like pattern involving successive mostly forward scattering events by smooth obstacles deflecting traveling rays or waves. Sudden and significant momentum or wavevector changes are absent, but accumulated small changes can lead to large momentum changes. The path of a single ray is less important than the environs around a ray, which rotate, compress, and stretch around in an area preserving way. Even more revealing are groups, or manifolds of neighboring rays extending over significant zones. Starting rays out from a point but varying their direction over a range, one to the next, or from different points along a line all with the same initial directions are examples of a manifold. Waves have analogous launching conditions, such as a point source spraying in many directions, or an extended plane wave heading on one direction. The ray bending or refraction leads to characteristic structure in phase space and nonuniform distributions in coordinate space that look somehow universal and resemble branches in trees or stream beds. The branches take non-obvious paths through the refracting landscape that are indirect and nonlocal results of terrain already traversed. For a given refracting landscape, the branches will look completely different depending on the initial manifold. Examples Two-dimensional electron gas Branched flow was first identified in experiments with a two-dimensional electron gas. Electrons flowing from a quantum point contact were scanned using a scanning probe microscope. Instead of usual diffraction patterns, the electrons flowed forming branching strands that persisted for several correlation lengths of the background potential. Ocean dynamics Focusing of random waves in the ocean can also lead to branched flow. The fluctuation in the depth of the ocean floor can be described as a random potential. 
A tsunami wave propagating in such a medium will form branches which carry huge energy densities over long distances. This mechanism may also explain some statistical discrepancies in the occurrence of freak waves. Light propagation Given the wave nature of light, its propagation in random media can produce branched flow too. Experiments with laser beams in soap bubbles have shown this effect, which has also been proposed to control light focusing in a disordered medium. Flexural waves in elastic plates Flexural waves travelling in elastic plates also produce branched flows. Disorder, in this case, appears in the form of inhomogeneous flexural rigidity. Other examples Other examples where branched flow has been proposed to happen include microwave radiation of pulsars refracted by interstellar clouds, the Zeldovich model for the large structure of the universe and electron-phonon interaction in metals. Dynamics: Kick and drift map The dynamical mechanism that originates the branch formation can be understood by means of the kick and drift map, an area preserving map defined by: p_{n+1} = p_n − ∇V(x_n), x_{n+1} = x_n + p_{n+1}, where n accounts for the discrete time, x and p are position and momentum respectively, and V is the potential. The equation for the momentum is called the "kick" stage, whereas the equation for the position is the "drift". Given an initial manifold in phase space, it can be iterated under the action of the kick and drift map. Typically, the manifold stretches and folds (although keeping its total area constant) forming cusps or caustics and stable regions. These regions of phase space with high concentration of trajectories are precisely the branches. Scaling properties of branched flow in random potentials When plane waves or parallel trajectories propagate through a weak random medium, several caustics can arise at more or less regularly ordered positions. 
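A minimal numerical sketch of one kick and drift step (using an illustrative one-dimensional potential of my own choosing, not from the cited literature) makes the map's area-preserving character explicit: the Jacobian determinant of a single iteration comes out as 1 regardless of the potential:

```python
import math

EPS = 0.1  # illustrative potential strength

def grad_V(x):
    """Derivative of a sample smooth potential V(x) = EPS * cos(x)."""
    return -EPS * math.sin(x)

def kick_drift(x, p):
    """One step of the map: kick the momentum, then drift the position."""
    p_new = p - grad_V(x)   # kick: p_{n+1} = p_n - V'(x_n)
    x_new = x + p_new       # drift: x_{n+1} = x_n + p_{n+1}
    return x_new, p_new

def jacobian_det(x, p, h=1e-6):
    """Finite-difference determinant of the step's Jacobian in (x, p)."""
    x1, p1 = kick_drift(x + h, p)
    x0, p0 = kick_drift(x - h, p)
    dxdx, dpdx = (x1 - x0) / (2 * h), (p1 - p0) / (2 * h)
    x1, p1 = kick_drift(x, p + h)
    x0, p0 = kick_drift(x, p - h)
    dxdp, dpdp = (x1 - x0) / (2 * h), (p1 - p0) / (2 * h)
    return dxdx * dpdp - dxdp * dpdx

print(jacobian_det(0.7, 0.3))  # approximately 1: phase-space area is preserved
```

Iterating this step over a manifold of initial conditions is the standard way to watch the stretching and folding that produces caustics.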
Taking the direction perpendicular to the flow, the distance separating the caustics is determined by the correlation length of the potential d. Another characteristic length is the distance L downstream where the first generation of caustics appear. Taking into account the energy of the trajectories E and the height of the potential ɛ<<E, it can be argued that the following relation holds: L ∝ d (E/ɛ)^{2/3}. See also Ballistic conduction Caustic (optics) Quantum chaos Rogue wave Semiclassical physics Wave propagation References External links Video: The laser show in a soap bubble (Observation of branched flow of light) Wave mechanics Dynamics (mechanics)
Branched flow
[ "Physics" ]
839
[ "Physical phenomena", "Classical mechanics", "Waves", "Wave mechanics", "Motion (physics)", "Dynamics (mechanics)" ]
66,115,199
https://en.wikipedia.org/wiki/Intersection%20non-emptiness%20problem
The intersection non-emptiness problem, also known as finite automaton intersection problem or the non-emptiness of intersection problem, is a PSPACE-complete decision problem from the field of automata theory. Definitions A non-emptiness decision problem is defined as follows. Given an automaton as input, the goal is to determine whether or not the automaton's language is non-empty. In other words, the goal is to determine if there exists a string that is accepted by the automaton. Non-emptiness problems have been studied in the field of automata theory for many years. Several common non-emptiness problems have been shown to be complete for complexity classes ranging from Deterministic Logspace up to PSPACE. The intersection non-emptiness decision problem is concerned with whether the intersection of given languages is non-empty. In particular, the intersection non-emptiness problem is defined as follows. Given a list of deterministic finite automata as input, the goal is to determine whether or not their associated regular languages have a non-empty intersection. In other words, the goal is to determine if there exists a string that is accepted by all of the automata in the list. Algorithm There is a common exponential time algorithm that solves the intersection non-emptiness problem based on the Cartesian product construction introduced by Michael O. Rabin and Dana Scott. The idea is that all of the automata together form a product automaton such that a string is accepted by all of the automata if and only if it is accepted by the product automaton. Therefore, a breadth-first search (or depth-first search) within the product automaton's state diagram will determine whether there exists a path from the product start state to one of the product final states. Whether or not such a path exists is equivalent to determining if any string is accepted by all of the automata in the list. Note: The product automaton does not need to be fully constructed. 
The automata together provide sufficient information so that transitions can be determined as needed. Hardness The intersection non-emptiness problem was shown to be PSPACE-complete in a work by Dexter Kozen in 1977. Since then, many additional hardness results have been shown. Yet, it is still an open problem to determine whether any faster algorithms exist. References * See an incomplete list of related publications here. Related Deterministic Finite Automaton Emptiness Problem PSPACE-complete List of PSPACE-complete Problems PSPACE-complete problems Automata (computation)
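As an illustrative sketch of this on-the-fly product construction (the example automata are my own, not drawn from the cited works), a breadth-first search over lazily generated product states decides intersection non-emptiness:

```python
from collections import deque

def intersection_nonempty(dfas, alphabet):
    """Each DFA is a triple (start_state, accepting_set, transition_dict),
    where transition_dict maps (state, symbol) -> state.
    Product states are generated lazily during the BFS, so the full
    product automaton is never built up front."""
    start = tuple(s for s, _, _ in dfas)
    queue, seen = deque([start]), {start}
    while queue:
        states = queue.popleft()
        if all(q in acc for q, (_, acc, _) in zip(states, dfas)):
            return True  # reached a product final state
        for sym in alphabet:
            nxt = tuple(d[(q, sym)] for q, (_, _, d) in zip(states, dfas))
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# DFA accepting strings with an even number of 'a's
even_a = (0, {0}, {(0, 'a'): 1, (1, 'a'): 0, (0, 'b'): 0, (1, 'b'): 1})
# DFA accepting strings with an odd number of 'a's
odd_a = (0, {1}, {(0, 'a'): 1, (1, 'a'): 0, (0, 'b'): 0, (1, 'b'): 1})
# DFA accepting strings ending in 'b'
ends_b = (0, {1}, {(0, 'a'): 0, (1, 'a'): 0, (0, 'b'): 1, (1, 'b'): 1})

print(intersection_nonempty([even_a, ends_b], 'ab'))  # True (e.g. "b")
print(intersection_nonempty([even_a, odd_a], 'ab'))   # False
```

The `seen` set bounds the search by the number of reachable product states, which is exponential in the number of input automata in the worst case, consistent with the exponential-time bound stated above.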
Intersection non-emptiness problem
[ "Mathematics" ]
516
[ "PSPACE-complete problems", "Mathematical problems", "Computational problems" ]
76,301,624
https://en.wikipedia.org/wiki/Tilings%20and%20patterns
Tilings and patterns is a book by mathematicians Branko Grünbaum and Geoffrey Colin Shephard published in 1987 by W.H. Freeman. The book was 10 years in development, and upon publication it was widely reviewed and highly acclaimed. Structure and topics The book is concerned with tilings—a partition of the plane into regions (the tiles)—and patterns—repetitions of a motif in the plane in a regular manner. The book is divided into two parts. The first seven chapters define concepts and terminology, establish the general theory of tilings, survey tilings by regular polygons, review the theory of patterns, and discuss tilings in which all the tiles, or all the edges, or all the vertices, play the same role. The last five chapters survey a variety of advanced topics in tiling theory: colored patterns and tilings, polygonal tilings, aperiodic tilings, Wang tiles, and tilings with unusual kinds of tiles. Each chapter opens with an introduction to the topic; this is followed by the detailed material of the chapter, much of it previously unpublished, which is always profusely illustrated, and normally includes examples and proofs. Chapters close with exercises, and a section of notes and references which detail the historical development of the topic. These notes sections are interesting and entertaining, as they discuss the efforts of the previous workers in the field and detail the good (and bad) approaches to the topic. The notes also identify unsolved problems, point out areas of potential application, and provide connections to other disciplines in mathematics, science, and the arts. The book has 700 pages, including a 40-page, 800-entry bibliography, and an index. The book is used as a source on numerous Wikipedia pages. 
Audience In their preface the authors state "We have written this book with three main groups of readers in mind—students, professional mathematicians and non-mathematicians whose interests include patterns and shapes (such as artists, architects, crystallographers and others)." Other reviewers commented as follows: "The most striking feature of the book is its extensive collection of figures, including hundreds of examples of tilings and patterns. The sheer abundance is perhaps one reason why artists and designers have been drawn to it over the years." "Their idea was that the book should be accessible to any reader who is attracted to geometry." Reception Contemporary reviews of the book were overwhelmingly positive. The book was reviewed by 15 journals in the fields of crystallography, mathematics, and the sciences. Quotations from major reviews: Influence The book was praised in later journal articles by multiple authors: The book was also praised in later books by other authors: Editions The hardback original Tilings and patterns was published in 1987. Tilings and patterns - an introduction, a paperback reprint of the first seven chapters of the 1987 original, was published in 1989. In 2016 a second edition of the full text was published by Dover in paperback, with a new preface and an appendix describing progress in the subject since the first edition. The reviewer at MAA Reviews commented "Dover has once again done the mathematical community a service in bringing back such a notable volume." References External links at the Internet Archive at the MacTutor History of Mathematics Archive Mathematics books 1987 non-fiction books Tiling Tessellation
Tilings and patterns
[ "Physics", "Mathematics" ]
671
[ "Tessellation", "Planes (geometry)", "Euclidean plane geometry", "Symmetry" ]
73,370,366
https://en.wikipedia.org/wiki/Cauchy%27s%20limit%20theorem
Cauchy's limit theorem, named after the French mathematician Augustin-Louis Cauchy, describes a property of converging sequences. It states that for a converging sequence the sequence of the arithmetic means of its first $n$ members converges to the same limit as the original sequence, that is, a sequence $(a_n)$ with $a_n \to a$ implies $\frac{a_1 + \cdots + a_n}{n} \to a$. The theorem was found by Cauchy in 1821, subsequently a number of related and generalized results were published, in particular by Otto Stolz (1885) and Ernesto Cesàro (1888). Related results and generalizations If the arithmetic means in Cauchy's limit theorem are replaced by weighted arithmetic means those converge as well. More precisely, for a sequence $(a_n)$ with $a_n \to a$ and a sequence of positive real numbers $(p_n)$ with $\sum_{k=1}^{n} p_k \to \infty$ one has $\frac{p_1 a_1 + \cdots + p_n a_n}{p_1 + \cdots + p_n} \to a$. This result can be used to derive the Stolz–Cesàro theorem, a more general result of which Cauchy's limit theorem is a special case. For the geometric means of a sequence a similar result exists. That is, for a sequence $(a_n)$ with $a_n > 0$ and $a_n \to a$ one has $\sqrt[n]{a_1 \cdots a_n} \to a$. The arithmetic means in Cauchy's limit theorem are also called Cesàro means. While Cauchy's limit theorem implies that for a convergent sequence its Cesàro means converge as well, the converse is not true. That is, the Cesàro means may converge while the original sequence does not. Applying the latter fact on the partial sums of a series allows for assigning real values to certain divergent series and leads to the concept of Cesàro summation and summable series. In this context Cauchy's limit theorem can be generalised into the Silverman–Toeplitz theorem. Proof Let $\varepsilon > 0$. Due to $a_n \to a$ there exists an $N$ with $|a_n - a| < \frac{\varepsilon}{2}$ for all $n > N$. Now for all $n > N$ the above yields $\left|\frac{a_1 + \cdots + a_n}{n} - a\right| \leq \frac{|a_1 - a| + \cdots + |a_N - a|}{n} + \frac{n - N}{n} \cdot \frac{\varepsilon}{2}$, and since the first summand tends to $0$ as $n \to \infty$, the whole bound is smaller than $\varepsilon$ for sufficiently large $n$. References Further reading Sen-Ming: Note on Cauchy's Limit Theorem. In: The American Mathematical Monthly, Vol. 57, No. 1 (Jan., 1950), pp. 28–31 (JSTOR) External links Cesàro means and Cauchy's limit theorem at SOS math Cesàro Mean - proof of Cauchy's limit theorem at the ProofWiki Theorems about real number sequences Convergence tests
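The relationship between a sequence and its Cesàro means is easy to check numerically. The two sequences below are illustrative choices of ours, not from the article: one converges, the other diverges but is Cesàro-summable:

```python
def cesaro_means(seq):
    """Return the running arithmetic means (Cesàro means) of a sequence."""
    total, means = 0.0, []
    for n, a in enumerate(seq, start=1):
        total += a
        means.append(total / n)
    return means

N = 100000
convergent = [1.0 + 1.0 / n for n in range(1, N + 1)]   # a_n -> 1
oscillating = [(-1.0) ** n for n in range(1, N + 1)]    # diverges

print(cesaro_means(convergent)[-1])    # close to 1, as the theorem predicts
print(cesaro_means(oscillating)[-1])   # 0.0: Cesàro means converge anyway
```

The second sequence illustrates the remark above that the converse of the theorem fails: the Cesàro means of $(-1)^n$ converge to 0 even though the sequence itself has no limit.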
Cauchy's limit theorem
[ "Mathematics" ]
445
[ "Sequences and series", "Theorems in mathematical analysis", "Mathematical structures", "Convergence tests", "Theorems about real number sequences" ]
73,370,539
https://en.wikipedia.org/wiki/Magnetic%20buoyancy
In plasma physics, magnetic buoyancy is an upward force exerted on magnetic flux tubes that are immersed in electrically conducting fluids and are under the influence of a gravitational force. It acts on magnetic flux tubes in stellar convection zones where it plays an important role in the formation of sunspots and starspots. It was first proposed by Eugene Parker in 1955. Magnetic flux tubes For a magnetic flux tube in hydrostatic equilibrium with the surrounding medium, the tube's interior magnetic pressure $\frac{B^2}{2\mu_0}$ and fluid pressure $p_i$ must be balanced by the fluid pressure $p_e$ of the exterior medium, that is, $p_i + \frac{B^2}{2\mu_0} = p_e$. The magnetic pressure is always positive, so $p_i < p_e$. As such, assuming that the temperature of the plasma within the flux tube is the same as the temperature of the surrounding plasma, the density of the flux tube must be lower than the density of the surrounding medium. Under the influence of a gravitational force, the tube will rise. Instability The magnetic buoyancy instability is a plasma instability that can arise from small perturbations in systems where magnetic buoyancy is present. The magnetic buoyancy instability in a system with magnetic field $\mathbf{B}$ and perturbation wavevector $\mathbf{k}$ has three modes: the interchange instability, where the perturbation wavevector is perpendicular to the magnetic field direction ($\mathbf{k} \perp \mathbf{B}$); the undular instability, sometimes referred to as the Parker instability or magnetic Rayleigh–Taylor instability, where the perturbation wavevector is parallel to the magnetic field direction ($\mathbf{k} \parallel \mathbf{B}$); and the mixed instability, sometimes referred to as the quasi-interchange instability, a combination of the interchange and undular instabilities. References Plasma phenomena Magnetism
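Under the equal-temperature assumption above, the ideal gas law gives the tube's fractional density deficit as the ratio of magnetic pressure to external gas pressure. The field strength and pressure below are made-up illustrative values, not solar figures from the article:

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability, SI units

def density_deficit(B, p_external):
    """Fractional density deficit (rho_e - rho_i)/rho_e of an isothermal flux tube."""
    magnetic_pressure = B**2 / (2 * MU0)   # in pascals for B in tesla
    return magnetic_pressure / p_external

# hypothetical flux-tube field (0.1 T) and surrounding gas pressure (1e5 Pa)
print(density_deficit(B=0.1, p_external=1e5))   # ~0.04: the tube is ~4% lighter
```

The positive deficit is exactly the statement in the text: a positive magnetic pressure forces the interior gas pressure, and hence the density at equal temperature, below the exterior value, so the tube is buoyant.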
Magnetic buoyancy
[ "Physics" ]
321
[ "Plasma phenomena", "Physical phenomena", "Plasma physics" ]
73,378,722
https://en.wikipedia.org/wiki/Shubnikov%20Institute%20of%20Crystallography%20RAS
The A. V. Shubnikov Institute of Crystallography is a scientific institute of the Department of Physical Sciences of the Russian Academy of Sciences (RAS) located in Moscow, Russia. The institute was created by the order of the Presidium of the Academy of Sciences of the USSR on 16 November 1943. The first director of the Institute was a corresponding member of the Academy of Sciences of the USSR Alexei Vasilievich Shubnikov. In 1969, the institute was awarded the Order of the Red Banner of Labour. Areas of scientific interest: Crystal growth: research into crystal formation and growth, development of synthesis methods and creation of equipment for crystallography Crystal structure: study of idealized (theoretical) and real-world crystal structures Crystal properties: study of symmetry and physical properties of crystals; search for crystals with valuable properties History 1925 – Laboratory of crystallography at the Mineralogical Museum (Leningrad). 1932 – Crystallographic section of the Lomonosov Institute of Geochemistry, Mineralogy and Petrography of the USSR Academy of Sciences. 1937 – Crystallographic Laboratory becomes part of the Geological Group of the USSR Academy of Sciences. 1941 – During World War II the majority of academic institutes were evacuated from Moscow to the East. The Crystallographic Laboratory continued its work in 1941-43 in the Sverdlovsk Oblast (in the Urals) where a series of important scientific and applied crystallographic problems were solved. 1943 – The Laboratory returns to Moscow and is transferred to the Department of Physical and Mathematical Sciences and renamed the Institute of Crystallography. 1944 – Organization of the Institute of Crystallography. Alexei V. Shubnikov was appointed Director of the Institute. 1956 – Founding of the journal Kristallografiya in which most of the institute's research is subsequently published. 
This journal is available in English translation as Soviet Physics Crystallography (ISSN 0038-5638) 1956-1992 (vols. 1-37) continued as Crystallography Reports (ISSN 1063-7745) 1993- (vol. 38-) 1957 – Recognition outside the USSR of the establishment of the new field of antisymmetry and colour symmetry by A.V. Shubnikov and N.V. Belov 1962 – Boris Konstantinovich Weinstein is appointed Director of the Institute. 1969 – Award of the Order of the Red Banner of Labour. 1998 – Professor Mikhail Kovalchuk elected Director of the Institute. 2016 – The Institute was subsumed within the new «Crystallography and Photonics» Federal Research Center of the Russian Academy of Sciences (KiF RAS) which is now known as the «Crystallography and Photonics» FLNIK. Research Fields Nano- and bio-organic materials: production, synthesis, structure and properties, diagnostic methods using X-ray and synchrotron radiation, electrons, neutrons and atomic force microscopy Fundamental aspects of the formation of crystalline materials and nanosystems, their real structure and properties Creation and study of new crystalline and functional materials References External links Institute of Crystallography home page History of the Institute of Crystallography (2018 in Russian) Institute of Crystallography research fields Crystallography Institutes of the Russian Academy of Sciences Research institutes established in 1943 Crystallography organizations
Shubnikov Institute of Crystallography RAS
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
671
[ "Crystallography", "Condensed matter physics", "Crystallography organizations", "Materials science" ]
56,288,945
https://en.wikipedia.org/wiki/Ilijas%20Farah
Ilijas Farah (born 18 February 1966) is a Canadian-Serbian mathematician and a professor of mathematics at York University in Toronto and at the Mathematical Institute of Serbian Academy of Sciences and Arts, Belgrade, Serbia. His research focuses on applications of logic to operator algebras. Career Farah was born in Sremska Mitrovica, Serbia. He received his BSc and MSc in 1988 and 1992 respectively from Belgrade University and his PhD in 1997 from the University of Toronto. He is a Research Chair in Logic and Operator Algebras at York University, Toronto. Before moving to York University he was an NSERC Postdoctoral Fellow, York University (1997–99), a Hill Assistant Professor at Rutgers University (1999–2000), and a professor at the CUNY Graduate Center and the College of Staten Island (2000–02). Farah was an invited speaker at the International Congress of Mathematicians, Seoul 2014, section on Logic and Foundations, where he presented his work on applications of logic to operator algebras. Awards, distinctions, and recognitions Sacks prize for the best doctorate in Mathematical Logic, 1997 Governor General's gold medal for one of the two best doctorates at the University of Toronto, 1998 The Canadian Association for Graduate Studies/University Microfilms International Distinguished Dissertation Award, for the best dissertation in engineering, medicine and the natural sciences in Canada, 1998. Dean's award for outstanding research, York University, 2006. Faculty Excellence in Research Award (Established Research Award), Faculty of Science, York University, 2017 Sources External links Ilijas Farah: Krajnja proširenja modela (End extensions of models), MSc thesis, Belgrade University 1992. Living people Canadian mathematicians Mathematical logicians Set theorists 1966 births Yugoslav emigrants to Canada Canadian people of Serbian descent
Ilijas Farah
[ "Mathematics" ]
358
[ "Mathematical logic", "Mathematical logicians" ]
56,294,750
https://en.wikipedia.org/wiki/Synthetic%20measure
A synthetic measure (or synthetic indicator) is a value that is the result of combining other metrics, which are measurements of various features. Examples Quality of service There is a method to measure quality of service in hotels. In a related study, the authors aggregate tourist opinions, measured on a scale from 1 to 10. A synthetic measure (indicator) of service quality in each hotel is calculated with the help of an aggregation operator. Project performance Another study proposed to use the classical earned-value parameters EV, PV and AC to carry out a synthetic measure of project performance. Rankings of countries Different normalized stimulants and destimulants were used in research to create a synthetic measure that selects countries with the best and the worst levels of implementation of Europe 2020 targets. References External links Scientific works about synthetic measure on Google Scholar Computational statistics
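As a concrete sketch of the "normalized stimulants and destimulants" construction mentioned for country rankings (the function name, data, and choice of a plain arithmetic mean as the aggregation operator are ours; the cited studies use their own operators):

```python
def synthetic_measure(rows, destimulants=()):
    """Min-max normalise each metric, invert destimulant columns, average per row."""
    cols = list(zip(*rows))
    lows = [min(c) for c in cols]
    highs = [max(c) for c in cols]
    scores = []
    for row in rows:
        normed = []
        for j, v in enumerate(row):
            z = (v - lows[j]) / (highs[j] - lows[j])
            # for a destimulant (lower is better), flip the normalised value
            normed.append(1 - z if j in destimulants else z)
        scores.append(sum(normed) / len(normed))
    return scores

# three hypothetical "countries": two stimulant metrics and one destimulant (column 2)
data = [(70, 2.1, 9.0), (55, 3.4, 4.5), (80, 1.8, 6.0)]
print(synthetic_measure(data, destimulants={2}))
```

Each unit then gets a single score in [0, 1], so the rows can be ranked on one axis even though the underlying metrics have different scales and directions.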
Synthetic measure
[ "Mathematics" ]
162
[ "Computational statistics", "Computational mathematics" ]
56,296,196
https://en.wikipedia.org/wiki/Copper%28I%29%20tert-butoxide
Copper(I) tert-butoxide is an alkoxide of copper(I). It is a white sublimable solid. It is a reagent in the synthesis of other copper compounds. The compound was originally obtained by salt metathesis from lithium tert-butoxide and copper(I) chloride. An octameric form was obtained by alcoholysis of mesitylcopper: 8 CuC6H2Me3 + 8 HOBu-t → 8 HC6H2Me3 + [CuOBu-t]8 References Copper(I) compounds Tert-butyl compounds Alkoxides
Copper(I) tert-butoxide
[ "Chemistry" ]
131
[ "Bases (chemistry)", "Alkoxides", "Functional groups" ]
56,296,491
https://en.wikipedia.org/wiki/Silkhenge
Silkhenge structures are a means of spider reproduction used by one or more currently-unknown species of spider. It typically consists of a central "spire" constructed of spider silk, containing one to two eggs, surrounded by a sort of fence of silk in a circle. Discovery In August 2013, Georgia Tech student Troy Alexander was visiting Tambopata National Reserve in Peru. He found, under a tarpaulin, a tiny bit of silk in a circular pattern approximately one inch in diameter. Upon further investigation of the area, Alexander found three additional similar structures. He posted a picture on Reddit asking for help identifying it. No information was forthcoming, as this turned out to be a completely unknown phenomenon. His discovery acquired the name "silkhenge" because of its similarity to Stonehenge. At the end of that year, an eight-day expedition led by Phil Torres found dozens more examples of this phenomenon, generally on the trunks of bamboo and cecropia trees. Spiderlings hatching from the structures were documented, but like many baby arthropods they lacked the features typically used to identify adults, and none lived to adulthood. DNA tests were also inconclusive, so the species creating these structures remained unidentified. A video was posted on YouTube of spiderlings hatching. One hypothesized purpose of the fence is that it serves to trap mites and other small arthropods known to share the same habitat. This could, in turn, secure a food source that would be easily accessible to the spiderlings upon hatching. It has also been proposed that it protects the eggs and spiderlings from possible predators such as ants. References Spiders Silk Invertebrates of Peru Shelters built or used by animals Eggs 2013 in biology
Silkhenge
[ "Biology" ]
354
[ "Ethology stubs", "Ethology", "Behavior", "Shelters built or used by animals" ]
71,964,454
https://en.wikipedia.org/wiki/Gravitational%20scattering
Gravitational scattering refers to the process by which two or more celestial objects interact through their gravitational fields, causing their trajectories to alter. This phenomenon is fundamental in astrophysics and the study of dynamic systems. When objects like stars, planets, or black holes pass close enough to influence each other’s motions, their paths can shift dramatically. These interactions typically result in either bound systems, like binary star systems, or unbound systems, where the objects continue moving apart after the interaction. An example of a body ejected from a planetary system by this process would be Kuiper belt bodies pushed from the Solar System by Jupiter. Observing gravitational scattering Gravitational scattering events are usually studied using simulations and mathematical models of the gravitational field interactions between bodies. One significant feature of gravitational scattering is the effect of energy exchange. For instance, a high-velocity object may transfer some of its kinetic energy to a slower-moving object, resulting in a slingshot effect. This principle is utilized in space exploration for gravitational assists, where spacecraft gain momentum by passing close to a planet. Observing gravitational scattering has provided insight into many astrophysical phenomena. In dense regions like star clusters or galactic cores, gravitational scattering plays a role in star formation and the distribution of stellar populations. For instance, hypervelocity stars, which are ejected from their galaxies, are often a result of gravitational scattering involving massive objects like black holes. In more extreme cases, close interactions between compact objects, such as black holes, can lead to the emission of gravitational waves, detectable by instruments like the Laser Interferometer Gravitational-Wave Observatory (LIGO). 
Gravitational scattering is analyzed through both Newtonian mechanics and general relativity, with the latter being necessary for systems involving high mass or velocity. Gravitational scattering impacts Gravitational scattering can cause orbits to change or even cause celestial bodies to depart their native planetary systems. A possible mechanism that may move planets over large orbital radii is gravitational scattering by larger planets or, in a protoplanetary disk, gravitational scattering by over-densities in the fluid of the disk. In the case of the Solar System, Uranus and Neptune may have been gravitationally scattered onto larger orbits by close encounters with Jupiter and/or Saturn. Systems of exoplanets can undergo similar dynamical instabilities following the dissipation of the gas disk that alter their orbits and in some cases result in planets being ejected or colliding with the star. Planets scattered gravitationally can end on highly eccentric orbits with perihelia close to the star, enabling their orbits to be altered by the gravitational tides they raise on the star. The eccentricities and inclinations of these planets are also excited during these encounters, providing one possible explanation for the observed eccentricity distribution of the closely orbiting exoplanets. The resulting systems are often near the limits of stability. As in the Nice model, systems of exoplanets with an outer disk of planetesimals can also undergo dynamical instabilities following resonance crossings during planetesimal-driven migration. The eccentricities and inclinations of the planets on distant orbits can be damped by dynamical friction with the planetesimals with the final values depending on the relative masses of the disk and the planets that had gravitational encounters. See also Planetary migration References Astrophysics Effects of gravity
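The slingshot energy exchange mentioned above can be illustrated with the simplest head-on, one-dimensional case: in the planet's rest frame an idealised flyby only reverses the spacecraft's velocity, but transforming back to the Sun's frame shows a net speed gain at the planet's expense. The velocities are made-up values, and a real assist is a three-dimensional hyperbolic flyby:

```python
def slingshot_exit_speed(v_craft, v_planet):
    """Head-on 1-D gravity assist: reverse velocity in the planet frame, transform back."""
    # v' = -(v_craft - v_planet) + v_planet = 2*v_planet - v_craft
    return 2 * v_planet - v_craft

v_in = -10.0   # km/s, spacecraft moving toward the planet
u = 13.0       # km/s, planet's orbital speed
v_out = slingshot_exit_speed(v_in, u)
print(v_out)   # 36.0 km/s: kinetic energy gained from the planet's orbital motion
```

The spacecraft's speed rises from 10 to 36 km/s in this idealised case, while momentum conservation removes a correspondingly tiny amount of orbital energy from the far more massive planet.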
Gravitational scattering
[ "Physics", "Astronomy", "Mathematics", "Engineering" ]
669
[ "Astrodynamics", "Classical mechanics", "Astrophysics", "Astronomical dynamical systems", "Aerospace engineering", "Dynamical systems", "Celestial mechanics", "Astronomical objects", "Astronomical sub-disciplines", "Stellar dynamics" ]
71,964,566
https://en.wikipedia.org/wiki/Offshore%20freshened%20groundwater
Offshore freshened groundwater (OFG) is water that contains a Total Dissolved Solid (TDS) concentration lower than sea water, and which is hosted in porous sediments and rocks located in the sub-seafloor. OFG systems have been documented around the world and have an estimated global volume of around 1 × 10⁶ km³. Their study is important because they may represent an unconventional source of potable water for human populations living near the coast, especially in areas where groundwater resources are scarce or facing stress. Elements and processes OFG usually presents salinity values < 33 Practical Salinity Units (PSU). They are located at water depth < 100 m and within 55 km of the coast in both siliciclastic and carbonatic aquifers along active and passive margins. OFG systems are usually composed of multiple OFG bodies which are altogether < 2 km thick (Fig. 1). The principal emplacement mechanisms for OFG systems are (from the most common to the least common): Meteoric recharge by rainfall, which can be either a paleo-meteoric event during sea level low stands or active meteoric recharge via permeable connections between offshore and onshore aquifers (Fig. 2). Diagenesis due to post‐sedimentary alteration processes leading to the release of freshwater and accumulation in deeply buried marine sediments in high pressure and temperature conditions. Sub‐glacial and pro‐glacial injection such as sub-glacial melting, sub-glacial drainage systems, reversal of groundwater flow direction with respect to modern flow patterns. Decomposition of gas hydrates as a result of changes in temperature or pressure, which lead to the release of low salinity pore water. The geological setting has a major control on OFG development: the majority are hosted in coarser siliciclastic materials, with porosity values around 30% to 60%, constrained by a permeability contrast (predominantly sand to clay). 
Topographic gradients have a major impact on OFG emplacement, as topography-driven flow is one of the most important mechanisms controlling discharge of freshwater offshore. Investigation Different methods can be used to characterize and assess OFG occurrences: Drilling, coring and wireline logging methods allow both sediments (e.g. granulometry and hydraulic properties) and pore water to be characterized via geochemical analysis (e.g. salinity and chloride concentrations). Resistivity, porosity, density, sonic velocities, gamma ray content, temperature, and flow meter measurements can then be determined via in-situ measurements. Reflection seismic methods provide indirect constraints on heterogeneities controlling OFG distribution. Electromagnetic (EM) surveying, usually collected using controlled source electromagnetic (CSEM) systems, is used to discriminate saturated regions with saline water (less resistive) from those containing fresh groundwater (more resistive) (Fig. 3). Numerical modelling approaches can quantify OFG emplacement in continental shelf environments over geologic time scales. Applications and potential of OFG OFG systems are receiving increasing attention as they may be used as an unconventional source of potable water in coastal areas, where groundwater resources are being rapidly depleted or contaminated. 60% of the global population lives in areas of water stress, defined as the ratio of total water withdrawals to available renewable surface and groundwater supplies (Fig. 1). Climate change, rapid population growth, and urbanization have a negative impact on water stress, especially in coastal communities. Therefore, OFG has been proposed as an alternative source of freshwater to mitigate water scarcity and groundwater depletion in areas of water stress. References Marine geology Hydrology Water
Offshore freshened groundwater
[ "Chemistry", "Engineering", "Environmental_science" ]
749
[ "Water", "Hydrology", "Environmental engineering" ]
71,965,326
https://en.wikipedia.org/wiki/Oper%20%28mathematics%29
In mathematics, an oper is a principal connection, or in more elementary terms a type of differential operator. They were first defined and used by Vladimir Drinfeld and Vladimir Sokolov to study how the KdV equation and related integrable PDEs correspond to algebraic structures known as Kac–Moody algebras. Their modern formulation is due to Drinfeld and Alexander Beilinson. History Opers were first defined, although not named, in a 1981 Russian paper by Drinfeld and Sokolov on Equations of Korteweg–de Vries type, and simple Lie algebras. They were later generalized by Drinfeld and Beilinson in 1993, later published as an e-print in 2005. Formulation Abstract Let $G$ be a connected reductive group over the complex plane $\mathbb{C}$, with a distinguished Borel subgroup $B \subset G$. Set $N = [B, B]$, so that $H = B/N$ is the Cartan group. Denote by $\mathfrak{b}$ and $\mathfrak{g}$ the corresponding Lie algebras. There is an open $B$-orbit $\mathbf{O} \subset \mathfrak{g}/\mathfrak{b}$ consisting of vectors stabilized by the radical $N \subset B$ such that all of their negative simple-root components are non-zero. Let $X$ be a smooth curve. A G-oper on $X$ is a triple $(\mathcal{F}, \nabla, \mathcal{F}_B)$ where $\mathcal{F}$ is a principal $G$-bundle, $\nabla$ is a connection on $\mathcal{F}$ and $\mathcal{F}_B$ is a $B$-reduction of $\mathcal{F}$, such that the one-form $\nabla/\mathcal{F}_B$ takes values in $\mathbf{O}_{\mathcal{F}_B}$. Example Fix $X = \mathbb{P}^1$, the Riemann sphere. Working at the level of the algebras, fix $\mathfrak{g} = \mathfrak{sl}(2, \mathbb{C})$, which can be identified with the space of traceless $2 \times 2$ complex matrices. Since $X$ has only one (complex) dimension, a one-form has only one component, and so an $\mathfrak{sl}(2, \mathbb{C})$-valued one-form is locally described by a matrix of functions $\begin{pmatrix} a(z) & b(z) \\ c(z) & -a(z) \end{pmatrix}$ where $a, b, c$ are allowed to be meromorphic functions. Denote by $\mathcal{C}$ the space of $\mathfrak{sl}(2, \mathbb{C})$-valued meromorphic functions, together with an action by meromorphic functions valued in the associated Lie group $SL(2, \mathbb{C})$. The action is by a formal gauge transformation: $g \cdot A = g A g^{-1} - (\partial_z g) g^{-1}$. Then opers are defined in terms of a subspace of these connections. Denote by $\overline{\mathcal{C}}$ the space of connections with $c(z) = 1$. Denote by $N(z)$ the subgroup of meromorphic functions valued in $N$ of the form $\begin{pmatrix} 1 & f(z) \\ 0 & 1 \end{pmatrix}$ with $f$ meromorphic. Then for $g \in N(z)$ and $A \in \overline{\mathcal{C}}$ it holds that $g \cdot A \in \overline{\mathcal{C}}$. It therefore defines an action. The orbits of this action concretely characterize opers. 
However, generally this description only holds locally and not necessarily globally. Gaudin model Opers on $\mathbb{P}^1$ have been used by Boris Feigin, Edward Frenkel and Nicolai Reshetikhin to characterize the spectrum of the Gaudin model. Specifically, for a $\mathfrak{g}$-Gaudin model, and defining ${}^L\mathfrak{g}$ as the Langlands dual algebra, there is a bijection between the spectrum of the Gaudin algebra generated by operators defined in the Gaudin model and an algebraic variety of opers. References Differential operators Connection (mathematics)
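The local description in the example can be made concrete with the canonical form that is standard in the $\mathfrak{sl}_2$ literature (the notation here is a sketch of ours, not the article's): acting with the unipotent gauge group on a connection whose lower-left entry is $1$, one can always reach a unique representative

```latex
\nabla \;=\; \partial_z + \begin{pmatrix} 0 & v(z) \\ 1 & 0 \end{pmatrix},
```

so that, locally, $\mathfrak{sl}_2$-opers are parametrized by a single meromorphic function $v$; eliminating the second component of a flat section then recovers the second-order operator $\partial_z^2 - v(z)$, which is the sense in which an oper is "a type of differential operator".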
Oper (mathematics)
[ "Mathematics" ]
542
[ "Mathematical analysis", "Differential operators" ]
52,000,501
https://en.wikipedia.org/wiki/Inexact%20differential%20equation
An inexact differential equation is a differential equation of the form (see also: inexact differential) $M(x,y)\,dx + N(x,y)\,dy = 0$ where $\frac{\partial M}{\partial y} \neq \frac{\partial N}{\partial x}$. The solution to such equations came with the invention of the integrating factor by Leonhard Euler in 1739. Solution method In order to solve the equation, we need to transform it into an exact differential equation. In order to do that, we need to find an integrating factor $\mu$ to multiply the equation by. We'll start with the equation itself. $M\,dx + N\,dy = 0$, so we get $\mu M\,dx + \mu N\,dy = 0$. We will require $\mu$ to satisfy $\frac{\partial (\mu M)}{\partial y} = \frac{\partial (\mu N)}{\partial x}$. We get $\frac{\partial \mu}{\partial y} M + \mu \frac{\partial M}{\partial y} = \frac{\partial \mu}{\partial x} N + \mu \frac{\partial N}{\partial x}$. After simplifying we get $M \mu_y - N \mu_x + \left( M_y - N_x \right) \mu = 0$. Since this is a partial differential equation, it is mostly extremely hard to solve, however in some cases we will get either $\mu(x)$ or $\mu(y)$, in which case we only need to find $\mu$ with a first-order linear differential equation or a separable differential equation, and as such either $\mu(x) = e^{\int \frac{M_y - N_x}{N}\,dx}$ or $\mu(y) = e^{\int \frac{N_x - M_y}{M}\,dy}$. References Further reading External links A solution for an inexact differential equation from Stack Exchange a guide for non-partial inexact differential equations at SOS math Equations Ordinary differential equations Differential calculus Discrete mathematics Mathematical structures
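The $\mu(x)$ case can be checked symbolically with SymPy. The example equation below is a standard textbook one chosen by us, not taken from the article:

```python
import sympy as sp

x, y = sp.symbols('x y')

# illustrative inexact equation: (3xy + y^2) dx + (x^2 + xy) dy = 0
M = 3*x*y + y**2
N = x**2 + x*y

# inexactness check: dM/dy != dN/dx
assert sp.simplify(sp.diff(M, y) - sp.diff(N, x)) != 0

# (M_y - N_x)/N depends on x alone, so an integrating factor mu(x) exists
ratio = sp.simplify((sp.diff(M, y) - sp.diff(N, x)) / N)   # simplifies to 1/x
mu = sp.exp(sp.integrate(ratio, x))                        # simplifies to x

# after multiplying through by mu, the equation is exact
exactness = sp.simplify(sp.diff(mu*M, y) - sp.diff(mu*N, x))
print(mu, exactness)   # x 0
```

With $\mu = x$ the equation becomes $(3x^2 y + x y^2)\,dx + (x^3 + x^2 y)\,dy = 0$, which integrates to the implicit solution $x^3 y + \tfrac{1}{2} x^2 y^2 = C$.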
Inexact differential equation
[ "Mathematics" ]
209
[ "Discrete mathematics", "Mathematical structures", "Calculus", "Mathematical objects", "Equations", "Differential calculus" ]
52,002,298
https://en.wikipedia.org/wiki/National%20Centre%20for%20Biotechnology%20Education
The National Centre for Biotechnology Education (NCBE) is a national resource centre at the University of Reading to teach pre-university biotechnology in schools in the UK. It was founded in 1990. History It began as the National Centre for School Biotechnology (NCSB) in 1985 in the Department of Microbiology. It became the NCBE in 1990. For many years it was the only centre in Europe that was devoted to the teaching of biotechnology in schools. The Dolan DNA Learning Center had been set up in the USA. It was set up as an education project by the Society for General Microbiology, now the Microbiology Society. Money from the Laboratory of the Government Chemist funded the establishment of the National Centre for School Biotechnology (NCSB). Money also came from the Gatsby Charitable Foundation. For the first five years, the UK government's DTI was involved, but from 1990 onwards wanted the organization to become self-supporting as it had to cut back on budgets. By 1992 the government provided no money for the centre. Structure The site was set up in former buildings of the University of Reading's Department of Microbiology. In 2001, the NCBE moved to new purpose-built premises in the University’s School of Food Biosciences; however, the creation of a new School of Pharmacy at the University forced the NCBE to move to new premises elsewhere on the campus in 2005. Function It reaches out to schools to give up-to-date information on biotechnology. Biotechnology is a rapidly evolving subject, and schools cannot keep up-to-date with all that they would be required to know. It produces educational resources. It runs the Microbiology in Schools Advisory Committee (MISAC). 
See also Centre for Industry Education Collaboration at York National Centre for Excellence in the Teaching of Mathematics, University of York Science and Plants for Schools, another well-known science resource for UK schools References External links NCBE DNA to Darwin Education resources from the University of Leicester European Initiative for Biotechnology Education 1985 establishments in the United Kingdom Biology education in the United Kingdom Biotechnology in the United Kingdom Biotechnology organizations Educational institutions established in 1985 Genetics education Science education in the United Kingdom Scientific organizations established in 1985 University of Reading
National Centre for Biotechnology Education
[ "Engineering", "Biology" ]
442
[ "Biotechnology organizations", "Biotechnology in the United Kingdom", "Biotechnology by country" ]
63,270,016
https://en.wikipedia.org/wiki/Direction-preserving%20function
In discrete mathematics, a direction-preserving function (or mapping) is a function on a discrete space, such as the integer grid, that (informally) does not change too drastically between two adjacent points. It can be considered a discrete analogue of a continuous function. The concept was first defined by Iimura. Some variants of it were later defined by Yang, Chen and Deng, Herings, van-der-Laan, Talman and Yang, and others. Basic concepts We focus on functions $f: X \to \mathbb{R}^n$, where the domain X is a finite subset of the Euclidean space $\mathbb{R}^n$. ch(X) denotes the convex hull of X. There are many variants of direction-preservation properties, depending on how exactly one defines the "drastic change" and the "adjacent points". Regarding the "drastic change" there are two main variants: Direction preservation (DP) means that, if x and y are adjacent, then for all $i \in \{1, \dots, n\}$: $f_i(x) \cdot f_i(y) \geq 0$. In words: every component of the function f must not switch signs between adjacent points. Gross direction preservation (GDP) means that, if x and y are adjacent, then $f(x) \cdot f(y) \geq 0$. In words: the direction of the function f (as a vector) does not change by more than 90 degrees between adjacent points. Note that DP implies GDP but not vice versa. Regarding the "adjacent points" there are several variants: Hypercubic means that x and y are adjacent iff they are contained in some axes-parallel hypercube of side-length 1. Simplicial means that x and y are adjacent iff they are vertices of the same simplex, in some triangulation of the domain. Usually, simplicial adjacency is much stronger than hypercubic adjacency; accordingly, hypercubic DP is much stronger than simplicial DP. Specific definitions are presented below. All examples below are for dimension $n = 2$ and for X = { (2,6), (2,7), (3, 6), (3, 7) }. Properties and examples Hypercubic direction-preservation A cell is a subset of $\mathbb{R}^n$ that can be expressed by $\prod_{i=1}^{n} [a_i, a_i + 1]$ for some $a = (a_1, \dots, a_n) \in \mathbb{Z}^n$. For example, the square $[2,3] \times [6,7]$ is a cell. Two points in $\mathbb{Z}^n$ are called cell connected if there is a cell that contains both of them. 
Hypercubic direction-preservation properties require that the function does not change too drastically in cell-connected points (points in the same hypercubic cell). f is called hypercubic direction preserving (HDP) if, for any pair of cell-connected points x,y in X, for all i: f_i(x) · f_i(y) ≥ 0. The term locally direction-preserving (LDP) is often used instead. The example function fa is DP. Some authors use a variant requiring that, for any pair of cell-connected points x,y in X, for all i: (f_i(x) − x_i) · (f_i(y) − y_i) ≥ 0. A function f(x) is HDP by the second variant, iff the function g(x):=f(x)-x is HDP by the first variant. f is called hypercubic gross direction preserving (HGDP), or locally gross direction preserving (LGDP), if for any pair of cell-connected points x,y in X, f(x) · f(y) ≥ 0. Every HDP function is HGDP, but the converse is not true. The example function fb is HGDP, since the scalar product of every two of its values is non-negative. But it is not HDP, since its second component switches sign between (2,6) and (3,6). Some authors use a variant requiring that, for any pair of cell-connected points x,y in X, (f(x) − x) · (f(y) − y) ≥ 0. A function f(x) is HGDP by the second variant, iff the function g(x):=f(x)-x is HGDP by the first variant. Simplicial direction-preservation A simplex is called integral if all its vertices have integer coordinates, and they all lie in the same cell (so the difference between coordinates of different vertices is at most 1). A triangulation of some subset of R^n is called integral if all its simplices are integral. Given a triangulation, two points are called simplicially connected if there is a simplex of the triangulation that contains both of them. Note that, in an integral triangulation, every pair of simplicially-connected points is also cell-connected, but the converse is not true. For example, consider the cell [2,3] × [6,7]. Consider the integral triangulation that partitions it into two triangles: {(2,6),(2,7),(3,7)} and {(2,6),(3,6),(3,7)}.
The points (2,7) and (3,6) are cell-connected but not simplicially-connected. Simplicial direction-preservation properties assume some fixed integral triangulation of the input domain. They require that the function does not change too drastically in simplicially-connected points (points in the same simplex of the triangulation). This is, in general, a much weaker requirement than hypercubic direction-preservation. f is called simplicial direction preserving (SDP) if, for some integral triangulation of X, for any pair of simplicially-connected points x,y in X, for all i: f_i(x) · f_i(y) ≥ 0. f is called simplicially gross direction preserving (SGDP) or simplicially-local gross direction preserving (SLGDP) if there exists an integral triangulation of ch(X) such that, for any pair of simplicially-connected points x,y in X, f(x) · f(y) ≥ 0. Every HGDP function is SGDP, but HGDP is much stronger: it is equivalent to SGDP w.r.t. all possible integral triangulations of ch(X), whereas SGDP relates to a single triangulation. As an example, the function fc is SGDP by the triangulation that partitions the cell into the two triangles {(2,6),(2,7),(3,7)} and {(2,6),(3,6),(3,7)}, since in each triangle, the scalar product of every two vectors is non-negative. But it is not HGDP, since fc(2,7) · fc(3,6) < 0 for the cell-connected pair (2,7) and (3,6). References Theory of continuous functions Types of functions Discrete mathematics
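The hypercubic definitions above can be checked mechanically on the example grid. The sketch below uses two hypothetical functions f and g (values invented for illustration; they are not the article's fa and fb) to demonstrate that HDP implies HGDP but not conversely:

```python
from itertools import combinations

# All four points of X = {(2,6), (2,7), (3,6), (3,7)} lie in the single
# cell [2,3] x [6,7], so every pair of them is cell-connected.
X = [(2, 6), (2, 7), (3, 6), (3, 7)]
f = {(2, 6): (1, 2), (2, 7): (0, 1), (3, 6): (2, 0), (3, 7): (1, 1)}
g = {(2, 6): (2, -1), (2, 7): (2, 0), (3, 6): (2, 1), (3, 7): (2, 0)}

def is_hdp(func, points):
    """Hypercubic DP: no component switches sign between cell-connected points."""
    return all(func[x][i] * func[y][i] >= 0
               for x, y in combinations(points, 2)
               for i in range(2))

def is_hgdp(func, points):
    """Hypercubic GDP: non-negative scalar product between cell-connected points."""
    return all(sum(a * b for a, b in zip(func[x], func[y])) >= 0
               for x, y in combinations(points, 2))

print(is_hdp(f, X), is_hgdp(f, X))  # True True  (HDP implies HGDP)
print(is_hdp(g, X), is_hgdp(g, X))  # False True (second component of g switches sign)
```

The function g mirrors the behavior of fb described in the article: its second component switches sign between (2,6) and (3,6), yet the scalar product of every pair of its values stays non-negative.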
Direction-preserving function
[ "Mathematics" ]
1,340
[ "Functions and mappings", "Discrete mathematics", "Theory of continuous functions", "Mathematical objects", "Topology", "Mathematical relations", "Types of functions" ]
63,276,409
https://en.wikipedia.org/wiki/Gram%E2%80%93Euler%20theorem
In geometry, the Gram–Euler theorem, Gram-Sommerville, Brianchon-Gram or Gram relation (named after Jørgen Pedersen Gram, Leonhard Euler, Duncan Sommerville and Charles Julien Brianchon) is a generalization of the internal angle sum formula of polygons to higher-dimensional polytopes. The equation constrains the sums of the interior angles of a polytope in a manner analogous to the Euler relation on the number of d-dimensional faces. Statement Let P be an n-dimensional convex polytope. For each k-face F, with k its dimension (0 for vertices, 1 for edges, 2 for faces, etc., up to n for P itself), its interior (higher-dimensional) solid angle α(F) is defined by choosing a small enough (n−1)-sphere centered at some point in the interior of F and finding the surface area contained inside P. Then the Gram–Euler theorem states: Σ_F (−1)^dim(F) α(F) = 0, where the sum runs over all faces F of P, including P itself. In non-Euclidean geometry of constant curvature (i.e. spherical, ε = 1, and hyperbolic, ε = −1, geometry) the relation gains a volume term on the right-hand side, but only if the dimension n is even. The volume term involves the normalized (hyper)volume of the polytope (i.e., the fraction of the n-dimensional spherical or hyperbolic space); the angles also have to be expressed as fractions (of the (n-1)-sphere). When the polytope is simplicial additional angle restrictions known as Perles relations hold, analogous to the Dehn-Sommerville equations for the number of faces. Examples For a two-dimensional polygon, the statement expands into: Σ_v α_v − nπ + 2π = 0, where the first term is the sum of the internal vertex angles, the second term sums over the n edges, each of which has internal angle π, and the final term corresponds to the entire polygon, which has a full internal angle 2π. Equivalently, for a polygon with n edges, Σ_v α_v = (n−2)π. For a polygon on a sphere, the relation gives the spherical surface area or solid angle as the spherical excess: Ω = Σ_v α_v − (n−2)π.
For a three-dimensional polyhedron the theorem reads: Σ_v Ω_v − 2 Σ_e θ_e + 2πF − 4π = 0, where Ω_v is the solid angle at a vertex, θ_e the dihedral angle at an edge (the solid angle of the corresponding lune is twice as big), the third term counts the F faces (each with an interior hemisphere angle of 2π) and the last term is the interior solid angle (full sphere or 4π). History The n-dimensional relation was first proven by Sommerville, Heckman and Grünbaum for the spherical, hyperbolic and Euclidean case, respectively. See also Euler characteristic Dehn-Sommerville equations Angular defect Gauss-Bonnet theorem References Polytopes Real algebraic geometry Geometry
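The alternating sums above can be verified numerically. The sketch below checks the three-dimensional relation for the unit cube and the two-dimensional relation for a triangle:

```python
import math

# Gram-Euler check for the unit cube (n = 3):
# 8 vertices, each an octant of the sphere, solid angle pi/2;
# 12 edges, dihedral angle pi/2 (lune solid angle 2 * pi/2);
# 6 faces, each with interior hemisphere angle 2*pi;
# the body itself, interior solid angle 4*pi (the full sphere).
alternating_sum = (8 * (math.pi / 2)         # k = 0 (vertices)
                   - 12 * 2 * (math.pi / 2)  # k = 1 (edges)
                   + 6 * 2 * math.pi         # k = 2 (faces)
                   - 4 * math.pi)            # k = 3 (the cube)
print(abs(alternating_sum) < 1e-12)  # True

# The two-dimensional case reduces to the polygon angle-sum formula:
# for a triangle, the vertex angles sum to pi, so pi - 3*pi + 2*pi = 0.
triangle_sum = math.pi - 3 * math.pi + 2 * math.pi
print(abs(triangle_sum) < 1e-12)  # True
```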
Gram–Euler theorem
[ "Mathematics" ]
556
[ "Mathematical theorems", "Mathematical problems", "Geometry", "Theorems in geometry" ]
62,367,764
https://en.wikipedia.org/wiki/H2BK5ac
H2BK5ac is an epigenetic modification to the DNA packaging protein Histone H2B. It is a mark that indicates the acetylation at the 5th lysine residue of the histone H2B protein. H2BK5ac is implicated in the maintenance of stem cells and in colon cancer. Lysine acetylation and deacetylation Proteins are typically acetylated on lysine residues and this reaction relies on acetyl-coenzyme A as the acetyl group donor. In histone acetylation and deacetylation, histone proteins are acetylated and deacetylated on lysine residues in the N-terminal tail as part of gene regulation. Typically, these reactions are catalyzed by enzymes with histone acetyltransferase (HAT) or histone deacetylase (HDAC) activity, although HATs and HDACs can modify the acetylation status of non-histone proteins as well. The regulation of transcription factors, effector proteins, molecular chaperones, and cytoskeletal proteins by acetylation and deacetylation is a significant post-translational regulatory mechanism. These regulatory mechanisms are analogous to phosphorylation and dephosphorylation by the action of kinases and phosphatases. Not only can the acetylation state of a protein modify its activity but there has been recent suggestion that this post-translational modification may also crosstalk with phosphorylation, methylation, ubiquitination, sumoylation, and others for dynamic control of cellular signaling. In the field of epigenetics, histone acetylation (and deacetylation) have been shown to be important mechanisms in the regulation of gene transcription. Histones, however, are not the only proteins regulated by posttranslational acetylation. Nomenclature H2BK5ac indicates acetylation of lysine 5 on the histone H2B protein subunit (H2B: histone H2B family; K: lysine; 5: position of the residue, counted from the N-terminal end; ac: acetyl group). Histone modifications The genomic DNA of eukaryotic cells is wrapped around special protein molecules known as histones. The complexes formed by the looping of the DNA are known as chromatin.
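The naming scheme explained in the Nomenclature section is regular enough to parse programmatically. The sketch below is illustrative only; the pattern and the set of modification suffixes are assumptions for this example, not an official standard:

```python
import re

# Histone mark names: histone family, residue letter, position, modification.
# The suffix set (ac, me1-3, ub, ph) is an illustrative assumption.
MARK = re.compile(r"^(H2A|H2B|H3|H4)([A-Z])(\d+)(ac|me\d?|ub|ph)$")

def parse_mark(name):
    """Split a histone mark name like 'H2BK5ac' into its components."""
    m = MARK.match(name)
    if m is None:
        raise ValueError(f"unrecognized histone mark: {name}")
    histone, residue, position, mod = m.groups()
    return {"histone": histone, "residue": residue,
            "position": int(position), "modification": mod}

print(parse_mark("H2BK5ac"))
# {'histone': 'H2B', 'residue': 'K', 'position': 5, 'modification': 'ac'}
```

The same parser handles related marks mentioned in the article, e.g. H3K36me3.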
The basic structural unit of chromatin is the nucleosome: this consists of the core octamer of histones (H2A, H2B, H3 and H4) as well as a linker histone and about 180 base pairs of DNA. These core histones are rich in lysine and arginine residues. The carboxyl (C) terminal ends of these histones contribute to histone-histone interactions, as well as histone-DNA interactions. The amino (N) terminal charged tails are the site of the post-translational modifications, such as the one seen in H2BK5ac. Epigenetic implications The post-translational modification of histone tails by either histone modifying complexes or chromatin remodeling complexes is interpreted by the cell and leads to complex, combinatorial transcriptional output. It is thought that a histone code dictates the expression of genes by a complex interaction between the histones in a particular region. The current understanding and interpretation of histones comes from two large scale projects: ENCODE and the Epigenomic roadmap. The purpose of the epigenomic study was to investigate epigenetic changes across the entire genome. This led to chromatin states which define genomic regions by grouping the interactions of different proteins and/or histone modifications together. Chromatin states were investigated in Drosophila cells by looking at the binding location of proteins in the genome. Use of ChIP-sequencing revealed regions in the genome characterised by different banding. Different developmental stages were profiled in Drosophila as well, with an emphasis placed on histone modification relevance. A look into the data obtained led to the definition of chromatin states based on histone modifications. The human genome was annotated with chromatin states. These annotated states can be used as new ways to annotate a genome independently of the underlying genome sequence. This independence from the DNA sequence enforces the epigenetic nature of histone modifications.
Chromatin states are also useful in identifying regulatory elements that have no defined sequence, such as enhancers. This additional level of annotation allows for a deeper understanding of cell specific gene regulation. Trophoblast stem cell epithelial mediation MAP3K4 controls the activity of CBP histone acetyltransferase, which acetylates histones H2A and H2B to maintain the trophoblast stem cell epithelial phenotype. Trophoblasts are cells forming the outer layer of a blastocyst, which provide nutrients to the embryo and develop into a large part of the placenta. They are formed during the first stage of pregnancy and are the first cells to differentiate from the fertilized egg. Methods The histone mark acetylation can be detected in a variety of ways: 1. Chromatin Immunoprecipitation Sequencing (ChIP-sequencing) measures the amount of DNA enrichment once bound to a targeted protein and immunoprecipitated. It is well optimized and is used in vivo to reveal DNA-protein binding occurring in cells. ChIP-Seq can be used to identify and quantify various DNA fragments for different histone modifications along a genomic region. 2. Micrococcal Nuclease sequencing (MNase-seq) is used to investigate regions that are bound by well positioned nucleosomes. The micrococcal nuclease enzyme is employed to identify nucleosome positioning. Well positioned nucleosomes show enrichment of sequences. 3. Assay for transposase accessible chromatin sequencing (ATAC-seq) is used to look into regions that are nucleosome free (open chromatin). It uses hyperactive Tn5 transposase to highlight nucleosome localisation. See also Histone code Histone acetylation References Epigenetics Post-translational modification
H2BK5ac
[ "Chemistry" ]
1,298
[ "Post-translational modification", "Gene expression", "Biochemical reactions" ]
62,371,798
https://en.wikipedia.org/wiki/Sanford%20Consortium
The Sanford Consortium is a non-profit biomedical research institute in La Jolla, California. It was formed from a collaboration between the Burnham Biomedical Research Institute, Salk Institute for Biological Studies, Scripps Research, La Jolla Institute for Immunology, and the University of California, San Diego. The institute was previously known as the Sanford Consortium for Regenerative Medicine (SCRM). The consortium's research building is 136,700 square feet. It is located on a 7.5-acre property adjacent to, and leased from, UC San Diego. The development and construction of the building and facilities from 2007 through 2011 cost $106,572,300, plus $21,028,500 in equipment acquisition costs. References Medical research institutes in California Cancer organizations based in the United States Stem cell research La Jolla, San Diego Medical and health organizations based in California Independent research institutes
Sanford Consortium
[ "Chemistry", "Biology" ]
184
[ "Translational medicine", "Tissue engineering", "Stem cell research" ]
62,375,030
https://en.wikipedia.org/wiki/Non-catalytic%20tyrosine-phosphorylated%20receptor
Non-catalytic tyrosine-phosphorylated receptors (NTRs), also called immunoreceptors or Src-family kinase-dependent receptors, are a group of cell surface receptors expressed by leukocytes that are important for cell migration and for the recognition of abnormal cells or structures and the initiation of an immune response. These transmembrane receptors are not grouped into the NTR family based on sequence homology, but because they share a conserved signalling pathway utilizing the same signalling motifs. A signaling cascade is initiated when the receptors bind their respective ligands, resulting in cell activation. For this, tyrosine residues in the cytoplasmic tails of the receptors have to be phosphorylated, hence the receptors are referred to as tyrosine-phosphorylated receptors. They are called non-catalytic receptors because they have no intrinsic tyrosine kinase activity and cannot phosphorylate their own tyrosine residues. Phosphorylation is instead mediated by additionally recruited kinases. A prominent member of this receptor family is the T-cell receptor. Features and Classification Members of the non-catalytic tyrosine-phosphorylated receptor family share a couple of common features. The most prominent feature is the presence of conserved tyrosine-containing signalling motifs, such as immunoreceptor tyrosine-based activation motifs (ITAMs), in the cytoplasmic tail of the receptors. The receptor signaling pathway is initiated by ligand binding to the extracellular domains of the receptor. Upon binding, the tyrosine residues in the signaling motifs are phosphorylated by membrane-associated tyrosine kinases. The receptors themselves have no intrinsic tyrosine kinase activity. The phosphorylated NTRs, in turn, initiate specific intracellular signaling cascades. The signaling cascade is down-regulated by dephosphorylation by protein tyrosine phosphatases.
Additional characteristics of the receptor family are a rather small (< 20 nm) extracellular domain and the binding to ligands that are anchored to solid surfaces or membranes of other cells. NTRs are exclusively expressed in leukocytes. Based on those features, about 100 distinct NTRs have been identified. The table below lists different classes of NTRs. Members of a class have a high sequence homology and typically share the same gene locus. Structure NTRs are transmembrane glycoproteins with typically small ectodomains of 6 to 10 nm. NTRs have either N-terminal or C-terminal ectodomains. Ectodomains have a high sequence diversity between members. Many NTRs have an unstructured intracellular domain which contains tyrosine residues that can be phosphorylated by tyrosine kinases. Some receptors in this family, however, lack a cytoplasmic tail and therefore associate with adaptor proteins containing the same tyrosine residues. Adaptor proteins associate with their respective NTR through their transmembrane helices, which carry oppositely charged residues. The cytoplasmic domains do not contain any intrinsic tyrosine kinase activity. Conserved tyrosine-containing motifs Tyrosine residues of NTRs mostly appear in conserved amino acid motifs with defined sequence signatures that define whether the receptor plays an activating or inhibiting role in the cell. These motifs allow binding of proteins containing an SH2 domain. Motifs are intrinsic or lie in the associated adaptor subunits. Immunoreceptor tyrosine-based activation motifs (ITAMs) are short amino acid sequences that contain two tyrosine residues (Y) arranged as Yxx(L/I)x6-8Yxx(L/I), where L and I indicate leucine or isoleucine residues respectively (according to amino acid abbreviations), x denotes any amino acid, and a subscript 6-8 indicates a spacer sequence of 6 to 8 amino acids in length. ITAMs recruit activating kinases to the NTR.
Inhibitory signals are transduced by immunoreceptor tyrosine-based inhibitory motifs (ITIMs), with the signature (S/I/V/L)xYxx(I/V/L), which bind cytoplasmic tyrosine phosphatases. Immunoreceptor tyrosine-based switch motifs (ITSMs), with the signature TxYxx(I/V), may induce both activating and inhibitory signals. These motifs are confined to SLAM family receptors. Finally, immunoglobulin tail tyrosine motifs (ITTMs), with a YxNM signature, have been found to have a costimulatory effect. Signalling Pathway Biophysics of receptor-ligand binding The signalling pathway of an NTR is induced upon binding to its respective ligand. NTRs, as they are defined, have a short ectodomain (5 - 10 nm) and bind to surface-anchored ligands. For binding to take place, the membrane of the leukocyte has to come into close proximity to the surface with the ligand. The receptor-ligand complex, once bound, spans a dimension of about 10-16 nm. Ectodomains of other surface molecules can be much larger (up to 50 nm), therefore the membrane has to bend towards the ligand, which introduces tension within the membrane. Additionally, large pulling forces can act on the complex, changing dissociation rates of the complex. Receptor triggering NTR triggering, the initial step of the NTR signalling pathway, involves phosphorylation of the tyrosine residues in the cytoplasmic domain of the receptor or the associated adaptor protein. Once phosphorylated, these residues recruit further signalling proteins. Phosphorylation of the tyrosine residues is performed by membrane-anchored Src family kinases (SFKs) (e.g. Lck, Fyn, Lyn, Blk), while receptor protein tyrosine phosphatases (RPTPs) (e.g. CD45, CD148) mediate the dephosphorylation of the same residues. SFKs and RPTPs are constitutively active. In an untriggered state, the activity of phosphatases dominates, keeping NTRs in an unphosphorylated state, and thus preventing signal initiation.
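The consensus signatures described above map directly onto regular expressions (with x as any residue). In the sketch below, the tail sequence is synthetic, constructed only so that it contains one ITAM and one ITIM:

```python
import re

# Consensus motifs from the article, as regular expressions.
ITAM = re.compile(r"Y..[LI].{6,8}Y..[LI]")  # Yxx(L/I)x6-8Yxx(L/I)
ITIM = re.compile(r"[SIVL].Y..[IVL]")       # (S/I/V/L)xYxx(I/V/L)
ITSM = re.compile(r"T.Y..[IV]")             # TxYxx(I/V)

# Synthetic cytoplasmic tail, built to carry one ITAM and one ITIM.
tail = "GGYNELNLGRREEYDVLGGSAYAQLGG"
print([m.group() for m in ITAM.finditer(tail)])  # ['YNELNLGRREEYDVL']
print([m.group() for m in ITIM.finditer(tail)])  # ['SAYAQL']
```

Scanning a real receptor tail this way only locates candidate motifs; whether a match is a functional ITAM or ITIM depends on its structural context.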
It has been shown that inhibition of tyrosine phosphatases induces phosphorylation of NTRs and signalling even without ligand binding. It is therefore assumed that a perturbation of the SFK and RPTP balance due to ligand binding, leading to stronger kinase activity and hence accumulation of phosphorylated tyrosine residues, is needed for initiation of downstream signalling. Different mechanisms of how the balance is disturbed upon ligand binding have been suggested. The induced proximity or aggregation model suggests that upon receptor-ligand binding multiple receptors aggregate. SFKs have multiple phosphorylation sites that regulate their catalytic activity. If the kinase is associated with an NTR, aggregation brings two or more SFKs into close proximity, which allows them to phosphorylate each other. Hence, due to receptor aggregation, SFKs are activated, leading to higher kinase activity and increased NTR phosphorylation. Evidence for this model is given by mathematical models and an experiment where artificially cross-linking NTRs led to signal induction. However, there is not sufficient evidence that receptor aggregation happens in vivo. According to the conformational change model, binding of a ligand induces a conformational change in the receptor such that the cytosolic domain becomes accessible for kinases. Thus phosphorylation is only possible when the receptor is bound to a ligand. However, structural studies have failed to show conformational changes. The kinetic segregation model proposes that RPTPs are physically excluded from NTR-ligand-binding regions. Ectodomains of RPTPs are much larger than those of NTRs and SFKs. The interaction between ligand and receptor brings the membranes into close contact, and the gap between the membranes is too narrow for membrane proteins with large ectodomains to diffuse into the region. This increases the ratio of SFKs over RPTPs in the region surrounding the receptor-ligand complex.
Any non-bound NTR would diffuse out of these regions too quickly to induce a downstream signal. Evidence for this model is given by the observation that in T cells, the phosphatases CD45 and CD148 segregate from the T-cell receptor upon ligand binding. It was also shown that truncation of phosphatase ectodomains as well as elongation of ligand ectodomains reduces the segregation and inhibits NTR triggering. Similar findings have been reported for other NTRs, including CD28 family receptors and Dectin-1. Downstream signaling pathway Phosphorylated tyrosine residues in the cytoplasmic tails of NTRs serve as docking sites for the SH2 domains of cytosolic signalling proteins. Once bound to the NTR they are activated by phosphorylation and can propagate the signal. Whether a receptor acts as an inhibitor or activator depends on the conserved tyrosine-containing motifs present in its cytoplasmic domain. Activatory motifs (ITAMs) bind kinases, such as Syk family kinases (e.g. ZAP70 for the T-cell receptor), that phosphorylate a range of substrates, thereby inducing a signalling cascade leading to the activation of the leukocyte. Inhibitory motifs (ITIMs), on the other hand, recruit the cytoplasmic tyrosine phosphatases SHP1 and SHP2 and the phosphatidylinositol phosphatase SHIP-1. The phosphatases can attenuate the signal by dephosphorylating a broad range of signalling molecules. Signal integration from multiple NTRs At any given time, multiple NTR types can be engaged with their respective ligands, inducing activating, costimulatory as well as inhibitory signals. The functional response of the leukocytes depends on the integration of these signals. References Receptors Immune system Immune receptors Transmembrane proteins Leukocytes Cell signaling
Non-catalytic tyrosine-phosphorylated receptor
[ "Chemistry", "Biology" ]
2,132
[ "Organ systems", "Receptors", "Immune system", "Signal transduction" ]
62,375,361
https://en.wikipedia.org/wiki/Frontiers%20of%20Biogeography
Frontiers of Biogeography is a peer-reviewed open access scientific journal publishing biogeographical science, with the academic standards expected of a journal operated by and for an academic society. It is published on behalf of the International Biogeographical Society, using the eScholarship Publishing platform. The current editor-in-chief is Robert J. Whittaker. Abstracting and indexing The journal is abstracted and indexed in: References External links Open access journals Ecology journals Geography journals Biogeography Academic journals established in 2009 English-language journals
Frontiers of Biogeography
[ "Biology", "Environmental_science" ]
113
[ "Environmental science journals", "Biogeography", "Ecology journals", "Environmental science journal stubs" ]
62,375,753
https://en.wikipedia.org/wiki/Biogeographia
Biogeographia: The Journal of Integrative Biogeography is a peer-reviewed open access scientific journal publishing original research and reviews in biogeography since 1970. It is published on behalf of the Italian Biogeography Society (Società Italiana di Biogeografia), using the eScholarship Publishing platform. The current editor-in-chief is Diego Fontaneto. Abstracting and indexing The journal is abstracted and indexed in: Notable articles The four most highly cited papers with more than 150 citations by the end of 2020 are: Vigna Taglianti, A., Audisio, P. A., Biondi, M., Bologna, M. A., Carpaneto, G. M., De Biase, A., ... & Zapparoli, M. Halffter, G. Vigna Taglianti, A., Audisio, P. A., Belfiore, C., Biondi, M., Bologna, M. A., Carpaneto, G. M., ... & Zoia, S. Sindaco, R., Venchi, A., Carpaneto, G. M., & Bologna, M. A. The three most downloaded papers with more than 1000 views by the end of 2020 are: Halffter, G. Amori, G., & Castiglia, R. Bianchi, C. N., & Morri, C. References External links Open access journals Ecology journals Geography journals Academic journals established in 1970 English-language journals
Biogeographia
[ "Environmental_science" ]
337
[ "Environmental science journals", "Ecology journals" ]
62,375,820
https://en.wikipedia.org/wiki/Journal%20of%20Aerospace%20Engineering
The Journal of Aerospace Engineering is a peer-reviewed scientific journal published by the American Society of Civil Engineers that combines civil engineering with aerospace technology (but also incorporates other elements of civil engineering) to develop structures for space and extreme conditions. Topics of interest include aerodynamics, computational fluid dynamics, wind tunnel testing of buildings and structures, aerospace structures and materials, and more. History The journal was previously published under the names Journal of the Aero-Space Transport Division (1962-1966) and Journal of the Air Transport Division (1956-1961). Abstracting and indexing The journal is abstracted and indexed in Ei Compendex, Science Citation Index Expanded, ProQuest databases, the Civil Engineering Database, Inspec, Scopus, and EBSCO databases. External links Aerospace engineering journals American Society of Civil Engineers academic journals
Journal of Aerospace Engineering
[ "Engineering" ]
169
[ "Aerospace engineering journals", "Aerospace engineering" ]
77,712,898
https://en.wikipedia.org/wiki/Agriculture%20in%20ants
Agriculture and domestication are practices undertaken by certain ant species and colonies. These ants use agricultural methods and are known as one of the few animal groups, along with Homo sapiens, to have achieved the level of eusociality necessary to practice agriculture. It is estimated that ants began this practice at least 50 million years ago. The domestication of plant, fungus, and animal species by ants is well documented. For some ant species or groups, this is an activity essential to their survival, particularly in a symbiotic relationship with the cultivated species, especially plants or fungi. Some plants require the presence of ants for their survival and offer benefits to the ants in return, creating a mutualistic relationship between their species. The agricultural practices of ants vary widely from one species to another, but they can engage in creating compost necessary for plant growth, fighting pathogens that affect cultivated species, destroying invasive species that threaten their crops, creating "ant gardens" of up to fifty different plants, optimizing crops by adapting to the solar cycle and other natural cycles, or generally engaging in grooming activities. In some cases, it is believed that ants can achieve productivity levels similar to the early stages of human agriculture. Ants also domesticate numerous animal species, especially aphids and Lepidoptera. Discovered only in 2016, ant farming and agriculture with plants is a rapidly evolving field of discoveries. As of 2022, it is estimated that ants assist in the dispersal of seeds for over 11,000 plant species, are in mutualistic relationships with at least 700 plant species, and engage in purely agricultural processes with hundreds of others. Regarding domesticated animals, more than 1,000 of the 4,000 known species of aphids and around 500 species of Lepidoptera are affected by ant domestication. 
Terminology Although the term "agriculture" may not be entirely appropriate for mutualistic relationships, particularly in cases where a colony is hosted by a plant, such as a tree, in exchange for protection and aid in its survival and growth, its use is well documented in the scientific literature for processes where ants create crops and directly cultivate plants or fungi. The use of the term "domestication" is also well established when ant domestication has led to specific evolutionary changes in the species involved. Causes and prevalence Causes It remains difficult to determine the causes that led different ant species to adopt these behaviors over millions of years of evolution, due to the vast diversity of behaviors depending on the location, the plants, fungi, and animals involved, as well as the great diversity of ant species. However, numerous studies focus particularly on these evolutionary developments, especially in a comparative framework with the human species, to identify commonalities and differences between the two processes. Overall, it seems that leafcutter ant species that developed agricultural practices involving fungi began doing so at least 65-55 million years ago and may have been the first to have engaged in such behavior, though this is not certain. The common ancestor of these species is dated to 65-55 million years ago. According to research from 2017, this change occurred in dry habitats, notably in South America. Prevalence As with the causes that led to such behavioral evolution in certain ants, it remains difficult to assess the overall prevalence of these behaviors. As of 2022, it is estimated that ants assist in the dispersal of seeds for over 11,000 plant species, are in mutualistic relationships with at least 700 plant species, and engage in purely agricultural processes with hundreds of others.
Regarding domesticated animals, more than 1,000 of the 4,000 known species of aphids and around 500 species of Lepidoptera are affected by ant domestication. In comparison, Homo sapiens engages in farming and agriculture with '260 plant, 470 animal and 100 mushroom-forming fungal species'. Plant farming by ants was only discovered in 2016, making it a very young and rapidly evolving field of study. However, these phenomena appear to involve hundreds of different ant species out of the approximately 13,000 species discovered to date. In 2022, it was believed that approximately 37 ant species engaged in true plant cultivation, without considering domestication and fungiculture. Processes Ants, depending on the species, engage in a wide range of behaviors and practices. Some species, such as leafcutter ants, form symbiotic relationships with certain fungi. In these cases, the queen of a future colony often carries with her a clone of the fungus from her original colony, which her new colony will cultivate and tend in order to ensure their survival and food supply. They attack pathogens that affect these fungi, defend them against potential threats, and generally engage in grooming to maintain the health of the fungi. This allowed leafcutter ants to become the dominant herbivore group in South America and enabled them to create massive colonies containing millions of workers and thousands of chambers. The agricultural practices of ants vary widely from one species to another, but they can engage in creating compost necessary for plant growth, fighting pathogens that affect cultivated species, destroying invasive species that threaten their crops, creating "ant gardens" of up to fifty different plants, optimizing crops by adapting to the solar cycle and other natural cycles, or generally engaging in grooming activities. In some cases, it is believed that ants can achieve productivity levels similar to the early stages of human agriculture.
Along with Homo sapiens and a very small number of other animal groups, ants are also known to have achieved the domestication of other animals, in their case aphids and Lepidoptera. Some ant species, such as Philidris nagasau, were recently shown to create large plant gardens containing dozens of different plants, which they use and tend. This gave them the ability to develop very large colonies, with results similar to the beginnings of human agriculture achieved during the Neolithic period. References Agronomy Food industry Ants Symbiosis Ethology Behavior
Agriculture in ants
[ "Biology" ]
1,222
[ "Behavior", "Symbiosis", "Biological interactions", "Behavioural sciences", "Ethology" ]
77,714,151
https://en.wikipedia.org/wiki/Cheyava%20Falls
Cheyava Falls is a rock discovered on Mars by NASA's Perseverance rover during its exploration of Jezero Crater. The rock, named after a Grand Canyon waterfall, has drawn significant attention due to its potential as an indicator of ancient life on Mars. The rover's instruments detected organic compounds within the rock, which are essential for all known life. According to NASA, Cheyava Falls "possesses qualities that fit the definition of a possible indicator of ancient life". "Cheyava Falls" is characterized by large white calcium sulfate veins and bands of reddish material, indicative of hematite, a mineral that gives Mars its rusty color. The veins are "filled with millimeter-size crystals of olivine". The rock features millimeter-sized off-white splotches surrounded by black material, resembling "leopard spots". These spots contain iron and phosphate, elements often associated with microbial life. On the seven-step Confidence of Life Detection (CoLD) scale used by NASA astrobiologists, the rock sits at Step One, "Detect possible signal". The rock's composition suggests it was once exposed to water. However, there are alternative, non-biological explanations for its features. The rover has analyzed the rock using various instruments, but its team concludes that a definitive understanding will require returning the sample to Earth for more in-depth study. The "arrowhead-shaped rock" was found at the northern edge of the Neretva Vallis area on July 18, 2024, and measures 1 meter by 0.6 meters. On July 21, Perseverance took a sample of the rock, its 22nd core sample, which could be delivered to Earth by a future mission. The rover took a "selfie" with the rock on July 23. Gallery References Mars 2020 Rocks on Mars Astrobiology
Cheyava Falls
[ "Astronomy", "Biology" ]
378
[ "Origin of life", "Speculative evolution", "Astrobiology", "Biological hypotheses", "Astronomical sub-disciplines" ]
77,715,265
https://en.wikipedia.org/wiki/Pixel%20stealing%20attack
In cybersecurity, pixel stealing attacks are a group of timing side-channel attacks that allow cross-origin websites to infer how a particular pixel is displayed to the user. History One of the earliest known instances of a pixel-stealing attack was described by Paul Stone in a white paper presented at the Black Hat Briefings conference in 2013. Stone's approach exploited a quirk in how browsers rendered images encoded in the SVG format. SVG images support various features, including the ability to apply SVG filters that transform image content. Stone discovered that by measuring the time it took for a browser to render a morphological filter over a known set of pixels and then comparing this with the time taken to render the same filter over a pixel from an unknown website, he could infer the color of that pixel. This allowed him to build a grayscale image of the other website, which could then be used to leak information about that website. References Client-side web security exploits
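The calibrate-then-classify logic described in the History section above can be sketched as a simplified simulation. This is not an actual exploit: it runs in Python rather than browser JavaScript, and the timing values, the size of the dark/light gap, and all function names are hypothetical, chosen only to illustrate how content-dependent render times let an attacker infer a pixel's color:

```python
import random

random.seed(0)

# Hypothetical per-pixel filter render times (ms). The content-dependent
# timing gap is the core of the side channel described above.
def render_time(pixel_is_dark: bool) -> float:
    base = 4.0 if pixel_is_dark else 9.0  # assumed dark/light timing gap
    return base + random.gauss(0, 0.5)    # measurement noise

def calibrate(samples: int = 50) -> tuple[float, float]:
    """Measure filter timing over pixels whose color is already known."""
    dark = sum(render_time(True) for _ in range(samples)) / samples
    light = sum(render_time(False) for _ in range(samples)) / samples
    return dark, light

def infer_pixel(observed: float, dark_mean: float, light_mean: float) -> str:
    """Classify an unknown pixel by the nearest calibrated timing mean."""
    if abs(observed - dark_mean) < abs(observed - light_mean):
        return "dark"
    return "light"

dark_mean, light_mean = calibrate()
secret = True  # the cross-origin pixel the attacker cannot read directly
guess = infer_pixel(render_time(secret), dark_mean, light_mean)
```

Repeating this measurement for every pixel of interest is what allowed Stone to reconstruct a grayscale image of the cross-origin page.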
Pixel stealing attack
[ "Technology" ]
201
[ "Computer security stubs", "Computing stubs" ]
77,717,243
https://en.wikipedia.org/wiki/N1-Acetyl-N2-formyl-5-methoxykynuramine
{{DISPLAYTITLE:N1-Acetyl-N2-formyl-5-methoxykynuramine}} N1-Acetyl-N2-formyl-5-methoxykynuramine (AMFK) is a metabolite of melatonin and an antioxidant. References Antioxidants Metabolites Acetamides Methoxy compounds Formamides Benzaldehydes
N1-Acetyl-N2-formyl-5-methoxykynuramine
[ "Chemistry" ]
97
[ "Metabolites", "Metabolism" ]
77,723,569
https://en.wikipedia.org/wiki/Greg%20Landsberg
Greg Landsberg is an American particle physicist. He is the Thomas J. Watson Sr. Professor of Physics at Brown University. Biography Landsberg obtained his doctor of philosophy from SUNY Stony Brook in 1994, supervised by Paul Grannis. He worked at the DØ experiment at Fermilab during and after his PhD. He joined the faculty of Brown University in 1998. In 2001 Landsberg became an Alfred P. Sloan Fellow. In the same year, he wrote with Savas Dimopoulos about the generation of microscopic black holes in the Large Hadron Collider (LHC). Landsberg was also the Deputy Physics Coordinator of DØ, before he led the Brown team to join the CMS experiment at CERN in 2004. In 2009 he was elected a Fellow of the American Physical Society. In 2010, Landsberg proposed a theory in which the universe's dimensions grow as it expands. He also participated in the search for the Higgs boson. From 2012 to 2013, he was the Physics Coordinator of the CMS experiment. He became the Thomas J. Watson Sr. Professor of Physics at Brown University in 2014. References Brown University faculty Sloan Research Fellows 21st-century American physicists Particle physicists Stony Brook University alumni Year of birth missing (living people) Living people
Greg Landsberg
[ "Physics" ]
272
[ "Particle physicists", "Particle physics" ]
67,578,931
https://en.wikipedia.org/wiki/Allomothering%20in%20humans
Allomothering, or allomaternal care, is parental care provided by group members other than the genetic mother. This is a common feature of many cooperative breeding species, including some mammal, bird and insect species. Allomothering in humans is universal, but the members who participate in allomothering vary from culture to culture. Common allomothers are grandmothers, older siblings, extended family members, members of religious communities and ritual kin (such as godparents). The life history strategy of humans involves a long period of dependency, termed "secondary altriciality" by Adolf Portmann, which should result in longer interbirth intervals. However, compared to other primates, humans have short interbirth intervals resulting in numerous overlapping dependents, all without an increase in child mortality. Allomothering explains how humans can have children spaced only a few years apart and manage to raise multiple children at once. Food provisioning, help with childcare and investment in the child's learning can be provided by members of the community to help ease the mother's investment. Allomothering participants and specific helping behavior vary widely from group to group. Theory Cooperative breeding is a reproductive strategy that has been observed in birds, insects, and mammals. In cooperative breeding, parents receive support in childrearing from other members of the community. The parental support of helpers other than parents is known as allomaternal care (from the perspective of the mother; in species where both maternity and paternity are known it is often referred to as alloparenting/allopaternal care) and can be carried out by kin and non-kin members of the community. Cooperative breeding is often seen to arise in monogamous mating systems with high coefficients of relatedness between group members and where females give birth to multiple offspring. 
The presence of allomothers is associated with reductions in interbirth intervals, increases in litter size, and higher annual rates of survival. Cooperative breeding reduces the investment necessary from the parents of an offspring, allowing the freed-up resources to be directed towards producing more offspring. Studies among meerkats suggest that while helpers incur great short-term costs, long-term costs are minimal or non-existent. Help by allomothers can be conditional upon health at the onset of the reproductive cycle, such as weight in meerkats, and helpers are able to modify their behavior to offset some of the short-term costs, i.e., increased time spent foraging and alternating the breeding cycles in which they help. Cockburn shows that helpers among birds may benefit from increased numbers of non-descendant kin, increased access to territory and resources, increased access to mating options, increases in social status and longer time to acquire skills. Cooperative breeding is common in many mammal species, but the type of care can vary greatly. Isler and van Schaik's survey of allomothering in placental mammals found that 46% engaged in no help, 10% provided only protection, 3% provided only allonursing, 24% provided all forms of help other than provisioning and 16% provided complete help, including provisioning (see Figure 1, pg. 55). Allomaternal help with provisioning was most common among members of Carnivora and Primates. The study also looked at correlations between allomaternal care and brain size, finding mixed results among many of the orders; however, among Carnivora there was a correlation between male help and brain size, and in Primates there was a correlation between allonursing and brain size. It has also been demonstrated that increased allomothering among chimpanzees is associated with a reduction in lactation effort and shorter weaning times.  
Hrdy argues that in order for cooperative breeding to occur, the underlying neural circuitry must be present in both sexes as well as in pre- and post-reproductive aged individuals. This neurocircuitry includes a tendency to be attracted to, hold and protect infants, all of which are common among primates in various degrees. Evolution among humans Humans produce offspring that develop slowly and are incapable of moving or retrieving food for a lengthy period after birth. In animals with infant altriciality and extended development, interbirth intervals are often longer to allow the mother to fully invest in one offspring before conceiving another. However, humans do not follow the pattern seen in other apes: human life history features relatively short interbirth intervals, resulting in child-rearing costs that the mother cannot meet alone. Allomaternal care is a universal feature of human reproduction that helps provide additional childcare and other resources for the parents. While short interbirth intervals can negatively impact child outcomes, increasing the risk of child mortality, allomaternal care can reduce these negative outcomes while allowing interbirth intervals to remain short. When compared to other living primates, humans have short interbirth intervals (IBI), averaging 3.1 years in natural fertility populations, and total fertility rates (TFR) of 6.1 offspring, versus interbirth intervals and total fertility rates for chimpanzees (IBI = 5.5, TFR = 2), gorillas (IBI = 3.9, TFR = 3) and orangutans (IBI = 9.2). Humans also have a unique period in their prolonged dependency, childhood, which allows for increased development and social learning. The emergence of cooperative breeding in early hominins may explain several key life history traits, such as larger brains, and our demographic success around the globe. 
Isler and van Schaik analyzed life history traits such as first age of reproduction and interbirth intervals in primates and applied the resulting data to ancestral hominins, determining that without cooperative breeding the first age of reproduction for Australopithecus afarensis would be around 12.6 years and for Qafzeh Homo sapiens around 26.1 years. The predicted interbirth intervals would range from 6 to 8.4 years. Assuming some form of allomaternal care as early as A. afarensis, first age of reproduction comes down to 10.9 years and interbirth intervals are reduced to approximately 3.4 years. For Qafzeh H. sapiens, first age of reproduction is reduced to 22.6 years and interbirth intervals are around 4.7 years. Based on their study, Isler and van Schaik conclude that a change in lifestyle resulting in substantial increases in allomothering occurred early in the Homo genus. This implies that cooperative breeding has been an important part of human history for nearly two million years. Demographic reconstructions of hunter-gatherer populations in the Pleistocene attempt to analyze the probability of having allomothers in a community and rely on assumptions about residential patterns in early humans. Kurland and Sparks (pers. comm. in Hrdy, 2006) provide estimates of several relatives' presence assuming different mortality rates. Under low mortality rates, the chance of a primipara, a woman giving birth to her first child, having her mother around is approximately 50%, and under high mortality rates the chance drops to 25%. The chance of having an older sibling around is much higher, as are the chances of having cousins. While this indicates that a new mother would have at least some close kin nearby to help with childcare, residential patterns may change these likelihoods. If humans were majority patrilocal, where males remain in natal groups and females disperse, we would expect fewer maternal kin to be available for allomaternal assistance. 
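The Isler and van Schaik estimates quoted above amount to a simple subtraction; a minimal sketch (the dictionary layout and function name are my own, the figures are theirs):

```python
# First age of reproduction (AFR, years) estimated by Isler and van Schaik
# for two fossil hominins: (without allomaternal care, with allomaternal care).
afr_estimates = {
    "A. afarensis": (12.6, 10.9),
    "Qafzeh H. sapiens": (26.1, 22.6),
}

def afr_reduction(species: str) -> float:
    """Years by which allomaternal care lowers the estimated first age of reproduction."""
    without_care, with_care = afr_estimates[species]
    return round(without_care - with_care, 1)
```

This gives a reduction of 1.7 years for A. afarensis and 3.5 years for Qafzeh H. sapiens, the life-history shifts the authors attribute to cooperative breeding.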
This suggests that mothers would need to rely more on paternal kin and unrelated individuals as potential allomothers, or for at least some of the mother's kin to temporarily reside with the parents during periods of high need. However, we know that humans are ambilocal or bilocal, meaning either males or females may disperse, which can impact the availability of maternal or paternal kin. Bilocality may have led to the diverse use of both kin and non-kin as allomothers in humans. Allomothering also appears to be tied to the environment, with increased levels of allomothering seen in regions of reduced climate predictability and lower average temperatures and precipitation. Cognitive & hormonal implications Human females respond to social conditions during and immediately after pregnancy and may decide to abandon, neglect or commit infanticide if social support is lacking, if the infant is not considered viable (low birth weight, twins) or under certain extreme conditions (such as famine). Nearly all individuals show a response to infant crying and laughing, including fathers, virgin females, older children and even strangers. Oxytocin and prolactin, hormones released by mothers during lactation that may facilitate bonding, are also produced by males and others in the presence of a crying infant. Infants appear to have adaptations that coevolved with those of older children and adults. Infants that look healthier (larger or plump) have higher rates of survival. Newborn humans are also predisposed to seek out faces and will imitate faces they see or respond to attention by smiling and laughing. Smiling and laughing appear to be attempts to draw in potential allomothers, as well as a way to bond with parents. Likewise, older infants can learn the intentions of others. Humans show advanced theory of mind, and the ability to read and predict others' behavior and point of view may be in part due to the high levels of allomothering seen in humans. 
Young infants engage other humans through laughter, and they quickly learn to discriminate in favor of those who show them more attention and care for them, indicating that they quickly come to understand who intends to care for them. There is also evidence of cognitive and socialization implications of allomothering in nonhuman primates. Emotional health implications Allomothering can be helpful for the emotional health of both mothers and infants. Mothers: It has been shown that a mother's social network and the support provided through the cooperative breeding system, or "availability of others", may mitigate not only the physical burden of pregnancy but also the psychological and emotional burden of motherhood. This reduced pressure on mothers contributes to improvements in their well-being and parental investment. A lack of support from social networks can lead mothers to abandon their children and increases the risk of postpartum depression. In this vein, Hagen (1999) suggested that post-partum depression might help mothers signal their need for social support. For example, research showed that the emotional support of infants' maternal grandmothers was associated with less depression in the mothers. Notably, the positive influence of maternal grandmothers' support in decreasing mothers' depression risk was not related to the grandmothers' geographic proximity. Infants: Mothers' post-partum depression can reduce parental investment and responsiveness to infants' cues. Ignoring infants' cues can, in turn, affect their secure attachment and cognitive functions. Hence, the caregiving network can mitigate these negative consequences. Cameron (1998) noted that, among mammals generally, some offspring may suckle the breast for soothing in distressing situations rather than, or as well as, for nutritive purposes. 
Hewlett and Winn then suggested that the need for soothing in distressing situations may be one reason for the evolution of allomaternal breast suckling (allo-nursing/allo-suckling), although this is less accepted than other proposed explanations of allomothering. Immunity implications Allo-nursing refers specifically to the breastfeeding of other individuals' offspring, both kin and non-kin. Since allo-nursing is costly for females, there have been many hypotheses to explain the possible reasons for its evolution. One of these is the "neuroendocrine function of allosuckling" (NFA). According to the NFA hypothesis, both infants and allomothers can benefit from allo-nursing. For example, as a benefit to the allo-nurser, an infant, through suckling, can stimulate the nipple to produce prolactin. Prolactin production can cause fertility suppression and immune system improvement in allo-nursers. In addition, suppressing fertility can be beneficial for females and their own offspring, especially when their own infants do not stimulate their mothers' nipples. This hypothesis also predicts that allo-nursing, with its increased prolactin and the positive consequences for the allo-nurser's immune system, is more common in mammals living in environments with a higher load of parasites. Infants can also acquire different bacteria by being breastfed by allomothers. The bacteria can be transmitted by skin-to-skin contact and by milk through the process of allo-nursing. Indeed, there is a critical period in infants' development in which the gastrointestinal tract is colonized by bacteria that shape the gastrointestinal microbiome (GIM). In line with the hygiene hypothesis and related ideas such as the "old friends" hypothesis, this period of colonization is considered critical because colonization helps the development and education of the immune system. 
The human milk microbiome (HMM) is found to be important for the colonization of the GIM and consequently for the development of the immune system in infants, because during the critical period the HMM is the first and most reliable source of bacteria, and this modulates the infant's immune system. Studies of small-scale societies, including foragers (hunter-gatherers) and horticulturalists, showed that allo-nursing is more common among forager women who live in tropical environments (with higher loads of parasites and bacterial infections) than in more arid environments where infectious diseases are less prevalent. In tropical environments, most deaths in small-scale societies are caused by parasitic, bacterial and viral infectious diseases. Tropical forager societies in which allo-nursing is common include the Ache, Bofi, Agta, Aka, Efe´, Chabu, and Onge´e, but it is not common in forager societies such as the Nayaka, Hadza, Paliyan, Martu and !Kung, who live in non-tropical areas. However, studies have found exceptions among horticulturalists in tropical areas, such as the Ngandu, where allo-nursing is discouraged; researchers believe this may be due to knowledge about the increased possibility of transmitting common infections through breastfeeding in their populated villages. Yet for the Aka people the primary cause of death is parasitic disease, not the transmission of other infections. Therefore allo-nursing, through the NFA and its positive immunological consequences, can benefit Aka women, and the benefits of allo-nursing may outweigh the costs of infection transmission. Furthermore, the size and frequency of the caregiving network involved in allomothering practices can affect the diversity and bacterial composition of human milk, and this may influence infants' immune systems and health. 
Research addressing the relationship between the HMM and infant immunity emphasizes the need for further studies, since the interaction between microbes and immunology is complicated. Allomothering by kin Hrdy states that the altruism of allomothers can be explained by Hamilton's rule and that allomothers therefore enhance their inclusive fitness by helping kin. However, in humans, bilocality and complex cultural norms around marriage or mating produce groups that may not be highly related, suggesting that allomothering is not limited to kin. Despite varying and complex residential patterns, kin do appear to be strong sources of allomaternal care in many societies. Fathers, grandparents, older siblings and other close kin help in childcare, provisioning, protection and education. Allomothers are perhaps less important immediately after birth. Infants rely on the mother for milk to survive, and mothers benefit from food provisioning by the fathers to produce sufficient amounts of milk for their infants. Infants whose mothers die during childbirth have a low probability of survival, between 1% and 5% in some pre-demographic transition populations, higher in post-demographic transition populations. If the mother dies during the child's first year of life, the chance of survival increases to 35-50%, and the effect of the mother's death nearly disappears after the child reaches two. This indicates the importance of parents during the first couple of years of life, but also demonstrates that once weaning is complete, allomothers are capable of rearing children to adulthood. Fathers The importance of fathers varies considerably, but many authors now agree that there is little impact on child survival from the loss of a father. Sear and Mace's study of 15 populations found that in 53%, the death of the father was not correlated with an increase in child mortality. 
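Hamilton's rule, invoked above to explain kin allomothering, is compact: helping is favored when r × B > C, with r the coefficient of relatedness between helper and recipient, B the fitness benefit to the recipient, and C the cost to the helper. A minimal sketch in Python (the relatedness coefficients are the standard values; the benefit and cost figures below are purely illustrative, not drawn from any study):

```python
# Standard coefficients of relatedness for common allomother categories.
RELATEDNESS = {
    "full sibling": 0.5,
    "grandchild": 0.25,
    "niece/nephew": 0.25,
    "first cousin": 0.125,
}

def help_is_favored(relation: str, benefit: float, cost: float) -> bool:
    """Hamilton's rule: altruistic help is favored when r * B > C."""
    return RELATEDNESS[relation] * benefit > cost
```

With an illustrative benefit of 2.0 and cost of 0.9, help toward a full sibling satisfies the rule (0.5 × 2.0 > 0.9) while help toward a first cousin does not (0.125 × 2.0 < 0.9), consistent with close kin such as siblings and grandmothers being among the most common allomothers.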
The father does not directly feed an infant, and it is possible that other males of the family or community can step in if something happens to the father. Kramer's (2010) cross-cultural study found that direct allomaternal care provided by fathers varies from less than 1% (Alyawara) to nearly 16% (Aka), with an average of 4.8% across the populations studied. However, in many societies the father does play an important role. Among the Ache of South America, the loss of the father did affect the child's survival. Meat sharing by hunters is an important part of Ache life and, while the majority of the meat acquired by a male may not go directly to his family, it is used to build relations and exchange for other goods. Interestingly, the father is not very involved in childcare among the Ache. The same study found that among the Hiwi (also of South America), where the father does provide direct childcare and food provisioning, the death of the father had no effect on child survival. A father's most important contribution to children may be providing protection from other males, especially in groups that practice infanticide. Sear and Mace point out that many of the populations in the study look at the impact of the loss of a father on young children and suggest that, since a father cannot provide direct nourishment to breastfeeding children, his importance may come later in the child's life. Fathers teach subsistence strategies to older children (for example, teaching hunting and trapping skills to older boys in hunter-gatherer societies). Allal et al. found that the marriage and fertility of women who do not have fathers may be impacted. Some hunter-gatherer populations in South America also have "partible paternity", the belief that multiple men can contribute to a pregnancy and all are considered fathers, which could provide "back-up" fathers to children. 
Grandparents Grandmothers can care for children, freeing the mother to engage in foraging or economic activities, or in the care of a child that has not been weaned. Likewise, grandmothers often continue to forage late in life to help with food provisioning for their grandchildren and a lactating mother. Child mortality in rural Gambia saw a significant decline when the maternal grandmother was present (see Table 4). In a comparison of nine populations on the proportion of direct childcare received by a child, Kramer found that grandmothers accounted for between 1.2% (Maya) and 14.3% (Mardu) of the total childcare. Sear and Mace determine that grandmothers do not have a universally positive effect on child survival and that there is a difference between maternal and paternal grandmothers. Maternal grandmothers improved child survival in 69% of cases while paternal grandmothers improved survival in only 53% of observed cases. They also found that paternal grandmothers were detrimental in two cases and maternal grandmothers in one. Paternal grandmothers may on average be older than maternal grandmothers, due to common age differences between males and females at first reproduction. Paternal grandmothers may also be more reluctant to invest in their grandchildren due to the lack of certainty of paternity. The same study also found that the timing of impact of paternal and maternal grandmothers varies, with maternal grandmothers having greater effects after the first year of life (allomaternal care) and paternal grandmothers having greater effects during pregnancy and in the first few months of a child's life (help with tasks during pregnancy, or causing high levels of stress to the mother during pregnancy). Grandfathers do not appear to be important sources of allomaternal care, with maternal grandfathers having no impact on child survival in 83% of cases and paternal grandfathers having no impact in 50% of cases and a negative impact in 25%. 
Little explanation of why grandfathers contribute less to childcare was found in the anthropological literature. It is possible that the even greater age of a grandfather, compared to grandmothers, made it difficult to help with childrearing. In addition, grandfathers, like fathers, may contribute primarily through food provisioning; however, a grandfather's late age likely means he is well past his hunting prime. Grandmother hypothesis Grandmothers are often considered a significant source of allomaternal care, and this fact has led to the "grandmother hypothesis", suggesting that women developed long post-menopausal periods to help with their children's offspring. This long lifespan after menopause is unique to humans and may help explain early weaning and high fertility rates. Hill and Hurtado argue that early reproductive senescence in females is an evolutionary dilemma, since natural selection should favor continued reproduction. They tested the grandmother hypothesis with data collected from the Ache and determined it does not support the idea that early menopause is maintained by natural selection favoring women who stop reproduction in order to invest more in their grandchildren. Despite Hill and Hurtado's finding, grandmothers often account for much of the allomaternal care seen in a variety of societies. Siblings Older siblings may be the greatest source of allomaternal care among kin. Kramer's (see Table 1) comparison of nine populations found that siblings accounted for between 1.1% and 33% of the direct childcare received by a child. Older sisters appear to be more important than brothers in many of the groups studied, with sisters ranging from 5% to 33% and brothers from 1.1% to 16.3%. Ivey (see Figures 1 & 2) found that among the Efe, both sisters and brothers contributed significantly to childcare. Sear and Mace found that five of the six studies indicated that the presence of older siblings increased the survival of younger siblings. 
Older siblings, while still dependent on their parents, can engage in a variety of useful tasks depending on their age. It is possible that dependence on adult allomothers was not an early selective pressure for the development of allomothering in humans, but rather that maternal-juvenile cooperation played a more important role. Older sisters often engage in childcare, helping to look after younger siblings. This is apparently true regardless of subsistence strategy, and holds true even for industrial societies. Kramer (see Figure 4) shows a cross-cultural comparison of children in ten societies, all of whom engage in at least some childcare of younger siblings. Equally important, older children can help offset their own cost by engaging in foraging or economic activities. The same figure from Kramer shows that all of the groups engaged in more economic work than childcare. Kramer (see Figure 2) compares groups organized by subsistence strategy and shows that foraging and economic work are common among foragers, horticulturalists, agriculturalists and pastoralists, with the latter two groups showing some of the highest rates of food production and domestic tasks by children. Extended family Other kin may be sources of allomothers when present; however, evidence from the ethnographic literature demonstrates varied amounts of contribution and different impacts on the children. Aunts' and uncles' contributions vary depending on residential patterns, inheritance patterns and resource allocation. Among the Kipsigis of Kenya, a child's paternal uncle had a positive effect on reducing child mortality in the richer half of the sample but not in the poorer half; for maternal uncles, however, the effects were reversed, with poorer families showing a greater reduction in child mortality. Local resource competition, namely conflict over land inheritance among the father's brothers, appears to account for the pattern of paternal kin effects. 
Others have found that maternal aunts have similar effects in societies with female inheritance. Efe infants spend a significant amount of time in the care of aunts and male cousins. Allomothering by non-kin Studies of extant hunter-gatherer groups demonstrate that groups are composed of more than just direct kin, with between 25% and 50% of group members being unrelated or distantly related. While using modern hunter-gatherers as representatives of past hunter-gatherers is not a perfect analogy, these data, in combination with archaeological and paleoanthropological evidence, are the only sources of information we can use to reconstruct past groups. It is likely that past hunter-gatherer groups were also composed of a mixture of related (kin) and unrelated (non-kin) individuals, meaning that allomothering by non-kin occurred in at least some past societies. Research with contemporary hunter-gatherers, horticulturalists, and modern, industrial societies often finds that non-kin (friends, neighbors, and fictive kin) provide allomaternal care. Surrogate breastfeeders, known as wet nurses in Western medical literature, may have played an important role as allomothers prior to the introduction of bottle feeding and formula. Wet nursing was recorded in ancient Israel and Egypt, among historical and modern Sunni Arab populations, as well as in ancient India and Greece. While the frequency of wet nurses has been debated, there are ample references in the literature to suggest that it did occur (and in some societies still does). Religion and religious communities may also increase the frequency of allomothering. Studies of religious communities in England and New Zealand show increased allomaternal care by unrelated members of the community. Religion is thought to increase prosocial behavior, with religiosity being a costly signal that indicates to other members that a practicing individual is trustworthy and likely to cooperate and reciprocate. 
Israeli kibbutzim are collective settlements whose members share almost all aspects of their lives: all incomes are given to the kibbutz and in return the kibbutz distributes goods and services equally, members dine in communal dining halls, and childcare is communal. Children live in a communal nursery and later in group houses together. Parenting is also distributed among the community, perhaps an extreme form of allomothering. Fictive kin are unrelated individuals on whom kinship terms have been bestowed. There are generally two types of fictive kin: named kin, determined by factors such as age, gender and prestige and applied to a large number of community members (such as in Northern India), and ritual kin, named at a specific ceremony, such as baptism, at which time the relationship between the individual undergoing the ceremony and the named kin is formalized. Named kin may function similarly to religious communities by increasing familiarity and prosocial behavior; however, little research appears to have been conducted on this form of fictive kin. Godparents are one of the better-known ritual kin systems in Western culture. Godparents are common in Catholic (and other Christian) communities in Europe and throughout the Americas (due to colonization). Godparents are expected to provide extra resources to the family; naming a godparent creates a strong bond within the community or a tie to an outside community where new resources may be accessible in times of need. Other examples of ritual kin are milk kin in some Arab societies and the Japanese oyabun-kobun system. In a literature review of alloparental care, Kenkel et al. found that children are between six and one hundred times more likely to die from abuse while under the care of unrelated adults in modern societies; however, they also state that the term alloparenting is often omitted from studies on modern populations, resulting in "blind spots" in the literature. 
In urban/industrial societies The nuclear family has dominated life in the U.S. and some European populations for many decades. The typical family is often thought of as two parents and their children living in one house. However, a recent poll by the Pew Research Center shows a rise in the number of U.S. Americans living in multigenerational homes, from a low of 12% in 1980 to 20% in 2016. The growing price of housing in the U.S. and the overall rise in the cost of living have made owning homes and living as a nuclear family more difficult. It is once again becoming common for grandparents to live in the same house as their grandchildren, providing a source of childcare for the families. Urban areas in China also show that, while two-generation and single-parent households are on the rise, the extended family still represents the majority of Chinese households. Although older siblings in most modern, industrialized societies are required to attend school, possibly eliminating a source of allomothering, grandmothers appear to still play an important role in childcare. Wet nursing may still be an option in some societies. The Arab populations previously mentioned still have wet nurses, but the practice is quickly declining; modern technologies like formula and breast pumps have made it unnecessary in populations with access to them. Ritual kin systems, such as godparents and the oyabun-kobun system, are also still active in their respective societies and may still function as a source of allomaternal care. Religious communities within modern societies are still relevant, as Shaver et al. have shown in their studies in England and New Zealand. In industrialized, urban societies, daycare, school, nannies, etc. may provide many of the same benefits that would have traditionally been provided by kin and well-known community members. 
It is possible for parents in urban areas to enroll children in daycare as soon as weaning is complete (sometimes earlier, with breast-pumping technology), and as long as resources or finances permit, the child can be looked after for the duration of the day. The cost of these services may be prohibitive to many low-income families, creating a divide in allomaternal care depending on income. It is also unclear whether the primary role of a service such as daycare is to allow for more attention to producing more children or to allow parents to pursue other endeavors (careers). Research into daycare as a form of allomothering may be complicated by its cost and other limitations on access. Daycare may be more common among wealthy and/or more highly educated individuals, who may have fewer children by choice, meaning that discerning its impact on interbirth intervals or other metrics may be challenging. Allomothering is still relevant in most industrialized societies, even if the source has shifted. Reliance on extended family may have fallen, though it is recently on the rise, and greater use of paid childcare systems such as daycare, or compulsory systems such as schooling, fills in some of the gaps that arise from living in a mobile, globalized, industrial society. Critiques There are a couple of critiques to consider related to cooperative breeding and allomothering in humans. The first is proposed by Bogin et al., who argue that humans are not actually cooperative breeders. The authors argue that because allomothering and provisioning are not based on genetic relatedness, as they are in most other cooperative breeders, the term needs to be modified to incorporate the wider range of behavior seen in humans. They propose the term “biocultural reproduction” that they believe better describes the high amount of allomothering by kin and non-kin, and accounts for the variation in allomothering practices seen from culture to culture in humans. 
This critique does not apply specifically to allomothering as discussed in this article, but rather to the reproductive strategy system that incorporates it. The second critique relevant to allomothering concerns human kinship distinctions. Schneider, along with other anthropologists, has argued that the distinction between real and fictive kin does not always occur in human cultures and perhaps should be abandoned. The critique may be misunderstood to mean that humans cannot tell the difference between related and unrelated individuals. Maternity in humans, as in many other animals, is known, and the relatedness of the mother's female relatives can be assumed with relative certainty. Humans are also unusual in that fairly stable pair-bonding allows for some degree of paternal certainty. In fact, Chapais argues that patrilineal kinship is a prerequisite for the flexibility of residential patterns seen in humans and that this kinship is not culturally based but has a deep biological substrate upon which it is built. Gintis's and Chapais's arguments suggest that while kinship terms are often applied to individuals beyond one's relatives, the relatedness of those individuals is known. A distinction is still useful, and we see a difference in the contribution of allomothering by related versus unrelated individuals in many, if not most, populations. References Parenting Sociobiology
Allomothering in humans
[ "Biology" ]
6,777
[ "Behavioural sciences", "Behavior", "Sociobiology" ]
67,579,646
https://en.wikipedia.org/wiki/Time%20in%20Slovenia
In Slovenia, the standard time is Central European Time (; CET; UTC+01:00). Daylight saving time is observed from the last Sunday in March (02:00 CET) to the last Sunday in October (03:00 CEST). This is shared with several other EU member states. History The Austro-Hungarian Empire adopted CET on 1 October 1891. Slovenia would continue to observe CET after independence, and observed daylight saving time between 1941 and 1946, and again since 1983. Notation Slovenia uses both the 12-hour and 24-hour clock. IANA time zone database In the IANA time zone database, Slovenia is given one zone in the file zone.tab – Europe/Ljubljana. Data for Slovenia directly from zone.tab of the IANA time zone database; columns marked with * are the columns from zone.tab itself: See also Time in Europe List of time zones by country List of time zones by UTC offset References External links Time in Slovenia at TimeAndDate.com. Geography of Slovenia
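The CET/CEST offsets described above can be checked programmatically. A minimal sketch using Python's zoneinfo module and the IANA zone Europe/Ljubljana named in the article (this assumes the system's tzdata includes that zone; the dates are arbitrary examples):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

lj = ZoneInfo("Europe/Ljubljana")

winter = datetime(2023, 1, 15, 12, 0, tzinfo=lj)  # standard time (CET)
summer = datetime(2023, 7, 15, 12, 0, tzinfo=lj)  # daylight saving time (CEST)

print(winter.tzname(), winter.utcoffset())  # CET 1:00:00
print(summer.tzname(), summer.utcoffset())  # CEST 2:00:00
```

A January date reports UTC+01:00 and a July date UTC+02:00, matching the CET/CEST rule described above.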
Time in Slovenia
[ "Physics" ]
211
[ "Spacetime", "Physical quantities", "Time", "Time by country" ]
67,579,823
https://en.wikipedia.org/wiki/Cut%20glass
Cut glass or cut-glass is a technique and a style of decorating glass. For some time the style has often been produced by other techniques such as the use of moulding, but the original technique of cutting glass on an abrasive wheel is still used in luxury products. On glassware vessels, the style typically consists of furrowed faces at angles to each other in complicated patterns, while for lighting fixtures, the style consists of flat or curved facets on small hanging pieces, often all over. Historically, cut glass was shaped using "coldwork" techniques of grinding or drilling, applied as a secondary stage to a piece of glass made by conventional processes such as glassblowing. Today, the glass is often mostly or entirely shaped in the initial process by using a mould (pressed glass), or imitated in clear plastic. Traditional hand-cutting continues, but gives a much more expensive product. Lead glass has long been misleadingly called "crystal" by the industry, evoking the glamour and expense of rock crystal, or carved transparent quartz, and most manufacturers now describe their product as cut crystal glass. There are two main types of object made using cut glass: firstly drinking glasses and their accompanying decanters and jugs, and secondly chandeliers and other light fittings. Both began to be made using the cut glass style in England around 1730, following the development there of a reliable process for making very clear lead glass with a high refractive index. Cut glass requires relatively thick glass, as the cutting removes much of the depth, and earlier clear glass would mostly have appeared rather cloudy if made thick enough to cut. For both types of object, some pieces are still made in traditional styles, broadly similar to those of the 18th century, but other glassmakers have applied modern design styles. Expensive drinking glasses had previously mostly concentrated on elegant shapes of extreme thinness. 
If there was decoration it was mostly either internal, with hollow bubbles or coloured spirals within the stem ("twists"), or surface decoration in enamelled glass or glass engraving. Outside Venice and Spain, lighting fittings had not previously made much use of glass in Europe; the enamelled mosque lamp of Islamic art was a different matter. But cut glass "drops", faceted in a style derived from gem cutting in jewellery, refracted and spread the light in a way that was new, and were enthusiastically embraced by makers and their customers. The main skeleton of the chandelier was very often metal, but this was often all but hidden by a profusion of faceted glass pieces, held in place by metal wire. Technique In the first century AD, Pliny the Elder described how patterns could be cut on glass vessels by pressing them against a rotating wheel of hard stone. Apart from changes in detail, the process of cutting has stayed essentially the same since that first-century description. It has always used a small rotating wheel made of, or coated with, some abrasive substance, usually with a liquid lubricant such as water, perhaps mixed with sand, falling onto the area being worked and then being collected below. The wheels were originally powered by treadles, but by the mid-19th century workshops had several stations linked to steam power. Today electric power is used. For cutting flat facets a turntable device called a "lap", already used in gem-cutting, was adopted. Typically the design is marked with paint on the glass before cutting; in England red is usually used. One advantage of cut glass for the manufacturer is that the small flaws, such as bubbles, that are inevitable in a proportion of glass pieces, and that would lead to a clear piece being rejected, can very often be placed in the areas to be cut away. 
Conversely, if imitation cut glass using moulds is made, the complexity of the mould shapes greatly increases the number of faults and rejects. A second operation polishes the cut glass, traditionally using a wooden or cork wheel "fed with putty powder and water". In the late 19th century, an alternative method using fluoric acid was introduced; this made the process of polishing faster and cheaper. However, it "gives a dull finish and tends to round off the edges of the cuts". Labour was the main cost in making cut glass. Arguing against the reduction of tariffs in 1888, a leading figure in the American industry claimed that "We take a piece of glass .... costing 20 cents and .... in many cases put $36 of labor on it". History Technically, the decorative "cutting" of glass is very ancient, although the term "cut glass" generally refers to pieces from the 18th century onwards. The Bronze Age Indus Valley civilization made glass beads that were engraved with simple shapes. Ancient Roman glass used a variety of techniques, but mostly large amounts of drilling, often followed by polishing, to produce the deeply under-cut cage cups, objects of extreme luxury, cameo glass in two colours, and objects cut in relief, of which the Lycurgus Cup is the outstanding survivor. Islamic art, especially that of the Fatimid court in Egypt, valued bowls and other objects in "carved", that is, cut rock crystal (quartz, a clear mineral), and this style was also produced in glass, which was cheaper and easier to work. Cameo glass was also produced. Similar relief effects were also achieved even more cheaply in mould-blown glass. The 13 or 14 surviving examples of the so-called Hedwig glasses were probably made by Islamic artists, but perhaps for the European market. Perhaps from the 12th century, they are either very late examples of Islamic glass-cutting, or isolated ones of medieval European use of the technique. 
Very shallowly scratched or cut engraved glass was revived by at least the Renaissance, but there was very little use of deeper cutting which, however, continued to be used in rock crystal and other forms of hardstone cutting. In Germany in the late 17th and early 18th centuries there was a revival, for "two generations", of cut relief decoration, water-powered and imitating rock crystal. Typical pieces were cups and goblets with coats of arms surrounded by rich Baroque ornament, with the background cut away to leave the reliefs raised. This is called the Hochschnitt ("high cut") style. In the later 17th century George Ravenscroft developed a cheap and reliable lead "crystal" glass with a high refractive index in England, which various other glassmakers adopted. After some time, the potential of cut glass using this basic material began to be realized; a high lead content also made the glass easier to cut. Chandeliers In the early 18th century, bevelled edges to large mirrors became fashionable in England, achieved by rubbing with abrasives, but also by "cutting". The making of "looking glasses" was a different branch of glassmaking from the makers of drinking glasses, and it seems to have been in the former that "the craft of cutting was born", and the mirror makers were the workshops who expanded into chandeliers. A London glassmaker advertised in 1727 that he sold "Looking Glasses, Coach Glasses and Glass Schandeliers". The earliest examples, like that given to the chapel in Emmanuel College, Cambridge, in 1732, were glass versions of the standard brass designs long used in England, imported or locally-made versions of a Netherlandish and north French design style that had been developed since the 15th century. Around the mid-century, designs took up the use of multiple faceted pendants, which had been used in the enormously expensive chandeliers of the French court, where instead of glass carved clear rock crystal (quartz) had been used. 
Over the rest of the 18th century, and the early part of the next, the number of drops increased, and the main stem of the chandelier, typically in metal, tended to disappear behind long chains of them. By the Regency period there might be "some thirty drops in perhaps six or eight graded sizes, and each drop might have 32 facets on each side. Costs soared." The dominance of cut glass in other lighting devices such as candlesticks, sconces, girandoles, and lamps was never as complete, but all were often made in it. By 1800 it was already common to dismantle chandeliers and reconfigure them into a more fashionable shape, and subsequently most old chandeliers have been converted from candles to electricity, often after an intermediate period as gasoliers or fittings using lamp oil. Vessels Starting out by decorating mainly wine glasses, decanters and other drinkware, cut glass was by the 19th century used for a variety of tableware shapes, mostly those associated with desserts ("sweetmeat glass" is a term used by collectors), and for bowls and trays for use either at the table or in the drawing room. These larger shapes allowed room for cutters to produce many of the most interesting and characteristic cut designs, which experts can often date rather precisely, as they passed through several different styles. Starting with the Rococo, there were Neoclassical and Regency styles, and finally one with "Gothic" arches by about 1840. The Regency style added, to the 18th-century diamond shapes, zones with many parallel bands, furrows, or flutes, either vertical or horizontal, initially rather narrow, but later wider, in the "broad flute" style. From about 1800 to 1840 "almost all British luxury table glass was cut", and the style spread to Europe and North America. English cutters were instructing French workers at the Saint-Louis glass factory by 1781, and later Belgian cutters at Val Saint Lambert by 1826. 
On wine glasses and similar shapes, the rim where the drinker's mouth would touch was left smooth, but the bowl, especially the lower part, the stem, and the foot might be cut. A starburst on the underside of the foot was common. On jugs, cups for eating desserts from, and bowls, the rim was often cut with zig-zags or other ornament. Especially in the 18th century, cutting was often combined with glass engraving above, and by the 1840s it was popular to have areas of "frosting", rubbing the glass with abrasives to reduce its transparency. Competition from cheaper, but lower-quality, pressed glass in the cut glass style began as early as the 1820s, and grew greatly in the 1830s, but the British cut glass industry continued to expand. In 1845 a commentator stated "cut-glass is now comparatively cheap". The ability of British glass designers to patent their designs after 1842 was a help; the mould makers (often called "die-sinkers" in the trade) were apparently often independent of the glass factories. At least in America, where the cut glass industry was growing rapidly, "cutting shops" were often, and in the 19th century usually, independent operations buying glass blanks from the glassmakers. In the 1870s Bohemian cutters began to arrive in Corning, New York, one of the centres of the industry, supplementing English immigrants. 1850 onwards The centrepiece at the crossing of the Crystal Palace holding the Great Exhibition of 1851 in London was a huge glass fountain (8.25 metres or 27 feet high), including much cut glass, by the leading Birmingham firm of F. & C. Ostler. Cut glass had dominated both its main market niches for several decades, but a number of factors were about to challenge it, at least as far as vessels were concerned. The Victorian taste for over-ornamentation was beginning to take over, and some of the cut glass displayed at the Great Exhibition was described as "prickly monstrosities". 
In the year of the exhibition, the hugely influential critic John Ruskin, in his Modern Painters, denounced the whole technique, writing "We ought to be ashamed of it" and "all cut glass is barbarous, for the cutting conceals its ductility and confuses it with [rock] crystal". At the same time, and further stimulated by the Great Exhibition itself, the British style was spreading across the Western world, and in particular cut American and Bohemian glass was attacking the British market. The previous excise duty long charged on glass was abolished in 1845, which both encouraged the development of exciting new styles of decorating glass, and also made glass cheaper, leading to a flood of pressed glass imitations of cut glass style that tended to devalue the prestige of the style. Nonetheless, cut glass remained a staple in most prosperous British households, and was still widely exported. In the 1870s the "brilliant", "brilliant cut" or "American Brilliant" style emerged, perhaps first seen in America in glass exhibited at the 1876 Philadelphia Centenary Exhibition: "its most complex brilliant cutting involved covering the glass surface with intersecting cuts that created innumerable, often fragmentary shapes making up larger patterns. Basic motifs used were stars, hobnail or polygonal diamonds, strawberry diamonds and fan scallops...". Decline The last decades of the 19th century saw exciting new developments in glass design, with much use of colour, the Victorian version of cameo glass using glass etching, opaline glass in France, and other innovations. Cut glass, especially in the brilliant style, did not mix well with these – the great majority of it has always used clear glass. An exception is the distinct Japanese style of Satsuma kiriko, which adds a thin layer of coloured flashed glass which is then cut through, giving a colour contrast. Similar effects were sometimes used in the West, especially in continental Europe. 
Cut glass vessels remained popular, but catered to an increasingly conventional and conservative taste, and the technique was little used for art glass, a new term for decorative glass with artistic aspirations. This was even more the case with Art Nouveau glass and that of the Arts and Crafts Movement, which both took on board Ruskin's criticisms, and preferred sinuous curving forms that emphasized the flowing, frozen liquid nature of glass. At the end of the century the market for expensive decorative glass appears to have slumped, perhaps because so much was now being made and traded internationally. Corning's cut glass industry peaked in 1905, when a directory recorded 490 cutters there, and 33 engravers, though the quality of some work was falling; by 1909 the number of cutters had fallen to 340. The arrival of Modernism in the early 20th century did not do much to change this, and in 1923 an English expert complained that "to the aesthetic soul [cut glass] is still a thing accursed ... a striking testimony to the persistence of Ruskin's influence". He attempted a survey of likely owners of 18th-century cut glass such as historic houses, Oxbridge colleges and London livery companies, but found very few would admit to owning any. But some glassmakers, for example in Art Deco, were sympathetic to linear and geometric decoration and made use of the technique, often as one of a number of techniques used in a single piece. This continues to be the case in the recent studio glass movement. In mid-20th-century England there was a revival in engraved glass, which was often accompanied by some cutting; the work of Keith Murray includes examples. Traditional cut glass designs are still used, for example in what Americans call the Old Fashioned glass, a whisky or cocktail tumbler. In chandeliers, however, the clear cut glass style has been adapted successfully to modern styles and still holds its own, especially for large public spaces such as hotel lobbies. 
Cut-glass accent In British English, a "cut-glass accent" is an especially clipped version of British upper-class Received Pronunciation, where "words are pronounced very clearly and carefully". The accent is agreed to be less common now than it was several decades ago, with even leading exponents such as Queen Elizabeth II having softened their pronunciation over the years. Notes References Battie, David and Cottle, Simon, eds., Sotheby's Concise Encyclopedia of Glass, 1991, Conran Octopus, Davison, Sandra and Newton, R.G., Conservation and Restoration of Glass, 2008, Taylor & Francis, google books Farr, Michael, Design in British Industry: A Mid-century Survey, 1955, Cambridge University Press, google books "History": "A History of the Chandelier" Osborne, Harold (ed), The Oxford Companion to the Decorative Arts, 1975, OUP, Powell, Harry J., Glass-making in England, 1923, Cambridge University Press, google books Sinclaire, Estelle F., Complete Cut and Engraved Glass of Corning (New York State Series), 1997, Syracuse University Press, google books Sparke, Penny, "At the Margins of Modernism: The Cut – Crystal Object in the Twentieth Century", 1995, Bulletin of the John Rylands University Library of Manchester, 1995 , 77 ( 1 ) : 31–38, PDF Further reading Fisher, Graham, Jewels on the Cut: An Exploration of the Stourbridge Canal and the Local Glass Industry, 2010, Sparrow Books Spillman, Jane Shadel, The American Cut Glass Industry: T.G.Hawkes and His Competitors, 1999, ACC Art Books Swan, Martha Louise, American Cut and Engraved Glass of the Brilliant Period in Historical Perspective, 1986, Gazelle Book Services Warren, Phelps, Irish Glass: Waterford, Cork, Belfast in the Age of Exuberance (Faber monographs on glass), 1981, Faber & Faber Glass Drinkware Chandeliers History of glass
Cut glass
[ "Physics", "Chemistry" ]
3,606
[ "Homogeneous chemical mixtures", "Amorphous solids", "Unsolved problems in physics", "Glass" ]
53,490,631
https://en.wikipedia.org/wiki/OPTOS%20formalism
OPTOS (optical properties of textured optical sheets) is a simulation formalism for determining the optical properties of sheets with plane-parallel structured interfaces. The method is versatile, as interface structures of different optical regimes, e.g. geometrical and wave optics, can be included. It is very efficient due to the re-usability of the calculated light redistribution properties of the individual interfaces. It has so far been mainly used to model the optical properties of solar cells and solar modules, but it is also applicable, for example, to LEDs or OLEDs with light extraction structures. History The development of the OPTOS formalism started in 2015 at the Fraunhofer Institute for Solar Energy Systems ISE in Freiburg, Germany. The mathematical formulation has been described in detail in several open-access publications. A basic version of the code, including documentation with function references, has been available since the end of 2015 on the homepage of Fraunhofer ISE. Continuous updates and a list of OPTOS-related publications can be found on ResearchGate. OPTOS simulation procedure One key aspect of OPTOS simulations is the division of the modeled system into interface and propagation regions. The light redistribution properties are calculated with the most appropriate method for each interface individually, depending on the relevant structure dimension. Large-scale structures can, for example, be modeled via ray tracing, while for interfaces with structure dimensions in the range of the wavelength, wave-optical approaches such as RCWA, FDTD or FEM can be used. System description The discretization of the complete angular space into a fixed number of angle channels, the second key aspect of the OPTOS formalism, allows the angular power distribution within the system to be represented by a vector v, which consists of one entry for each angle channel. 
The value of each entry is the power fraction of the corresponding angle channel with respect to the total incident power. Interface interaction The light redistribution properties of an interface are represented by the so-called reflection and transmission matrices, R and T. For each angle channel, they store the redistribution information into the other angle channels for light incident onto a given interface at a given wavelength. There are in total four different redistribution matrices for each interface, characterized by the incidence direction as well as by reflection or transmission redistribution. Propagation through the sheet The incoherent propagation of light through the sheet can also be represented by a matrix. If no light redistribution occurs on the path, the propagation matrix D is a diagonal matrix. Its entries consist of the Lambert-Beer absorption factor, including the cosine of the polar angle and the absorption coefficient of the respective material. Calculation of optical properties Using the pre-calculated matrices described above, optical properties like reflectance, transmittance or absorptance within the sheet can be calculated via matrix multiplications [2–4]; the calculation can be performed within seconds or minutes on a standard personal computer. A depth-dependent absorption profile can also be calculated. This is of special importance for the subsequent electrical simulation of structured silicon solar cells. OPTOS simulation characteristics Strengths Versatility – Optical systems with interface structures operating in different optical regimes can be accurately simulated. The redistribution properties of each interface are modeled individually with the most suitable method. Efficiency – The re-usability of the redistribution matrices allows for very fast simulation of different structure combinations, sheet thickness variations, and optical analysis with respect to different angles of incidence. 
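As an illustration only, the matrix formalism described above can be sketched with a toy system: a few angle channels, a diagonal Lambert-Beer propagation matrix, and interface redistribution matrices. All numbers and matrices below are invented for the example, not real interface data from OPTOS:

```python
import numpy as np

# Toy OPTOS-style sketch with 3 angle channels. All values are made up.
theta = np.radians([10.0, 40.0, 70.0])      # polar angle of each channel
alpha, d = 5.0, 0.02                        # absorption coeff [1/cm], thickness [cm]

# Propagation matrix D: diagonal Lambert-Beer factors along the slanted paths.
D = np.diag(np.exp(-alpha * d / np.cos(theta)))

# Hypothetical interface matrices (columns sum to <= 1; the remainder of the
# power leaves the system through that interface).
T_front = 0.9 * np.eye(3)                   # transmission into the sheet
R_rear = np.array([[0.2, 0.1, 0.0],         # reflection at the rear, with
                   [0.1, 0.6, 0.1],         # redistribution between channels
                   [0.0, 0.1, 0.8]])
R_front_inside = 0.5 * np.eye(3)            # internal reflection at the front

v = np.array([1.0, 0.0, 0.0])               # all incident power in channel 0
v = T_front @ v                             # enter the sheet
absorbed = 0.0
for _ in range(50):                         # follow bounces until power is gone
    v_after = D @ v                         # downward pass with absorption
    absorbed += v.sum() - v_after.sum()
    v = R_rear @ v_after                    # redistribute at the rear
    v_after = D @ v                         # upward pass with absorption
    absorbed += v.sum() - v_after.sum()
    v = R_front_inside @ v_after            # part escapes, part reflects back

print(f"fraction absorbed in sheet: {absorbed:.3f}")
```

The key point of the formalism survives even in this toy form: once D and the interface matrices are known, tracking the angular power distribution is just repeated matrix-vector multiplication, which is why thickness sweeps and structure combinations are cheap to evaluate.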
Linear polarization can be taken into account by exchanging each entry of the power distribution vector for two entries, one for each polarization direction. Each matrix entry then has to be exchanged for a two-by-two matrix that also takes the redistribution between the different polarization directions into account. Limitations OPTOS couples the redistribution properties of different interfaces. If there is no accurate modeling technique with which to calculate redistribution matrices, such interfaces cannot be included in OPTOS. OPTOS models the propagation through the sheet incoherently. If the sheet thickness becomes very low and interference effects play a significant role, this needs to be handled coherently and not as a “thick” sheet. However, as a coherently modeled sub-system, it can be included in OPTOS as an effective interface. Circular or elliptical polarization effects are not taken into account, as all phase information is neglected during the propagation. Application Examples The main application of OPTOS has so far been the simulation of: Solar cells with different front and rear side structures such as random pyramids, the isotexture, the honeycomb texture or diffractive gratings. The layer stack of solar panels, including the effect of the encapsulation on the optical solar cell properties as well as the investigation of different angles of incidence. Complex optical interactions in photovoltaic systems with nanowire solar cells. The OPTOS formalism has been incorporated into the open-source software RayFlare. This software also allows the user to calculate appropriate redistribution matrices using various methods, including the transfer-matrix method, ray tracing, and rigorous coupled-wave analysis. 
Alternative fields of application could be: LEDs or OLEDs with light extraction structures Display technology, for example brightness enhancement films References External links OPTOS page at Fraunhofer ISE website (includes documentation and download of basic version) OPTOS project on ResearchGate (with continuous updates and a list of OPTOS-related publications) Physical optics Computational electromagnetics
OPTOS formalism
[ "Physics" ]
1,057
[ "Computational electromagnetics", "Computational physics" ]
53,493,192
https://en.wikipedia.org/wiki/GLAD-PCR%20assay
GlaI hydrolysis and Ligation Adapter Dependent PCR assay (GLAD-PCR assay) is a novel method to determine R(5mC)GY sites produced in the course of de novo DNA methylation by the DNMT3a and DNMT3b DNA methyltransferases. The GLAD-PCR assay does not require bisulfite treatment of the DNA. The method was specially designed to determine methylation of an RCGY site of interest in human and mammalian genomes in the presence of an excess of corresponding unmethylated sites, a typical situation for DNA preparations from clinical samples of blood and tissues. GLAD-PCR assay is based on a new type of enzyme, site-specific methyl-directed DNA endonucleases (MD DNA endonucleases). These enzymes are very similar to restriction enzymes in their biochemical properties and cleave DNA completely, but act in the opposite way: they cleave only methylated DNA and do not cleave unmethylated DNA at all. Mammalian DNA methyltransferases DNMT1, DNMT3a and DNMT3b catalyze the reaction of DNA methylation. DNMT1 maintains the DNA methylation pattern in vivo by modifying the new strand after replication. DNMT3a and DNMT3b are responsible for de novo DNA methylation, including the abnormal hypermethylation seen in cancer cells. It is well known that hypermethylation of CpG islands in the regulatory regions of the promoter and/or first exon of a variety of genes often occurs at early stages of sporadic carcinogenesis. This leads to downregulation of gene expression in tumor cells, whereas in healthy tissue the corresponding genes remain active. Thus, the detection of such epigenetic biomarkers is one of the most promising diagnostic and prognostic tools. Study of DNMT3a and DNMT3b substrate specificity has shown that both enzymes predominantly recognize the RCGY site and modify the internal CG dinucleotide to form the 5’-R(5mC)GY-3’/3’-YG(5mC)R-5’ sequence. One of the new enzymes, GlaI, recognizes and cleaves the site R(5mC)GY. 
Due to this unique substrate specificity, GlaI is a convenient tool for identification of de novo methylated sites in human and mammalian DNA. GLAD-PCR assay includes 3 simple steps: GlaI hydrolysis of the studied DNA. At this step only R(5mC)GY sites are hydrolyzed; unmethylated RCGY sites remain uncut. Ligation of the universal adapter. As the adapter, an oligonucleotide duplex 5’-CCTGCTCTTTCATCG-3’/3’-pGGACGAGAAAGTAGCp-5’ is used, where “p” denotes a phosphate. Subsequent real-time PCR with a TaqMan probe. The genome primer and TaqMan probe are designed for the DNA region of interest; the other, hybrid primer consists of two parts: one part is complementary to the universal adapter and the other to the DNA at the point of GlaI hydrolysis. The assay is performed in one tube, takes about 2–3 hours and can detect even a few copies of DNA with the R(5mC)GY site of interest. References Epigenetics Methylation Molecular biology techniques
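As a side illustration (not part of the assay itself), candidate RCGY sites can be located in a sequence with a short script, using the IUPAC ambiguity codes R = A/G and Y = C/T; the sequence below is invented, and GlaI would cleave only those sites whose internal CG is methylated:

```python
import re

# Locate candidate RCGY sites in a DNA sequence (R = A/G, Y = C/T).
# The lookahead lets overlapping sites be reported as well.
seq = "TTACGCAAGCGTGGACGTTT"   # made-up example sequence
sites = [m.start() for m in re.finditer(r"(?=[AG]CG[CT])", seq)]
print(sites)  # [2, 8, 14]
```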
GLAD-PCR assay
[ "Chemistry", "Biology" ]
718
[ "Molecular biology techniques", "Methylation", "Molecular biology" ]
53,494,139
https://en.wikipedia.org/wiki/Shirley%20E.%20Schwartz
Shirley Ellen Schwartz or Ellen Shirley Schwartz (August 26, 1935 – May 8, 2016) was an American chemist and research scientist at General Motors, specializing in the study and development of industrial lubricants and automobile oil change indicator systems. She was inducted into the Michigan Women's Hall of Fame in 1996 for her accomplishments in the field of chemistry. Early life and education Born Ellen Shirley Eckwall in Detroit, Schwartz grew up in the Detroit suburb of Pleasant Ridge, and graduated from Lincoln High School in Ferndale. She earned three academic degrees in chemistry. She attended the University of Michigan where she received her Bachelor of Science degree in chemistry in 1957. Schwartz then enrolled at Wayne State University and earned her master's degree in 1962 and her doctorate in 1970. Career After teaching at Oakland Community College and the Detroit Institute of Technology, Schwartz began working at BASF Corporation in Wyandotte, Michigan, where she developed an industrial lubricant that, by virtue of being primarily water, reduced the amount of oil and consequently pollution. She then spent over 18 years working at General Motors, where she was senior research scientist, working in Research and Development Operations at the General Motors Technical Center in Warren, Michigan. During her career she came to hold more than 20 patents, and authored 173 technical papers. From 1989 to 2003 she wrote a regular column titled Love Letters to Lubrication Engineers in the journal of the Society of Tribologists and Lubrication Engineers, and was remembered in a 2016 memorial in that journal as "the mother of the oil life monitor found in most GM cars, which is responsible for not having to change oil nearly as often as we did previously, or conversely, not ruining your engine if you don't change it often enough." When presenting her with an achievement award in 1999, the Society of Women Engineers summarized Dr. Schwartz's career thusly: "Dr. 
Schwartz has examined engine oil degradation; wear, corrosion, and elastomer durability in engines; the effects of methanol and ethanol fuel on engines; and lubricants for air conditioners that use alternative refrigerants (other than Freon R12). Her work in these areas have been targeted towards: obtaining the maximum useful life of engine oil finding acceptable ways to use alternative energy sources developing refrigerant systems that will not hurt the earth's ozone layer" Awards and honors Schwartz was named a fellow of the Society of Automotive Engineers in 1999 and was elected to the National Academy of Engineering in 2000 "for contributions to lubrication engineering and for enriching the technical community through free-lance writing." She additionally received many industry awards: General Motors Kettering Award (1988): Awarded for computer-based method that assesses engine oil degradation as a function of oil temperature and displays the remaining life of the oil for the vehicle General Motors McCuen Award (1993) Gold Award from the Engineering Society of Detroit (1989) Wilbur Deutsch Memorial Award from the Society of Tribologists and Lubrication Engineers (1987) Colwell Award (1992) from the Society of Automotive Engineers Distinguished Speaker Award (1995) from the Society of Automotive Engineers Personal life Schwartz married her husband, Ron Schwartz, in 1957. References 1935 births 2016 deaths 20th-century American chemists American materials scientists Detroit Institute of Technology faculty General Motors people Members of the United States National Academy of Engineering Tribologists Women materials scientists and engineers University of Michigan College of Literature, Science, and the Arts alumni Wayne State University alumni People from Pleasant Ridge, Michigan People from Warren, Michigan Chemists from Michigan Scientists from Detroit Deaths from Alzheimer's disease in the United States Deaths from dementia in Michigan
Shirley E. Schwartz
[ "Materials_science", "Technology" ]
751
[ "Tribology", "Tribologists", "Materials scientists and engineers", "Women materials scientists and engineers", "Women in science and technology" ]
58,035,891
https://en.wikipedia.org/wiki/Bioinformatics%20discovery%20of%20non-coding%20RNAs
Non-coding RNAs have been discovered using both experimental and bioinformatic approaches. Bioinformatic approaches can be divided into three main categories. The first involves homology search, although these techniques are by definition unable to find new classes of ncRNAs. The second category includes algorithms designed to discover specific types of ncRNAs that have similar properties. Finally, some discovery methods are based on very general properties of RNA, and are thus able to discover entirely new kinds of ncRNAs. Discovery by homology search Homology search refers to the process of searching a sequence database for RNAs that are similar to already known RNA sequences. Any algorithm that is designed for homology search of nucleic acid sequences can be used, e.g., BLAST. However, such algorithms typically are not as sensitive or accurate as algorithms specifically designed for RNA. Of particular importance for RNA is the conservation of secondary structure, which can be modeled to achieve additional accuracy in searches. For example, covariance models can be viewed as an extension of a profile hidden Markov model that also reflects conserved secondary structure. Covariance models are implemented in the Infernal software package. Discovery of specific types of ncRNAs Some types of RNAs have shared properties that algorithms can exploit. For example, tRNAscan-SE specializes in finding tRNAs. The heart of this program is a tRNA homology search based on covariance models, but other tRNA-specific search programs are used to accelerate searches. The properties of snoRNAs have enabled the development of programs to detect new examples of snoRNAs, including those that might be only distantly related to previously known examples. Computer programs implementing such approaches include snoscan and snoReport. Similarly, several algorithms have been developed to detect microRNAs. Examples include miRNAFold and miRNAminer. 
Discovery by general properties Some properties are shared by multiple unrelated classes of ncRNA, and these properties can be targeted to discover new classes. Chief among them is the conservation of an RNA secondary structure. To measure conservation of secondary structure, it is necessary to somehow find homologous sequences that might exhibit a common structure. Strategies to do this have included using BLAST between two or more sequences, exploiting synteny via orthologous genes, or using locality-sensitive hashing in combination with sequence and structural features. Mutations that change the nucleotide sequence but preserve secondary structure are called covariation, and can provide evidence of conservation. Other statistics and probabilistic models can be used to measure such conservation. The first ncRNA discovery method to use structural conservation was QRNA, which compared the probabilities of an alignment of two sequences based on either an RNA model or a model in which only the primary sequence is conserved. Work in this direction has allowed for more than two sequences and included phylogenetic models, e.g., with EvoFold. An approach taken in RNAz involved computing statistics on an input multiple-sequence alignment. Some of these statistics relate to structural conservation, while others measure general properties of the alignment that could affect the expected ranges of the structural statistics. These statistics were combined using a support vector machine. Other properties include the appearance of a promoter to transcribe the RNA. ncRNAs are also often followed by a Rho-independent transcription terminator. Using a combination of these approaches, multiple studies have enumerated candidate RNAs. Some studies have proceeded to manual analysis of the predictions to arrive at detailed structural and functional predictions. 
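The covariation idea can be sketched in a few lines: a candidate base pair is supported when the aligned sequences vary at the two columns yet remain complementary. The toy alignment and the simple support score below are invented for illustration and do not reproduce any published scoring scheme:

```python
# Toy covariation check: a candidate base pair (i, j) is supported when
# aligned sequences can still base-pair at those columns even though the
# nucleotides themselves differ between sequences.
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

alignment = [          # invented alignment of three homologous RNAs
    "GGCAAGCC",
    "GACAAGUC",
    "CUCAAGAG",
]

def pair_support(aln, i, j):
    """Fraction of sequences in which columns i and j can form a base pair."""
    return sum((s[i], s[j]) in PAIRS for s in aln) / len(aln)

print(pair_support(alignment, 0, 7))  # 1.0: columns vary yet stay complementary
print(pair_support(alignment, 1, 6))  # 1.0
print(pair_support(alignment, 3, 4))  # 0.0: unpaired loop positions
```

Columns 0/7 and 1/6 mutate between sequences while preserving complementarity, which is exactly the covariation signal that structure-based methods exploit.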
See also 6A RNA motif AbiF RNA motif ARRPOF RNA motif CyVA-1 RNA motif List of RNA structure prediction software References Non-coding RNA Bioinformatics
Bioinformatics discovery of non-coding RNAs
[ "Engineering", "Biology" ]
762
[ "Bioinformatics", "Biological engineering" ]
58,037,802
https://en.wikipedia.org/wiki/Compensatory%20conductance
The compensatory root water uptake conductance (Kcomp) characterizes how a plant compensates its water uptake under a heterogeneous water potential. It controls root water uptake in a soil where the water potential is not uniform. See also Standard Uptake Fraction Hydraulic conductivity References Plant physiology
Compensatory conductance
[ "Biology" ]
68
[ "Plant physiology", "Plants" ]
58,039,179
https://en.wikipedia.org/wiki/Aluthge%20transform
In mathematics and more precisely in functional analysis, the Aluthge transformation is an operation defined on the set of bounded operators of a Hilbert space. It was introduced by Ariyadasa Aluthge to study p-hyponormal linear operators. Definition Let \(H\) be a Hilbert space and let \(B(H)\) be the algebra of linear operators from \(H\) to \(H\). By the polar decomposition theorem, for every \(T \in B(H)\) there exists a unique partial isometry \(U\) such that \(T = U|T|\) and \(\ker(U) = \ker(T)\), where \(|T| = (T^*T)^{1/2}\) is the square root of the operator \(T^*T\). If \(T \in B(H)\) and \(T = U|T|\) is its polar decomposition, the Aluthge transform of \(T\) is the operator \(\Delta(T)\) defined as: \(\Delta(T) = |T|^{1/2}\, U\, |T|^{1/2}.\) More generally, for any real number \(\lambda \in [0,1]\), the \(\lambda\)-Aluthge transformation is defined as \(\Delta_\lambda(T) = |T|^{\lambda}\, U\, |T|^{1-\lambda}.\) Example For vectors \(x, y \in H\), let \(x \otimes y\) denote the rank-one operator defined as \((x \otimes y)z = \langle z, y\rangle\, x.\) An elementary calculation shows that if \(y \neq 0\), then \(\Delta(x \otimes y) = \frac{\langle x, y\rangle}{\|y\|^2}\, y \otimes y.\) Notes References External links Bilinear forms Matrices Topology
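For matrices, the transform can be computed numerically via the SVD-based polar decomposition; the sketch below uses a made-up example matrix and illustrates the known fact that the Aluthge transform preserves the spectrum (for invertible T, |T|^{1/2} U |T|^{1/2} is similar to T):

```python
import numpy as np

# Numerical sketch of the Aluthge transform for matrices. From the SVD
# T = W diag(s) V*, the polar decomposition T = U|T| is U = W V* and
# |T| = V diag(s) V*, so |T|^(1/2) = V diag(sqrt(s)) V*.
def aluthge(T):
    W, s, Vh = np.linalg.svd(T)
    U = W @ Vh
    sqrtP = Vh.conj().T @ np.diag(np.sqrt(s)) @ Vh  # |T|^(1/2)
    return sqrtP @ U @ sqrtP

T = np.array([[1.0, 2.0],
              [0.0, 3.0]])   # made-up example; eigenvalues 1 and 3
AT = aluthge(T)
# The Aluthge transform preserves the spectrum; both prints should show {1, 3}.
print(np.linalg.eigvals(T))
print(np.linalg.eigvals(AT))
```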
Aluthge transform
[ "Physics", "Mathematics" ]
160
[ "Mathematical objects", "Matrices (mathematics)", "Topology", "Space", "Geometry", "Spacetime" ]
58,042,122
https://en.wikipedia.org/wiki/Tetramethylphosphonium%20bromide
Tetramethylphosphonium bromide is an organophosphorus compound with the formula (CH3)4PBr. It is a white, water-soluble solid, the salt of the cation tetramethylphosphonium and the bromide anion. It is prepared by treating trimethylphosphine with methyl bromide. Reactions Deprotonation gives methylenetrimethylphosphine ylide, which can sustain a second deprotonation: (CH3)4PBr + BuLi → (CH3)3P=CH2 + LiBr + BuH (CH3)3P=CH2 + BuLi → (CH3)2P(CH2)2Li + BuH The latter is a precursor to many coordination complexes, e.g., the dicuprous complex Cu2[Me2P(CH2)2]2. References Quaternary phosphonium compounds Bromides Organophosphorus compounds
Tetramethylphosphonium bromide
[ "Chemistry" ]
209
[ "Functional groups", "Salts", "Organic compounds", "Bromides", "Organophosphorus compounds" ]
59,675,893
https://en.wikipedia.org/wiki/Tissue%20engineering%20of%20heart%20valves
Tissue engineered heart valves (TEHV) offer a new and advancing proposed treatment of creating a living heart valve for people who are in need of either a full or partial heart valve replacement. Currently, there are over a quarter of a million prosthetic heart valves implanted annually, and the number of patients requiring replacement surgeries is expected to rise and even triple over the next fifty years. While current treatments offered such as mechanical valves or biological valves are not deleterious to one's health, they both have their own limitations in that mechanical valves necessitate the lifelong use of anticoagulants while biological valves are susceptible to structural degradation and reoperation. Thus, in situ (in its original position or place) tissue engineering of heart valves serves as a novel approach that explores creating a living heart valve composed of the host's own cells that is capable of growing, adapting, and interacting within the human body's biological system. Research has not yet reached the stage of clinical trials. Procedure Scaffolds Various biomaterials, whether they are biological, synthetic, or a combination of both, can be used to create scaffolds, which when implanted in a human body can promote host tissue regeneration. First, cells are harvested from the patient in whom the scaffold will be implanted. These cells are expanded and seeded into the created scaffold, which is then inserted inside the human body. The human body serves as a bioreactor, which allows the formation of an extracellular matrix (ECM) along with fibrous proteins around the scaffold to provide the necessary environment for the heart and circulatory system. The initial implantation of the foreign scaffold triggers various signaling pathways guided by the foreign body response for cell recruitment from neighboring tissues. The new nanofiber network surrounding the scaffold mimics the native ECM of the host body. 
Once cells begin to populate the scaffold, the scaffold is designed to gradually degrade, leaving behind a constructed heart valve made of the host body's own cells that is fully capable of cell repopulation and withstanding environmental changes within the body. The scaffold designed for tissue engineering is one of the most crucial components because it guides tissue construction, viability, and functionality long after implantation and degradation. Biological Biological scaffolds can be created from human donor tissue or from animals; however, animal tissue is often more popular since it is more widely accessible and more plentiful. Xenograft heart valves (from a donor of a different species from the recipient) can come from pigs, cows, or sheep. Whether human or animal tissue is used, the first step in creating useful scaffolds is decellularization, which means removing the cellular contents while preserving the ECM, which is advantageous compared to manufacturing synthetic scaffolds from scratch. Many decellularization methods have been used, such as nonionic and ionic detergents that disrupt cellular material interactions or enzymes that cleave peptide bonds, RNA, and DNA. Fabricated There are also current approaches that manufacture scaffolds and couple them with biological cues. Fabricated scaffolds can also be manufactured from scratch, using biological or synthetic materials or a combination of both, to mimic the native heart valve observed using imaging techniques. Since the scaffold is created from raw materials, there is much more flexibility in controlling the scaffold's properties, and it can be more closely tailored. Some types of fabricated scaffolds include solid 3-D porous scaffolds that have a large pore network that permits the flow through of cellular debris, allowing further tissue and vascular growth. 
3-D porous scaffolds can be manufactured through 3-D printing of various polymers, ranging from polyglycolic acid (PGA) and polylactic acid (PLA) to more natural polymers such as collagen. Fibrous scaffolds have the potential to closely match the structure of ECM through their use of fibers, which have a high growth factor. Techniques to produce fibrous scaffolds include electrospinning, in which a liquid solution of polymers is stretched by an applied high electric voltage to produce thin fibers. In contrast to 3-D porous scaffolds, fibrous scaffolds have a very small pore size that prevents the pervasion of cells within the scaffold. Hydrogel scaffolds are created by cross-linking hydrophilic polymers through various reactions such as free radical polymerization or conjugate addition reactions. Hydrogels are beneficial because they have a high water content, which allows nutrients and small molecules to pass through easily. Biocompatibility The biocompatibility of surgically implanted foreign biomaterial refers to the interactions between the biomaterial and the host body tissue. Cell line as well as cell type such as fibroblasts can largely impact tissue responses towards implanted foreign devices by changing cell morphology. Thus the cell source as well as protein adsorption, which depends on biomaterial surface properties, play a crucial role in tissue response and cell infiltration at the scaffold site. Methodology Inflammatory response Acute inflammation Implantation of any foreign device or material through the means of surgery results in at least some degree of tissue trauma. Therefore, especially when removing a native heart valve either partially or completely, the tissue trauma will trigger a cascade of inflammatory responses and elicit acute inflammation. During the initial phase of acute inflammation, vasodilation occurs to increase blood flow to the wound site along with the release of growth factors, cytokines, and other immune cells. 
Furthermore, cells release reactive oxygen species and cytokines, which cause secondary damage to surrounding tissue. These chemical factors then proceed to promote the recruitment of other immune responsive cells such as monocytes or white blood cells, which help foster the formation of a blood clot and protein-rich matrix. Chronic inflammation If the acute inflammatory response persists, the body then proceeds to undergo chronic inflammation. During this continual and systemic inflammation phase, one of the primary driving forces is the infiltration of macrophages. The macrophages and lymphocytes induce the formation of new tissues and blood vessels to help supply nutrients to the biomaterial site. New fibrous tissue then encapsulates the foreign biomaterial in order to minimize interactions between the biomaterial and surrounding tissue. While the prolonging of chronic inflammation may be a likely indicator for an infection, inflammation may on occasion be present upwards of five years post-surgery. Chronic inflammation marked by the presence of fibrosis and inflammatory cells was observed in rat cells 30 days post implantation of a device. Following chronic inflammation, mineralization occurs approximately 60 days after implantation due to the buildup of cellular debris and calcification, which has the potential to compromise the functionality of biocompatible implanted devices in vivo. Foreign body response Under normal physiological conditions, inflammatory cells protect the body from foreign objects, and the body undergoes a foreign body reaction based on the adsorption of blood and proteins on the biomaterial surface. In the first two to four weeks post implant, there is an association between biomaterial adherent macrophages and cytokine expression near the foreign implant site, which can be explored using semi-quantitative RT-PCR. 
Macrophages fuse together to form foreign body giant cells (FBGCs), which similarly express cytokine receptors on their cell membranes and actively participate in the inflammatory response. Device failure in organic polyether polyurethane (PEU) pacemakers compared to silicone rubber showcases that the foreign body response may indeed lead to degradation of biomaterials, causing subsequent device failures. Approaches to prevent compromise of functionality and durability have been proposed to minimize and slow the rate of biomaterial degradation. Benefits Tissue engineered heart valves offer certain advantages over traditional biological and mechanical valves: Living valve – The option of a living heart valve replacement is highly optimal for children as the live valve has the ability to grow and respond to its biological environment, which is especially beneficial for children whose bodies are continually changing. This option would help reduce the number of reoperations needed in a child's life. Customized process – Since the scaffolds used in tissue engineering can be manufactured from scratch, there is a higher degree of flexibility and control. This allows properties of tissue engineered heart valves, such as the scaffold's shape and biomaterial makeup, to be tailored specifically to the patient. Risks and challenges Many risks and challenges must still be addressed and explored before tissue engineered heart valves can fully be clinically implemented: Contamination – Particular source materials can foster a microbiological environment that is conducive to the susceptibility of viruses and infectious diseases. Anytime an external scaffold is implanted within the human body, contamination, while inevitable, can be diminished through the enforcement of sterile technique. 
Scaffold Interactions – There are many risks associated with the interactions between cells and the implanted scaffold as specific biocompatibility requirements are still largely unknown with current research. The response to these interactions is also highly individualistic, dependent on the specific patient's biological environment; therefore, animal models researched prior may not accurately portray outcomes in the human body. Due to the highly interactive nature between the scaffold and surrounding tissue, properties such as biodegradability, biocompatibility, and immunogenicity must all be carefully considered as they are key factors in the performance of the final product. Structural complexity – Heart valves with their heterogeneous structure are very complex and dynamic, thus posing a challenge for tissue engineered valves to mimic. The new valves must have high durability while also meeting the anatomical shape and mechanical functions of the native valve. History Synthetic scaffolds Early studies seeded polymer scaffolds with various cell lines in vitro, in which the scaffolds degraded over time while leaving behind a cellular matrix and proteins. The first study on tissue engineering of heart valves was published in 1995. During 1995 and 1996, Shinoka used a scaffold made of polyglycolic acid (PGA), approved by the FDA for human implantation, and seeded it with sheep endothelial cells and fibroblasts with the goal of replacing a sheep's pulmonary valve leaflet. What resulted from Shinoka's study was an engineered heart valve that was much thicker and more rigid, which prompted Hoerstrup to conduct a study to replace all three pulmonary valve leaflets in a sheep using a poly-4-hydroxybutyrate (P4HB) coated PGA scaffold and sheep endothelial cells and myofibroblasts. Biological scaffolds Another option studied was using decellularized biological scaffolds and seeding them with their corresponding cells in vitro. 
In 2000, Steinhoff implanted a decellularized sheep pulmonary valve scaffold seeded with sheep endothelial cells and myofibroblasts. Dohmen then created a decellularized cryopreserved pulmonary allograft scaffold and seeded it with human vascular endothelial cells to reconstruct the right ventricular outflow tract (RVOT) in a human patient in 2002. Perry in 2003 seeded a P4HB coated PGA scaffold with sheep mesenchymal stem cells in vitro; however, an in vivo study was not performed. In 2004, Iwai conducted a study using a poly(lactic-co-glycolic acid) PLGA compounded with collagen microsponge sphere scaffold, which was seeded with endothelial and smooth muscle cells at the site of a dog's pulmonary artery. Sutherland in 2005 utilized a sheep mesenchymal stem cell seeded PGA and poly-L-lactic acid (PLLA) scaffold to replace all three pulmonary valve leaflets in a sheep. In vivo implant studies A handful of studies utilized tissue engineering of heart valves in vivo in animal models and humans. In 2000, Matheny conducted a study in which he used a pig's small intestinal submucosa to replace one pulmonary valve leaflet. Limited studies have also been conducted in a clinical setting. For instance in 2001, Elkins implanted SynerGraft treated decellularized human pulmonary valves in patients. Simon similarly used SynerGraft decellularized pig valves for implantation in children; however, these valves widely failed as there were no host cells but rather high amounts of inflammatory cells found at the scaffold site instead. Studies led by Dohmen, Konertz, and colleagues in Berlin, Germany involved the implantation of a biological pig valve in 50 patients who underwent the Ross operation from 2002 to 2004. Using a decellularized porcine xenograft valve, also called Matrix P, in adults with a median age of 46 years, the aim of the study was to offer a proposal for pulmonary valve replacement. 
While some patients died postoperatively and had to undergo reoperation, the short-term results appear to be going well as the valve is behaving similarly to a native, healthy valve. One animal trial combined the transcatheter aortic valve replacement (TAVR) procedure with tissue engineered heart valves (TEHVs). A TAVR stent integrated with human cell-derived extracellular matrix was implanted and examined in sheep, in which the valve upheld structural integrity and cell infiltration, allowing the potential clinical application to extend TAVR to younger patients. Research While many in vitro and in vivo studies have been tested in animal models, the translation from animal models to humans has not begun. Factors such as the size of surgical cut sites, duration of the procedure, and available resources and cost must all be considered. Synthetic nanomaterials have the potential to advance scaffoldings used in tissue engineering of heart valves. The use of nanotechnology could help expand beneficial properties of fabricated scaffolds such as higher tensile strength. See also Tissue engineering Valvular heart disease Valve replacement Artificial heart valve Nanotechnology References Tissue engineering Organ transplantation
Tissue engineering of heart valves
[ "Chemistry", "Engineering", "Biology" ]
2,955
[ "Biological engineering", "Cloning", "Chemical engineering", "Tissue engineering", "Medical technology" ]
59,676,244
https://en.wikipedia.org/wiki/QM-AM-GM-HM%20inequalities
In mathematics, the QM-AM-GM-HM inequalities, also known as the mean inequality chain, state the relationship between the harmonic mean, geometric mean, arithmetic mean, and quadratic mean (also known as root mean square). Suppose that \(x_1, x_2, \ldots, x_n\) are positive real numbers. Then \[ \frac{n}{\frac{1}{x_1} + \cdots + \frac{1}{x_n}} \le \sqrt[n]{x_1 x_2 \cdots x_n} \le \frac{x_1 + \cdots + x_n}{n} \le \sqrt{\frac{x_1^2 + \cdots + x_n^2}{n}}. \] These inequalities often appear in mathematical competitions and have applications in many fields of science. Proof There are three inequalities between means to prove. There are various methods to prove the inequalities, including mathematical induction, the Cauchy–Schwarz inequality, Lagrange multipliers, and Jensen's inequality. For several proofs that GM ≤ AM, see Inequality of arithmetic and geometric means. AM-QM inequality From the Cauchy–Schwarz inequality on real numbers, setting one vector to \((1, 1, \ldots, 1)\): \[ (x_1 + \cdots + x_n)^2 \le n\,(x_1^2 + \cdots + x_n^2), \] hence \(\left(\frac{x_1 + \cdots + x_n}{n}\right)^2 \le \frac{x_1^2 + \cdots + x_n^2}{n}\). For positive \(x_i\) the square root of this gives the inequality. HM-GM inequality The reciprocal of the harmonic mean is the arithmetic mean of the reciprocals \(\frac{1}{x_1}, \ldots, \frac{1}{x_n}\), and it exceeds the geometric mean of the reciprocals, \(1/\sqrt[n]{x_1 \cdots x_n}\), by the AM-GM inequality. Taking reciprocals, which reverses the inequality, implies the inequality: \[ \frac{n}{\frac{1}{x_1} + \cdots + \frac{1}{x_n}} \le \sqrt[n]{x_1 \cdots x_n}. \] The n = 2 case When n = 2, the inequalities become \[ \frac{2 x_1 x_2}{x_1 + x_2} \le \sqrt{x_1 x_2} \le \frac{x_1 + x_2}{2} \le \sqrt{\frac{x_1^2 + x_2^2}{2}} \] for all \(x_1, x_2 > 0\), which can be visualized in a semi-circle whose diameter is [AB] and center D. Suppose AC = x1 and BC = x2. Construct perpendiculars to [AB] at D and C respectively, meeting the semi-circle at E and F. Join [CE] and [DF] and further construct a perpendicular [CG] to [DF] at G. Then the length of GF can be calculated to be the harmonic mean, CF to be the geometric mean, DE to be the arithmetic mean, and CE to be the quadratic mean. The inequalities then follow easily by the Pythagorean theorem. Tests To infer the correct order, the four expressions can be evaluated with two positive numbers. For \(x_1 = 1\) and \(x_2 = 9\) in particular, this results in \(1.8 < 3 < 5 < \sqrt{41} \approx 6.4\). See also Inequalities among Pythagorean means Generalized mean inequality References External links The HM-GM-AM-QM Inequalities Useful inequalities cheat sheet entry "means" on the right of page 1 Inequalities
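The whole chain is easy to check numerically; the sketch below verifies it on random positive inputs (the function and variable names are arbitrary):

```python
import math
import random

# Numerical check of HM <= GM <= AM <= QM on random positive inputs.
def means(xs):
    n = len(xs)
    hm = n / sum(1 / x for x in xs)                 # harmonic mean
    gm = math.prod(xs) ** (1 / n)                   # geometric mean
    am = sum(xs) / n                                # arithmetic mean
    qm = math.sqrt(sum(x * x for x in xs) / n)      # quadratic mean (RMS)
    return hm, gm, am, qm

random.seed(0)
for _ in range(1000):
    xs = [random.uniform(0.1, 100.0) for _ in range(5)]
    hm, gm, am, qm = means(xs)
    assert hm <= gm <= am <= qm

print(means([1, 9]))  # HM=1.8, GM=3.0, AM=5.0, QM=sqrt(41)≈6.40
```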
QM-AM-GM-HM inequalities
[ "Physics", "Mathematics" ]
440
[ "Means", "Mathematical analysis", "Point (geometry)", "Mathematical theorems", "Geometric centers", "Binary relations", "Mathematical relations", "Inequalities (mathematics)", "Articles containing proofs", "Mathematical problems", "Symmetry" ]
59,676,605
https://en.wikipedia.org/wiki/Institute%20of%20Theoretical%20Astrophysics
The Institute of Theoretical Astrophysics (Norwegian: Institutt for teoretisk astrofysikk, abbreviated ITA) is a research and teaching institute dedicated to astronomy, astrophysics and solar physics located at Blindern in Oslo, Norway. It is a department of The Faculty of Mathematics and Natural Sciences at the University of Oslo. It was founded in its current form by Svein Rosseland with funding from the Rockefeller Foundation in 1934, and was the first of its kind in the world when it opened. Prior to that, it existed as the University Observatory which was created in 1833. It thus is one of the university's oldest institutions. As of 2019, it houses research groups in cosmology, extragalactic astronomy, and The Rosseland Centre for Solar Physics, a Norwegian Centre of Excellence. History The observatory Prior to 1934, the university's astronomy efforts revolved around the University Observatory (Norwegian: Universitetsobservatoriet, abbreviated Observatoriet, lit. the Observatory) located in downtown Oslo. The first observation facilities were provided in 1815 to the newly appointed professor Christopher Hansteen of the recently established Royal Frederick University (which was renamed the University of Oslo in 1939) in an octagonal shack at Akershus Festning, Christiania. Construction began in 1831 on a larger observatory which also could house Hansteen and his family. At its completion in 1833 it became the first building to have been erected by the university. An Institute of Theoretical Astrophysics The Observatory's final director, professor Svein Rosseland (appointed in 1928), did not consider its future to be promising, a concern he expressed in a letter to a colleague. He visited the Harvard College Observatory in 1929, and accepted a professorship there. However, rector Sem Sæland of the University of Oslo saw this as a great loss, as Rosseland already had become an internationally renowned scientist at the time. 
Sæland coordinated a political effort in which Rosseland was offered the management of an astronomical fund provided by the state and the prospect of new university facilities, and was promised a general renovation of the observatory. Rosseland accepted, and returned to Oslo in 1930. He then contacted Niels Bohr in Copenhagen, who had recently founded the Institute of Theoretical Physics, for inspiration and high-level plans. Rosseland concluded that the director should reside at the institute: in his opinion, scientific work did not keep to fixed working hours, and the director should always be available. An architectural competition was announced, and the winning design was sent to the Rockefeller Foundation. Nothing like his proposed institute of theoretical astrophysics existed elsewhere in the world at the time. The foundation replied on 15 April 1931, granting him 105,000 dollars to erect the institute and 15,000 dollars to obtain scientific equipment. The architectural firm of Finn Bryn and Johan Fredrik Ellefsen designed the building for Rosseland at Blindern campus in Oslo. It opened 1 July 1934 and was named Svein Rosselands hus (lit. the house of Svein Rosseland). The building is a striking example of functionalism, unlike the nearby building for physics and chemistry, which was originally designed in neoclassical style. Both scientists and the library were moved from the observatory to the new premises. The first two floors and the basement were purposed for research and teaching, and Rosseland himself resided in the upper three floors. A plaque honoring the Rockefeller Foundation can be found near the entrance. In the early days, the institute housed Rosseland himself, his assistant Gunnar Randers, two of the founders of modern meteorology: retired Norwegian dean of science Vilhelm Bjerknes and professor Halvor Solberg, as well as Carl Størmer, a mathematics professor who also studied the northern lights. 
Rosseland's international recognition led to visits from prominent scientists such as Martin Schwarzschild. Instruments, observatories, and telescopes The institute housed the Oslo Analyzer in its basement between 1934 and 1954. It was the most powerful differential analyzer in the world for four years after its creation. Key pieces were buried in the garden behind the institute during WW2 to prevent the machine from being used by the Nazis. The institute had its own solar observatory outside Oslo between 1954 and 1987, the Harestua Solar Observatory. It has been used for science education purposes since ceasing to exist as a research facility. A subsequent telescope was proposed, the Large European Solar Telescope. After the completion of an initial scientific requirement analysis in 1982, a legal body was formed in 1983. The telescope was, however, never realized. Solar physicists at ITA have routinely been using the Swedish Solar Telescope since it saw first light in 2002. The institute contributed to, and made use of, the solar imager High Resolution Telescope and Spectrograph of the Naval Research Laboratory, which was launched on rockets and flew once with the Space Shuttle between 1975 and 1985. The space-borne Solar and Heliospheric Observatory was launched in 1995. The institute provided the ground test system and computers. In 1988, the Nordic Optical Telescope at La Palma was opened. It was co-funded by Norway and is used by astronomers at ITA. The work of the former celestial mechanics research group at the institute was instrumental in determining the path ESA's Rosetta spacecraft would take when approaching its target, the comet 67P/Churyumov–Gerasimenko in 2014. The institute led the Norwegian contributions to the Planck mission of ESA until its final release of results in 2018. Scientists at the institute were instrumental in analyzing the resulting maps of the cosmic microwave background (CMB). 
A Center of Excellence The solar physics group at the institute was granted status as a Norwegian Centre of Excellence in 2017 for the period 2017–2027 under the direction of Mats Carlsson. Directors Christopher Hansteen (1834–1861) Carl Frederik Fearnley (1861–1890) Hans Geelmuyden (1890–1919) Jens Fredrik Schroeter (1919–1927) Svein Rosseland (1928–35 at the observatory, 1935–1965 at ITA) Mats Carlsson (1997–2003 (?)) Per Barth Lilje (2003–2012) Viggo Hansteen (2013–2017) Per Barth Lilje (2017–) Research The institute is engaged in various fields of theoretical, observational and numerical astrophysics. The cosmology group is engaged in analysis of data from cosmic microwave background-related experiments such as CORE, GreenPol, LiteBIRD, PASIPHAE, QUIET and SPIDER. The group is also researching the nature of the accelerating cosmological expansion and the nature of dark matter, both through theoretical and numerical investigations into modifying general relativity as well as the future Euclid mission of ESA. The extragalactic astronomy group is organized under the cosmology group. Its scientists use simulations of galaxy formation, radiative transfer simulations, and observations of gravitationally lensed galaxies to understand and investigate the Universe beyond our own galaxy. Observations are carried out with the Hubble Space Telescope and the Nordic Optical Telescope among others. The group is also a key player in the COMAP carbon monoxide intensity mapping experiment. The Rosseland Centre for Solar Physics combines theory, numerics and observations to provide insights into the solar atmosphere. It is regarded as one of the world's foremost solar physics research institutions. With an allocated amount of 115 million CPU hours in 2018, it is also the most data-intensive research group in Norway. The institute hosts the European data center for data from the Hinode satellite. 
It has an in-house developed 3D numerical model of the solar atmosphere called Bifrost. Besides using the Swedish Solar Telescope and Hinode for solar observations, the group also makes use of the space-borne Interface Region Imaging Spectrograph (IRIS), the Solar Dynamics Observatory as well as the ground-based Swedish Solar Telescope (SST) and the Atacama Large Millimeter Array (ALMA). The Almanac of Norway The official almanac of Norway has been published since 1644. After the dissolution of the Denmark-Norway union in 1814, the almanac has been edited in Norway. In 1814, it was edited by the Danish astronomer Thomas Bugge. Christopher Hansteen became editor in 1815 and remained so until 1862. Directors and astronomers at the Observatory and ITA have been editing it ever since. References Astrophysics research institutes University of Oslo
Institute of Theoretical Astrophysics
[ "Physics" ]
1,709
[ "Astrophysics research institutes", "Astrophysics" ]
59,677,453
https://en.wikipedia.org/wiki/Tonelada
The tonelada (Spanish and Portuguese for "a tunful") was a conventional Spanish and Portuguese unit of mass, volume, and capacity roughly equivalent to the English "ton" in its various senses. In English, following Spain and Portugal's adoption of the metric system, the toneladas are most often used to specify the capacity of Spanish and Portuguese ships during the Age of Exploration with greater care than simply using the misleadingly vague calque "ton". However, as with the ton, the specific size of the units varied with time and location. Spanish unit The Spanish tonelada of volume was reckoned as 2 butts or pipes ( or ) and equivalent to 968.2 liters or 255.8 gallons. The Spanish tonelada of shipping capacity varied in size and method of computation over the years but scholars place the usual value for southern Spain from Columbus through the Age of Exploration at about or This was the same as the "sea ton" () used in early modern Bordeaux, France, and roughly half of the English old measure and British gross register tons. (The present system of tonnage varies logarithmically with ship size and cannot be linearly converted.) At other times, it was closer to of the British shipping ton. The Spanish tonelada of mass was normally reckoned as 20 quintals or 2000 Spanish pounds (). The Castilian Spanish pound was standardized as about 460 grams by the 19th century, producing a tonelada of around 920 kilograms or 2030 pounds avoirdupois. In Mexico, the tonelada was instead reckoned as 2240 Castilian pounds, 1030.4 kg or 2266.9 lbs., while Valencia used only 1920 slightly heavier pounds (about 534 grams), so that it was equivalent to 1025.3 kg or 2255.7 lbs. Portuguese unit The Portuguese tonelada of volume was initially reckoned as 2 pipes (), which in the 19th century was equivalent to 860.3 liters or 226.3 gallons. Following metrification, Portugal used a quasimetric tonelada of exactly 800 liters while Brazil used a kiloliter tonelada of exactly 1000 liters. 
The Portuguese tonelada of mass was reckoned as 1728 arratels in Europe and Rio de Janeiro but 2240 arratels in Pernambuco. The arratel was standardized in Portugal and Brazil as about 460 grams by the 19th century, producing a lighter tonelada of around 793.2 kilograms or 1748.5 pounds avoirdupois and a heavier one around 1028.2 kg and 2266.7 lbs. See also Metric ton (Portuguese & or simply ) English tons (Portuguese & , , or simply ) Notes References Citations Bibliography Spanish customary measurements Obsolete units of measurement Units of mass Units of volume Nautical terminology Ship measurements
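The regional mass figures quoted above reduce to simple arithmetic on the local pound. A minimal sketch (the function name and the rounded pound masses are illustrative assumptions, not from the article):

```python
# Sketch: recomputing the tonelada mass figures from their pound
# definitions. Pound masses in grams follow the article's approximate
# 19th-century standardizations.

CASTILIAN_POUND_G = 460.0   # Castilian Spanish pound, approx.
VALENCIAN_POUND_G = 534.0   # heavier Valencian pound, approx.

def tonelada_kg(pounds: int, pound_g: float) -> float:
    """Mass in kg of a tonelada defined as `pounds` local pounds."""
    return pounds * pound_g / 1000.0

spain = tonelada_kg(2000, CASTILIAN_POUND_G)      # ~920 kg
mexico = tonelada_kg(2240, CASTILIAN_POUND_G)     # ~1030.4 kg
valencia = tonelada_kg(1920, VALENCIAN_POUND_G)   # ~1025.3 kg
```

The three results match the article's 920 kg, 1030.4 kg and 1025.3 kg values.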
Tonelada
[ "Physics", "Mathematics" ]
599
[ "Obsolete units of measurement", "Matter", "Units of volume", "Quantity", "Units of mass", "Mass", "Units of measurement" ]
74,749,311
https://en.wikipedia.org/wiki/Barium%20nitrite
Barium nitrite is a chemical compound, the nitrous acid salt of barium. It has the chemical formula Ba(NO2)2. It is a water-soluble yellow powder. It is used to prepare other metal nitrites, such as lithium nitrite. Synthesis Barium nitrite can be made by reacting barium nitrate with lead metal sponge, or by reaction of lead nitrite with barium chloride. Safety Barium nitrite is toxic if ingested or inhaled, as both barium and the nitrite ion are toxic. References Inorganic compounds Barium compounds Nitrites
Barium nitrite
[ "Chemistry" ]
129
[ "Inorganic compounds" ]
74,753,162
https://en.wikipedia.org/wiki/Manganese%28III%29%20phosphate
Manganese(III) phosphate is an inorganic chemical compound of manganese with the formula MnPO4. It is a hygroscopic purple solid that absorbs moisture to form the pale-green monohydrate, though the anhydrous and monohydrate forms are typically each synthesized by separate methods. Production and properties Manganese phosphate monohydrate is produced by the reaction of an Mn(II) salt, such as manganese(II) sulfate, and phosphoric acid, followed by oxidation by nitric acid. Another method of producing the monohydrate is by the comproportionation of permanganate and Mn(II) in phosphoric acid: MnO4– + 4 Mn2+ + 10 PO43– + 8 H+ → 5 [Mn(PO4)2]3– + 4 H2O The diphosphomanganate(III) ion slowly converts to the monohydrate. Heating of the monohydrate does not yield the anhydrous form; instead, it decomposes to manganese(II) pyrophosphate (Mn2P2O7) at 420 °C: 4 MnPO4·H2O → 2 Mn2P2O7 + 4 H2O + O2 To produce the anhydrous form, lithium manganese(II) phosphate is oxidized with nitronium tetrafluoroborate under inert conditions. The anhydrous form is sensitive to moisture. In the absence of moisture, it decomposes at 400 °C, but when moisture is present, it slowly transitions to an amorphous phase and decomposes at 250 °C. Structure and natural occurrence The anhydrous form has an olivine structure and naturally occurs as the mineral purpurite. The monohydrate has a monoclinic structure, similar to that of magnesium sulfate monohydrate, but has distortions at the octahedral manganese center due to the Jahn-Teller effect. It naturally occurs as the mineral serrabrancaite. The monohydrate form has cell parameters of a = 6.912 Å, b = 7.470 Å, β = 112.3°, and Z = 4. It consists of interconnected distorted trans-[Mn(PO4)4(H2O)2] octahedra. References Manganese(III) compounds Phosphates
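As a sanity check on the decomposition equation above, the reaction 4 MnPO4·H2O → 2 Mn2P2O7 + 4 H2O + O2 can be verified to conserve mass from standard atomic weights. A small sketch (the helper is hypothetical, not part of the article):

```python
# Sketch: mass balance of 4 MnPO4·H2O -> 2 Mn2P2O7 + 4 H2O + O2
# using standard atomic weights in g/mol.
ATOMIC_MASS = {"Mn": 54.938, "P": 30.974, "O": 15.999, "H": 1.008}

def molar_mass(formula: dict) -> float:
    """Molar mass in g/mol from an element-count mapping."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

monohydrate = molar_mass({"Mn": 1, "P": 1, "O": 5, "H": 2})    # MnPO4·H2O, ~167.9
pyrophosphate = molar_mass({"Mn": 2, "P": 2, "O": 7})          # Mn2P2O7, ~283.8
water = molar_mass({"H": 2, "O": 1})
oxygen = molar_mass({"O": 2})

lhs = 4 * monohydrate
rhs = 2 * pyrophosphate + 4 * water + oxygen   # equal: the equation balances
```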
Manganese(III) phosphate
[ "Chemistry" ]
515
[ "Phosphates", "Salts" ]
74,756,339
https://en.wikipedia.org/wiki/Optical%20glass
Optical glass refers to a quality of glass suitable for the manufacture of optical systems such as optical lenses, prisms or mirrors. Unlike window glass or crystal, whose formula is adapted to the desired aesthetic effect, optical glass contains additives designed to modify certain optical or mechanical properties of the glass: refractive index, dispersion, transmittance, thermal expansion and other parameters. Lenses produced for optical applications use a wide variety of materials, from silica and conventional borosilicates to elements such as germanium and fluorite, some of which are essential for glass transparency in areas other than the visible spectrum. Various elements can be used to form glass, including silicon, boron, phosphorus, germanium and arsenic, mostly in oxide form, but also in the form of selenides, sulfides, fluorides and more. These materials give glass its characteristic non-crystalline structure. The addition of materials such as alkali metals, alkaline-earth metals or rare earths can change the physico-chemical properties of the whole to give the glass the qualities suited to its function. Some optical glasses use up to twenty different chemical components to obtain the desired optical properties. In addition to optical and mechanical parameters, optical glasses are characterized by their purity and quality, which are essential for their use in precision instruments. Defects are quantified and classified according to international standards: bubbles, inclusions, scratches, index defects, coloring, etc. History The earliest known optical lenses, dating from before 700 BC, were produced under the Assyrian Empire: they were made of polished crystals, usually quartz, rather than glass. It wasn't until the rise of the Greeks and Romans that glass was used as an optical material. 
They used it in the form of spheres filled with water to make lenses for lighting fires (burning glass), as described by Aristophanes and Pliny, or to make very small, indistinct characters larger and sharper (magnifying glass), according to Seneca. Although the exact date of their invention is not known, glasses are said to have been described in 1299 by Sandro di Popozo in his Treatise on Family Conduct: "I am so altered by age, that without these lenses called spectacles, I would no longer be able to read or write. They have recently been invented for the benefit of poor old people whose eyesight has become bad". At the time, however, "glasses" were actually made from beryl or quartz. The only lens available at the time, ordinary soda-lime glass, was unable to compensate for optical aberrations. However, it evolved slowly over the centuries. It was first lightened by the use of ashes, which contain manganese dioxide that transforms ferrous oxide (FeO) into ferric oxide (Fe2O3), which is much less colorful. Then, around 1450, Angelo Barovier invented "crystalline glass" (vetro cristallino) or "Venetian glass" (cristallo di Venezia), improving on the previous process by purifying the ashes by leaching to obtain a purer potash. Lime was introduced, first for economic reasons in the 14th century, then as a technical improvement in Bohemia in the 17th century (Bohemian glass), eliminating a very large proportion of impurities. This practice did not arrive in France until the middle of the eighteenth century. It was at this time that the Manufacture Royale de Glaces de Miroirs (Compagnie de Saint-Gobain S.A.) began to produce glass composed of 74% silica, 17.5% soda and potash, and 8.5% lime. 
Thus, the first complex optical instruments, such as Galileo's telescope (1609), used ordinary soda-lime glass (the first crown glass), composed of sand, soda, potash and sometimes lime, which, although suitable for glazing or bottles, was hardly suitable for optical applications (distortion, blurred effect, irregularities, etc.). In 1674, the British inventor George Ravenscroft, wishing to rival Venetian and Bohemian crystal while being less dependent on imported raw materials, replaced lime with lead(II) oxide to compensate for glass's lack of resistance to humidity, thus inventing lead crystal (the first flint glass, named after the high-purity English siliceous stone used), brighter than ordinary glass, composed of silica, lead oxide and potash. Chester Moore Hall (1703-1771), using the two types of glass available (soda-lime crown and lead flint), invented the first achromatic doublet. His work was taken up by John Dollond in his Account of some experiments concerning the different refrangibility of light, published in 1758. The real revolution in optical glass came with the development of industrial chemistry, which facilitated the composition of glass, allowing properties such as refractive index and dispersion coefficient to be varied. Between 1880 and 1886, the German chemist Otto Schott, in collaboration with Ernst Abbe, invented new glasses containing oxides such as "anhydrous baryte" (barium oxide BaO) and anhydrous boric acid (B2O3), with which he developed barium crowns, barium flints and borosilicate crowns. Between 1934 and 1956, other oxides were used. Then, by adding phosphates and fluorides, phosphate crowns and fluorine crowns were obtained. 
As optics became increasingly complex and diverse, manufacturers' catalogs expanded to include 100 to 200 different glasses; glass melts increasingly included special components such as oxides of heavy elements (high refractive index and low dispersion), chalcogenides (sulfide, selenide, telluride), halides such as fluorides (low refractive index and low dispersion) or phosphides, cerium-doped glasses to obtain radiation-resistant lenses, and so on. Since the 1980s, however, glass catalogs have tended to become increasingly limited. Properties The most important physical properties of glass for optical applications are refractive index and constringency, which are decisive in the design of optical systems, and transmission, glass strength and non-linear effects. Index and constringency The refractive index indicates the refractive power of a glass, i.e. its ability to deflect light rays to a greater or lesser extent. This deflection can be deduced from Descartes' law. The refractive index is a wavelength-dependent quantity, creating chromatic aberrations in a system by refracting rays more or less according to their wavelength: this is the phenomenon observed when light is decomposed by a prism. Several laws have approximated this relationship to wavelength, notably Cauchy's law and the Sellmeier equation. The refractive index of a glass is given for the yellow line known as the d line of helium (then noted nd) or for the green e line of mercury (then noted ne), depending on usage and the two main standards used. The dependence of refractive index on wavelength requires a measure of the glass's dispersion, i.e. the difference in deviation between two wavelengths. A highly dispersive glass will deflect short wavelengths to a great extent, but long wavelengths to a lesser extent. The measure of dispersion is the Abbe number, or constringence. 
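Numerically, the constringence is νd = (nd − 1)/(nF − nC). A minimal sketch, evaluated here with approximate published Schott N-BK7 catalog indices (assumed for illustration, not taken from this article):

```python
# Sketch: Abbe number (constringence) from catalog refractive indices.
# nu_d = (n_d - 1) / (n_F - n_C); a high value means weak dispersion.

def abbe_number(n_d: float, n_F: float, n_C: float) -> float:
    """Constringence from the d-line index and the F/C main dispersion."""
    return (n_d - 1.0) / (n_F - n_C)

# Approximate N-BK7 values (helium d line, hydrogen F and C lines):
n_d, n_F, n_C = 1.51680, 1.52238, 1.51432
nu_d = abbe_number(n_d, n_F, n_C)   # ~64, i.e. a crown glass (nu_d > 50)
```

With νd around 64, BK7 falls clearly on the crown side of the usual νd = 50 dividing line.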
The main dispersion is the difference nF-nC (hydrogen lines) or nF'-nC' (cadmium lines), and constringencies for the same lines as the refractive index are deduced by νd = (nd − 1)/(nF − nC) and νe = (ne − 1)/(nF' − nC'). A high Abbe number means that the glass is not very dispersive, and vice versa. Glasses are usually divided into two groups with the generic names of crown and flint, referring respectively to low-dispersion, low-index glasses and high-dispersion, high-index glasses. Typically, the distinction is made at around νd=50: glasses below this value are flints, the others are crowns. These two parameters alone are needed to differentiate between glasses: two glasses with equal nd and νd are identical. Glasses are represented on the so-called Abbe diagram, a graph with abscissa νd and ordinate nd, where each glass is denoted by a point on the graph. Oxide glasses fall into a range of nd from 1.4 to 2.0 and νd from 20 to 90, with SiO2 being the oxide glass with the highest constringency and lowest index. Fluoride glasses can go up to νd>100 and nd<1.4, BeF2 being the fluoride glass at the highest constringency and lowest index. Chalcogenide glasses have indexes exceeding 2, a large proportion of which cannot be shown on an Abbe diagram due to their absorption in visible wavelengths preventing a relevant νd measurement. For optical materials that are opaque in the visible range, constringence is measured at longer wavelengths. This classification also has its limitations when it comes to active optical glasses (birefringence, acousto-optic effect, and non-linear effects), optical filters or graded-index lenses, so we restrict the term "classical optical glass" to the aforementioned glasses, i.e. those with limited index and dispersion, which can be described essentially by their dispersive behavior and refractive index. Transmission and absorption Another very important characteristic of optical glass is its absorption and transmission behaviour. 
The use to which the future lens will be put determines its behavior: filters that absorb in certain spectral bands, lenses that are highly transparent in the visible, ultraviolet or infrared, resistance to radiation. As a general rule, the transmittance of the glass is given by the manufacturer, noted τi or Ti, a value that depends on the thickness of the material and whose measurement makes it possible to take into account the loss of transmission due to absorption and diffusion by internal defects in the glass. Since the transmittance term takes into account the refractive index via the Fresnel coefficient, it is also dependent on the wavelength and thickness of the sample, via the formula τ(λ, e2) = τ(λ, e1)^(e2/e1), where τ is the transmittance and e the thickness. Transmittance windows are of particular interest when it comes to choosing the right glass for applications such as far-infrared or far-ultraviolet. These windows are the result of the absorption of the materials making up the glass, which increases in the infrared and ultraviolet. Absorption in these two wavelength ranges is due to distinct phenomena, and can evolve differently depending on environmental conditions. Ultraviolet absorption In the ultraviolet, or UV, the drop in transmission is due to the electronic transitions of the elements making up the glass: valence electrons absorb wavelengths whose energy corresponds to their band gap. According to solid-state band theory, electrons can only take on certain specific energy values in particular energy levels, but with sufficient energy, an electron can move from one of these levels to another. Light waves are charged with an energy hν, inversely proportional to wavelength (ν=c/λ), which can enable an electron to pass from one level to another if this energy is sufficient and therefore if the wavelength is short enough. 
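Because internal absorption is exponential in path length, a catalog transmittance quoted at one reference thickness can be rescaled to another. A sketch of this standard Beer-Lambert-style relation (the numeric values are assumed for illustration; they are not from the article):

```python
# Sketch: rescaling internal transmittance tau_i from a reference
# thickness to another, since tau_i(d) = tau_i(d_ref) ** (d / d_ref).

def rescale_transmittance(tau_ref: float, d_ref_mm: float, d_mm: float) -> float:
    """Internal transmittance at thickness d_mm, given tau at d_ref_mm."""
    return tau_ref ** (d_mm / d_ref_mm)

# e.g. a glass with 99% internal transmittance through 10 mm,
# evaluated through 25 mm (thicker path -> lower transmittance):
tau_25 = rescale_transmittance(0.99, 10.0, 25.0)   # ~0.975
```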
A silica glass absorbs wavelengths below 160 nm, a glass based on boron trioxide (B2O3) absorbs below 172 nm, a phosphorus pentoxide glass (P2O5) absorbs below 145 nm. There are two types of oxygen in oxide glasses: bridging and non-bridging (possessing an excess electron charge), detectable by photoelectron spectroscopy. Non-bridging oxygen possesses electrons whose kinetic energy after release by monochromatic X-rays is higher than that of bridging oxygen. Bonds between non-bridging oxygens and cations are generally ionic. These characteristics give the glass its energetic band properties, making it more or less effective at transmitting radiation. Depending on the intensity of the bonds with the cations in the glass, the transmission window varies: in the presence of alkali metals, electrons can move from one band to the other more easily, as they are less bound to the non-bridging oxygens. On the other hand, the introduction of aluminium (Al2O3) to replace silica will increase the glass's transmission window, as the tetrahedral configuration of alumina reduces the proportion of non-bridging oxygens and therefore of electrons able to move from the valence band to the conduction band. As a result, glasses containing heavy metals (such as Ti+ or Pb2+) tend to transmit less well than others, since the oxygen will tend to share its electrons with the cation and thus reduce the band gap. The disadvantage is that the addition of these metals results in higher refractive indices. Depending on the heavy metal used, the drop in UV transmission will be more or less rapid, so lead glasses transmit better than niobium or titanium glasses. Attention to crucible and furnace materials is therefore very important, as these can also influence the UV transmission window. Platinum, for example, is widely used in glass melting, but inclusions of platinum particles in the glass paste can cause undesirable transmission losses due to impurities. 
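The UV absorption edges quoted above translate directly into band-gap energies via E = hc/λ. A minimal sketch (the helper name and the eV·nm constant are standard assumptions, not from the article):

```python
# Sketch: converting a UV absorption-edge wavelength into the
# corresponding band-gap photon energy, E[eV] = 1239.84 / lambda[nm].

HC_EV_NM = 1239.84   # h*c expressed in eV·nm

def edge_to_gap_ev(edge_nm: float) -> float:
    """Photon energy in eV at the absorption-edge wavelength."""
    return HC_EV_NM / edge_nm

silica_gap = edge_to_gap_ev(160.0)   # SiO2 edge at 160 nm -> ~7.7 eV
borate_gap = edge_to_gap_ev(172.0)   # B2O3 edge at 172 nm -> ~7.2 eV
```

The shorter the absorption edge, the wider the effective gap, which is why silica transmits deeper into the UV than borate glass.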
Another source of variation in UV transmission loss is ambient temperature: the higher the temperature of the glass, the more the UV drop will shift towards longer wavelengths, due to the material's reduced band gap. Solarization, which is the exposure of glass (or paint, for that matter) to electromagnetic radiation, can "yellow" glass depending on the wavelength and intensity of the radiation. Glasses with the best UV transmission are the most affected by solarization, which modifies their transmission window. Glasses can be doped with cerium dioxide (CeO2), which shifts the transmission drop to longer wavelengths and stabilizes it. This doping is one of the processes used to create anti-radiation glasses, since a glass doped in this way has the particular ability to protect against the most energetic types of radiation, such as X-rays and gamma rays. Infrared absorption In infrared, or IR, the physical phenomena leading to a drop in transmission are different. When a molecule receives a given amount of energy, it begins to vibrate in different modes: fundamental, first harmonic, second harmonic, etc., corresponding to periodic movements of the atoms in the molecule; each frequency associated with the energy of the molecule's vibration mode is absorbed. In silica glass, the Si-O bond has two main modes of vibration, rotation and elongation. Since the frequency of elongation is 0.34 × 10^14 Hz, absorption will take place at 8.8 μm (fundamental), 4.4 μm (harmonic 1), 2.9 μm (harmonic 2), etc. As the absorption due to this vibration is very strong, silica becomes opaque from the first harmonic onwards. Most quartz glasses even show a marked drop in transmission at harmonic 2. Chalcogenide glasses are used to reduce the frequency of molecular vibrations: as sulfur or selenium are heavier, their vibration modes are weaker, and their transmission is, therefore, better in the infrared. 
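The quoted Si-O absorption wavelengths follow from λk = c/(k·ν0) for the fundamental (k = 1) and its harmonics. A short sketch reproducing them (the function name is an illustrative assumption):

```python
# Sketch: absorbed wavelengths of the Si-O elongation mode and its
# harmonics, lambda_k = c / (k * nu0), in micrometres.

C = 2.998e8      # speed of light, m/s
NU0 = 0.34e14    # Si-O elongation frequency in Hz, from the text

def absorption_um(k: int) -> float:
    """Absorbed wavelength (um) for the k-th multiple of the fundamental."""
    return C / (k * NU0) * 1e6

fundamental = absorption_um(1)   # ~8.8 um
harmonic1 = absorption_um(2)     # ~4.4 um
harmonic2 = absorption_um(3)     # ~2.9 um
```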
However, this comes at the price of visible transmission, since chalcogenide glasses are opaque in the visible. Another solution is to produce halide glasses, in particular fluoride glasses. As fluorine is highly electronegative, the bonds between anions and cations are weakened, and vibrations are therefore weaker. Glass humidity, i.e. the presence of water in the material, has a strong influence on the transmission curve of glasses in the 2.9 μm to 4.2 μm region. Water takes the form of OH− groups, whose O-H bond vibrates at a frequency of around 90 THz, equivalent to an absorption of wavelengths from 2.9 μm to 3.6 μm. The higher the humidity of the sample, the greater the local drop in transmission, with very high humidity even causing absorption at the harmonics of the O-H bond vibration, at around 200 nm. Emission and non-linear phenomena Lasers are often used at very high illuminance levels. It has been found that in this high illumination range, the refractive index follows a law that deviates from the linear domain and becomes proportional to the intensity of the luminous flux: n = n0 + n2I, where n is the refractive index of the material, λ the wavelength, I the intensity of the light beam, and n0 the refractive index for low powers. For silica, for example, n2 is 3.2 × 10^−20 m2 W−1 for λ = 1,060 nm. The most dispersive glasses tend to have the highest non-linear refractive indices, probably due to the metal ions present in the glass. Above the TW mm−2 level, the fluence (or flux) is sufficient to create higher-order non-linear optical phenomena such as multiphoton absorption and avalanche photo-ionization. The first phenomenon makes the material absorbent through the addition of two photons, which release an electron. The second phenomenon is the acceleration of an electron released by the electromagnetic field, the electron's kinetic energy being transmitted to other neighboring electrons. 
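The intensity-dependent index just described, n(I) = n0 + n2·I, can be sketched with the silica n2 value from the text; the linear index n0 ≈ 1.45 is an assumed round value for illustration:

```python
# Sketch: intensity-dependent refractive index n(I) = n0 + n2 * I.

N0_SILICA = 1.45      # linear index of silica near 1060 nm (assumed approx.)
N2_SILICA = 3.2e-20   # non-linear index n2 in m^2/W, from the text

def nonlinear_index(intensity_w_m2: float) -> float:
    """Effective refractive index at a given beam intensity (W/m^2)."""
    return N0_SILICA + N2_SILICA * intensity_w_m2

# At 1 TW/mm^2 = 1e18 W/m^2, the index shift is already sizeable:
delta_n = nonlinear_index(1e18) - N0_SILICA   # n2 * I = 3.2e-2
```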
These two combined effects can cause damage to glass by destroying the vitreous lattice (freed electrons give energy to other electrons which are more easily freed, and the lattice bonds are weakened by electron depletion). The material may be vaporized at sufficient speed that the phonons cannot transmit the energy in the form of heat to the rest of the glass. In 1988, an experiment showed that silica, whose lattice is isotropic, is capable of emitting green radiation when crossed by powerful infrared radiation. The generation of a second harmonic in this setting is atypical, but could be explained by the presence of F-centers. Fluorescence can appear in optical glasses. Fluorescence is the re-emission of longer-wavelength radiation from an illuminated material. The energy of the incident light excites the material's electrons, which then de-excite and return to their ground state, emitting a photon with a longer wavelength than the original one. This phenomenon is particularly troublesome in applications where the presence of stray light or light of a different wavelength from the reference wavelength poses a problem. In lasers, for example, it is important to keep to a single, precise spectral line. Causes of fluorescence include rare-earth ions, impurities and color centers. Fabrication The basic materials used to manufacture optical lenses must be particularly pure, as any inclusion or impurity could not only degrade performance but also cause considerable damage to the lens (breakage, darkening, tinting, etc.). For example, the sand used to manufacture silica-based glass must contain an extremely low proportion of ferric oxide (Fe2O3) (10 ppm maximum) and even lower proportions of other oxides and elements (cobalt, copper, nickel, etc.). There are very few geographical sites where the sands are sufficiently pure for these applications. 
Most glass is melted in a pot furnace, which is used to melt limited quantities of glass, while certain mass-produced optical glasses (such as borosilicate glass) are melted in tank furnaces for continuous glass production. The glassmaking process comprises a number of stages, beginning with the melting of the glass paste, followed by refining and then tempering or annealing, which are two different finishes. Finally, if required, the glass can be polished, particularly in the case of mirrors and lenses, for any application where the objective is high image quality. The materials are placed together in the furnace and gradually heated to their melting point. Chemical reactions of composition or decomposition of molecules take place, resulting in significant off-gassing during this phase. Hydrates, carbonates, nitrates and sulfates recompose to form the glass paste with the vitrifying elements, giving rise to gases such as water vapor, carbon dioxide, sulfur dioxide and others. For example, 1 L of soda-lime glass paste releases around 1,440 L of various gases when heated to 100 °C, of which 70% is carbon dioxide. Refining is an essential stage in the quality of optical lenses, since it involves homogenizing the glass so that the components are evenly distributed throughout the paste and the gas is fully released. Homogenization avoids the problem of streaks appearing in the lens. Chemical agents are used to release the gases, in particular arsenic pentoxide (As2O5), which decomposes into arsenic trioxide (As2O3), releasing oxygen which combines with the other elements and gases released, causing the bubbles remaining in the paste to rise. Defects such as bubbles, streaks, inclusions and discolorations can appear as a result of the glass melting process. 
Bubbles result from insufficient refining, streaks from glass heterogeneity (the glass has a different refractive index locally, causing distortion), inclusions may come from glass that has crystallized locally or from fragments of the vessels used for melting, glass discoloration originates from insufficient purity of the mixed products. The tempering process is reserved for glass whose structure is to be hardened. Glass used for optics is often fragile and thin, so it is not tempered. Optical fibers are tempered after drawing, to give them sufficient mechanical strength. Annealing consists in slowly cooling a glass in a controlled manner from a certain temperature at which it has begun to solidify (around 1,000 °C for silica glass or 450 °C for soda-lime glass, for example). Annealing is necessary to eliminate internal stresses in the material that may have arisen during melting (impurities, streaks, bubbles, etc.) and to prevent uneven cooling in a material, with internal parts taking longer to heat and cool. The annealing time ranges from a hundred to a thousand hours, depending on the quantity of glass to be annealed and its composition. Types of glass The progressive development of the optical glass industry has led to the creation of new lens families. Lenses can be differentiated by their main components, which give them their mechanical, thermal and optical characteristics. In addition to the two main glass groups, flint and crown, based essentially on SiO2 silica or oxides, other groups exist, such as halide glasses and chalcogenide glasses (excluding oxygen). The following tables summarize most glass families and their composition. Each composition has its own particular properties and defects. 
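The crown/flint distinction above is conventionally quantified by the Abbe number, Vd = (nd − 1)/(nF − nC), computed from the refractive indices at the d, F and C Fraunhofer lines: high Vd means low dispersion (crown), low Vd means high dispersion (flint). A minimal sketch; the N-BK7 indices below are standard catalog values, and the boundary of 50 is the usual convention, not a hard rule:

```python
def abbe_number(n_d, n_F, n_C):
    """Abbe number V_d = (n_d - 1) / (n_F - n_C).

    n_d, n_F, n_C: refractive indices at the helium d line (587.6 nm)
    and the hydrogen F (486.1 nm) and C (656.3 nm) lines.
    """
    return (n_d - 1.0) / (n_F - n_C)

def classify(v_d):
    # Conventional split: high V_d (low dispersion) -> crown, low V_d -> flint.
    return "crown" if v_d > 50 else "flint"

# Schott N-BK7, the borosilicate crown mentioned in the text (catalog indices).
v_bk7 = abbe_number(1.51680, 1.52238, 1.51432)
print(round(v_bk7, 1), classify(v_bk7))  # ≈ 64.1 crown
```

This is why BK7 sits firmly on the crown side of the Abbe diagram, while lead- or titanium-rich glasses with Vd well below 50 land among the flints.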
Increasing the index often requires sacrificing transmission in the ultraviolet, and although research since the early days of glassmaking has considerably improved this state of affairs, it is not possible to obtain highly dispersive, low-refractive glasses, or low-dispersive, high-refractive glasses. Oxide glass Flints and crowns are glasses composed of oxides, often SiO2 or TiO2 titanates. Their index ranges from 1.4 to 2.4. This large group can be identified by its characteristic transmission profile ranging from 200 nm to 2.5 μm, due to the high gap energies and photon absorption peaks of hydroxyl groups in the infrared. A variety of oxides are used, the most common being silica-based glasses, but other molecules can also be used to form glassy systems, such as : germanium dioxide (GeO2) ; diboron trioxide (B2O3); phosphorus pentoxide (P2O5); aluminosilicates and borosilicates; phosphates. Phosphate glasses have lower melting temperatures and are more viscous than borosilicate glasses, but they are less resistant to chemical attack and less durable. Glasses based on a phosphate, borate or borophosphate vitreous system are good candidates for athermalization, since their , i.e. the variation in refractive index with temperature, is generally negative. Athermalization consists in compensating for thermal expansion of the material by changing its index. The family of phosphate glasses is particularly well-suited to these possibilities. Crown glass family Borosilicate crowns are the most widely produced glass family, and the ones with the best control over final homogeneity. This family includes BK7, the glass widely used in optics. Alkali oxides and boron trioxide B2O3 make it easier to melt silicon dioxide SiO2, which requires very high temperatures to liquefy. 
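The athermalization idea mentioned above can be sketched with the first-order change of optical path through a plane-parallel plate in air, d(OPL)/dT ≈ t[(n − 1)α + dn/dT]: a sufficiently negative dn/dT, as in many phosphate glasses, cancels the thermal-expansion term. The numbers below are illustrative only, not values for any particular catalog glass:

```python
def opl_drift_per_kelvin(t_mm, n, alpha, dn_dT):
    """First-order change of optical path length with temperature (mm/K)
    for a plane-parallel plate of thickness t_mm in air:
    d(OPL)/dT = t * ((n - 1) * alpha + dn/dT)."""
    return t_mm * ((n - 1.0) * alpha + dn_dT)

t = 10.0          # plate thickness, mm
alpha = 7.0e-6    # thermal expansion coefficient, 1/K (illustrative)
n = 1.55          # refractive index (illustrative)

# Positive dn/dT adds to the expansion term; a matched negative dn/dT cancels it.
ordinary = opl_drift_per_kelvin(t, n, alpha, dn_dT=+3.0e-6)
athermal = opl_drift_per_kelvin(t, n, alpha, dn_dT=-3.85e-6)

print(ordinary, athermal)  # the negative dn/dT nearly cancels the expansion term
```

The athermal case is contrived so that (n − 1)α = 0.55 × 7.0e-6 exactly offsets dn/dT; real athermal designs balance these terms over a temperature range rather than at a single point.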
Barium crowns and dense crowns were developed for barium's ability to significantly increase the refractive index without significantly reducing the glass's constringency or ultraviolet transmission, which lead oxide tends to do. Some lenses use a mixture of zinc oxide ZnO and barium oxide BaO. Crowns, zinc crowns and crown flints are small families of glasses containing a wide variety of oxides (CaO or BaO, ZnO and PbO respectively) to increase hardness or durability. Phosphate crowns are characterized by their relatively low dispersion and medium index, with generally higher dispersion in the blue, making them useful for correcting chromatism in optical combinations. Fluorine crowns use fluorine's properties to reduce the dispersion and index of the glass: fluorine's high electronegativity and the smaller radius of fluorine ions are responsible for this. As with phosphate crowns, these lenses are particularly suitable for correcting chromatic aberration, thanks to their partial dispersion in the blue. Flint glass family Dense or light flints are long-established families, such as borosilicate crowns, and are used as optical glass as well as crystal for everyday glassmaking. Their main properties derive from the proportion of PbO introduced. PbO increases refractive index while decreasing Abbe number, and also affects partial dispersion. Lead oxide will also increase the density of the glass and reduce its resistance to chemical attack. The ability of the PbO-SiO2 couple to vitrify enables PbO proportions of over 70 mol per 100 to be achieved, which would not be possible if PbO were merely a chemical mesh modifier. Indeed, a high PbO concentration produces tetrahedral PbO4, which can form a glass-like mesh. The inclusion of PbO has several drawbacks. Firstly, the glasses are slightly yellow due to the high concentration of lead oxide. 
Secondly, inclusions and impurities such as iron(III) oxide Fe2O3 or chromium(III) oxide Cr2O3 degrade glass transmission to a far greater extent than in soda, potash or lime glasses. Thirdly, a chemical equilibrium between Pb2+ and Pb4+ is established and, in the presence of oxygen-saturated glass, leads to the creation of lead dioxide PbO2, a brown compound that darkens glass. However, this latter coloration can be reversed by a redox transformation of the glass paste, since it does not originate from impurities. To overcome these problems, titanium dioxide TiO2 and zirconium dioxide ZrO2 can be added, increasing the glass's chemical stability and preserving its ultraviolet transmission. Barium flints crystallize less easily than other glass families due to the presence of lead(II) oxide (PbO) in the mixture. The higher the proportion of PbO, the higher the refractive index and the lower the melting temperature, so these are glasses which, although very useful for their high indexes, present complications during melting. The BaO in these glasses is sometimes replaced by ZnO. Lanthanum flints and lanthanum crowns are extended families achieving high refractive indices with medium dispersion. The use of SiO2 in the paste creates crystallization instabilities, a pitfall avoided by replacing silica with boron trioxide B2O3 and divalent oxides. To further increase their refractive index, the use of multiple oxides has become widespread, including gadolinium, yttrium, titanium, niobium, tantalum, tungsten and zirconium oxides (Gd2O3, Y2O3, TiO2, Nb2O5, Ta2O5, WO3 and ZrO2). Short flints are a family distinguished not by their index or constringency, but by their partial dispersion. Named for their narrow blue spectrum, short flints are also an asset in optical system design for their low blue impact. They are obtained by replacing the lead oxide in flint glasses with Sb2O3 antimony oxide. 
Halide glass The first fluoride glasses appeared around 1970 to meet a growing need for mid-infrared transmitting glasses. These glasses are formed by replacing the oxygen in oxide glasses with a halogen, fluorine or chlorine, and more rarely with heavy halogens. Their transmission covers the visible and mid-infrared range, from 200 nm to 7 μm, due to the rather high band gap (on average, a fluoride glass has its transmission drop at around 250 nm, due to its band gap of around 5 eV) and the low-frequency vibrations of the heavy-metal fluoride bonds; silica absorption results from vibrations of Si-O bonds at 1.1 × 10³ cm⁻¹, whereas fluorozirconate absorption results from vibrations of Zr-F bonds at a frequency of 0.58 × 10³ cm⁻¹, which is why oxide and halide glasses behave so differently in the infrared. By using rare earths instead of heavy metals, one obtains a rare-earth fluoride glass that transmits even further into the infrared. Another way of transmitting further into the infrared is to make chloride glass instead of fluoride glass, but this reduces the stability of the glass. A type of glass recently developed at the University of Rennes uses a tellurium halide. As its energy gap is smaller, the transmission cut-off moves from the ultraviolet into the 700 nm–1.5 μm range, while transmission improves in the far infrared. As the refractive index of such a glass is very high, it behaves like a chalcogenide glass, with a strong reflection that reduces its transmission. Fluoride lenses are also useful for their near-UV transmission. Near-UV transmitting glasses are few in number, but include lithium fluoride, calcium fluoride and magnesium fluoride glasses. Chalcogenide glass Chalcogenide glasses have been specifically developed since the 1980s to improve the infrared transmission of optical glasses. 
Oxygen is replaced by another chalcogen (sulfur, selenium, tellurium), and silicon is replaced by heavier metals such as germanium, arsenic, antimony and others. Their index is greater than 2, and they appear black due to their small band gap and multiple absorption bands in the visible range. The transmission of these glasses ranges from 1 μm to 12 μm, but is lower than that of oxide or halide glasses due to their very high refractive index, which results in a high reflection coefficient. This group can be divided into two families: glasses that can be doped with rare-earth ions and those that cannot. The former are mainly composed of germanium and gallium sulfides and selenides, while the latter, although not doped, offer the best transmission performance in the far infrared. Classical glass designations The field of optical lenses encompasses a multitude of materials with extremely diverse properties and equally diverse applications. Nevertheless, it is generally accepted that optical lenses fall into several major families. A large proportion of optical lenses are so-called "classic" lenses, designed for applications such as imaging and filtering. Smaller families of lenses are also part of the optical glass family, such as optical fibers, or so-called "active" lenses for applications in nonlinear optics or acousto-optics, for example. Special glasses Fused quartz Quartz glass is distinguished from other optical glass by the source of the material used in its manufacture. Many manufacturers produce quartz glass, but the differences lie mainly in the nature of the impurities and the water content. These differences give each quartz glass its own special characteristics, such as transmission and resistance to chemical attack. Quartz glass is made from a single material: silica. Its main properties are low expansion (α ≈ 0.5 × 10⁻⁶ K⁻¹), high thermal stability (up to 1,000 K) and transmission in the ultraviolet and infrared, which can be adapted as required. 
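The band-gap figure quoted for fluoride glasses above (≈5 eV giving a transmission drop near 250 nm) follows from the photon-energy relation λ = hc/E ≈ 1239.84 eV·nm / E; the same relation explains why small-gap chalcogenides absorb throughout the visible and look black. A quick check:

```python
HC_EV_NM = 1239.84  # h*c expressed in eV·nm (CODATA, rounded)

def cutoff_wavelength_nm(band_gap_eV):
    """Wavelength below which photons are energetic enough to be
    absorbed across the band gap: lambda = h*c / E_gap."""
    return HC_EV_NM / band_gap_eV

print(cutoff_wavelength_nm(5.0))  # ≈ 248 nm, matching the ~250 nm drop cited for fluoride glass
print(cutoff_wavelength_nm(1.0))  # ≈ 1240 nm: a ~1 eV gap absorbs the whole visible range
```

The 1 eV example is illustrative of the order of magnitude for chalcogenides, not a measured gap for any specific composition.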
Optical filter Filters are glasses designed to transmit only certain parts of the spectrum of incident light. A filter can be colorless, a simple optical glass whose transmission drop serves to cut off wavelengths beyond a certain value, or colored in various ways, by the introduction of heavy metal or rare-earth ions, by molecular coloration or even by a colloidal suspension. Filter glasses show noticeable photoluminescence. Optical filters in colored glass take the form of a plane-parallel plate whose thickness depends on the transmission qualities required; like electronic filters, they are referred to as high-pass, low-pass, band-pass or notch filters. Laser lenses Several types of glass are used for lasers, including Li2O-CaO-SiO2 glasses for their resistance to thermal shock, and potassium-barium-phosphate glasses, whose effective cross-section is large enough for stimulated emission. The addition of sodium, lithium or aluminum oxides drastically reduces distortion. These glasses are athermalized. In addition to these two types of glass, lithium-aluminum phosphates can be used. These are treated by ion exchange and are particularly resistant, making them ideal for applications where the average laser power is very high (e.g. femtosecond pulsed lasers), or fluorophosphates, which have a slightly non-linear index. These Nd3+-doped glasses are used as an active laser medium. Gradient index lenses Gradient-index lenses exploit the special properties of light propagation in a variable-index medium. In 1854, James Clerk Maxwell invented the "fisheye lens" in response to a problem from the Irish Academy asking for a refractive-index distribution capable of perfect imaging. This theoretical lens, spherical in shape, has an index of the form n(r) = n0/(1 + (r/R)²), where n(r) is the refractive index of the glass at a distance r from the center of the spherical lens, n0 the index at its center, and R the radius of this lens; it enables any point on its surface to be imaged perfectly at another point diametrically opposite. 
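The Maxwell fisheye profile just described is usually written n(r) = n0 / (1 + (r/R)²), with n0 the index at the center and R the sphere radius, so the index falls smoothly to n0/2 at the surface. A minimal numerical sketch (n0 = 2 is an arbitrary illustrative choice):

```python
def fisheye_index(r, n0=2.0, R=1.0):
    """Maxwell fisheye refractive-index profile: n(r) = n0 / (1 + (r/R)**2)."""
    return n0 / (1.0 + (r / R) ** 2)

print(fisheye_index(0.0))  # index at the center: 2.0 (= n0)
print(fisheye_index(1.0))  # index at the surface: 1.0 (= n0/2)
```

Showing that every surface point is imaged at its antipode requires tracing rays through the gradient and is beyond this sketch; the profile evaluation above only illustrates the radial fall-off.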
A generalization of this spherical lens was proposed in 1966 by Rudolf Karl Lüneburg, using a different index profile. In 1905, Robert Williams Wood developed a lens consisting of a plane-parallel plate whose index varies parabolically, with the extremum of the index lying on the axis of revolution of the component. The Wood lens can be used to focus or diverge rays, just like a conventional lens. Since around 1970, glass manufacturing technology has made it possible to develop, qualify and machine graded-index glasses. Two main uses for graded-index glasses are in telecommunications, with optical fibers, and in imaging, with lenses machined from graded-index material. Gradients can also be divided into three types of profile: spherical gradients, cylindrical gradients and axial gradients. There are several techniques for producing graded-index glass: neutron bombardment, ion filling or glass layer superposition. Depending on the technique used, the gradient will be stronger or weaker, and its profile more or less controlled. Injection or ionic filling methods can produce gradients of 10 to 50 mm, with an index amplitude of 0.04. Neutron bombardment and chemical vapor deposition methods produce shallow gradients (approx. 100 μm) of low amplitude. For larger gradients, there is partial polymerization of a monomer reacting to UV exposure (gradients of around one hundred millimeters for an index amplitude of 0.01), or superimposing and then partially melting layers of borosilicate or flint glass (lanthanum-containing glasses are not suitable for this technique due to their recrystallization problems and thermal instability). A final technique consists of melting and then rotating the paste so that a material gradient, and therefore an index gradient, is established in the glass. Doped lenses Certain extreme environments are not conducive to the use of conventional lenses; when the system is exposed to far-UV radiation (X, gamma, etc.) 
or particle fluxes such as alpha or beta, a drop in lens transmission is observed, due to discoloration of the material. Generally speaking, electromagnetic radiation causes a drop in blue transmission, a phenomenon known as solarization. As this is detrimental to system performance, it was necessary to develop new types of radiation-resistant lenses. Radiation has a variety of effects: ionization, electron or hole capture, fission of Si-O bonds, etc. These effects can easily be amplified by the presence of impurities that change the valence of molecules or concentrate radiation, causing local degradation of the glass. In order to reduce the drop in glass transmission and performance, they are doped with CeO2, which slightly shifts the glass's transmission drop, but makes it virtually impossible to feel the effects of radiation on the glass's optical performance. Other glasses In addition to the lenses already mentioned, all of which are specific in their design or use, there are also special glass-like materials. These include athermalized lenses, which are produced in such a way that the optical path through the lens is independent of temperature. Note that the difference in optical path as a function of temperature is determined by the thickness of the glass, the coefficient of thermal expansion, the index, the temperature and the thermo-optical coefficient. Athermalized glasses can be found in the fluorinated crowns, phosphate crowns, dense crowns, barium and titanium flints and other families. Glass-ceramics or ceramic glasses are glasses in which the crystal-forming process has been stimulated over a long, complex heating period. The addition of crystals to initiate crystallization results in a glass with a crystallized proportion ranging from 50 to 90%. Depending on the crystals incorporated and the proportion of glass in the ceramic glass, the properties will differ. 
Generally speaking, ceramic glass is highly resistant to thermal shock and has near-zero thermal expansion (Schott AG's Zerodur, for example, was used specifically for the Very Large Telescope for these thermal properties). Glass quality There are numerous standards for optical components, the aim of which is to unify the notations and tolerances applied to components, and to define optical quality standards. There are two main standards: MIL (American military standard) and ISO (international standard). In France, the AFNOR standard is very similar to the ISO standard, as the Union de normalisation de la mécanique is keen to conform as closely as possible to ISO publications. The MIL and ISO standards cover a very wide field, and both standardize lenses, their defects, surface treatments, test methods and schematic representations. Manufacturers There are a number of manufacturers of special lenses for the various fields of optics, whose catalogs offer a wide choice of optical lenses and special lenses, sometimes in addition to filters and active lenses and crystals. Since 1980, however, catalogs have tended to reduce the choice, although optical design tools continue to include catalogs that no longer exist. Manufacturers include the following: Schott AG Heraeus Quarzglas Ohara Corporation Hoya Corporation Corning Inc. : Lens catalog not available, but Corning lenses remain available on optical design software and production of special lenses continues. Pilkington : has refocused its catalog on ophthalmic and flat lenses. Sumita Optical Glass Hikari Glass : a Nikon subsidiary producing optical lenses. OAO Lytkarinski Zavod Optitcheskogo Stekla CDGM In addition to catalogs of optical glass and various materials, other manufacturers also sell active or special optical glass. Examples include graded-index glass, used to focus light beams in optical fibers; optical fibers, which in a significant proportion of cases are spun optical glass wires; and optical filters. 
These products can be found in the catalogs of a larger number of manufacturers, a non-exhaustive but relevant list of which can be found in the same catalogs selected by optical design software: 3M Precision Optics Archer Optx Coherent CVI Edmund Industrial Optic Esco Geltech ISP Optics JML LightPath Technologies Linos Photonics Melles Griot Midwest Optical Newport Glass NSG America Optics for Research OptoSigma Philips Quantum Rolyn Optics Ross Optical Special Optics Thorlabs Applications Optical glasses are mainly used in many optical instruments, as lenses or mirrors. These include, but are not limited to, telescopes, microscopes, photographic lenses and viewfinder lenses. Other possible optical systems include collimators and eyepieces. Optical lenses, especially ophthalmic lenses, are used for prescription glasses. Glasses can also be made of photochromic glass, whose tint changes according to radiation. Optical glasses are used for other, much more diverse and specialized applications, such as high-energy particle detectors (glasses detecting Cherenkov radiation, scintillation effects, etc.) and nuclear applications, such as on-board optics in systems subjected to radiation, for example. Optical glass can be spun to form an optical fiber or form graded-index lenses (SELFOC lens or Geltech lenses) for injection into these same fibers. Optical glass in one form or another, doped or undoped, can be used as an amplifying medium for lasers. Last but not least, microlithography, using ultraviolet-transmitting glasses such as Schott's FK5HT (Flint crown), LF5HT (Flint light) or LLF1HT (Flint extra light), named i-line glasses by the company after the ray i of mercury. 
Notes References Further reading Bibliography Publications Articles External links "How the generic optical glass code works" archive, on Newport Glass, 2003 (accessed March 12, 2012) "Cross reference list between similar glasses" archive [PDF], on Ohara corp, 2012 (accessed March 12, 2012) "Optical glass" archive [PDF], on Hoya, 2012 (accessed March 12, 2012) "Sumita Optical Glass" archive [PDF], on Sumita, 2012 (accessed March 12, 2012) "Potapenko special glass" archive, on opticalglass.com.ua, 2012 (accessed March 12, 2012) Transmittance of optical glass, Schott AG, coll. "Technical information" (no. 35), October 2005, 12 p. (read online archive) Fluorescence of optical glass, Schott AG, coll. "Technical information" (no. 36), August 2010 (read online archive) Stress in optical glass, Schott AG, coll. "Technical information" (no. 27), July 2004 (read online archive) "Norme ISO 10110" archive, International Organization for Standardization. "Info Vitrail" archive, on infovitrail.com, SARL ARBO-COM (accessed July 1, 2012) Site featuring a fairly comprehensive glossary of technical glassmaking terms. "Verre online" archive, on verreonline.fr, Institut du verre (accessed July 1, 2012) Glass Infrared Light Refraction Quartz Glass makers
Optical glass
[ "Physics", "Chemistry" ]
8,978
[ "Physical phenomena", "Spectrum (physical sciences)", "Refraction", "Glass", "Unsolved problems in physics", "Electromagnetic spectrum", "Optical phenomena", "Waves", "Homogeneous chemical mixtures", "Light", "Amorphous solids", "Infrared" ]
68,976,588
https://en.wikipedia.org/wiki/SeaBASS%20%28data%20archive%29
The SeaWiFS Bio-optical Archive and Storage System (SeaBASS) is a data archive of in situ oceanographic data used to support satellite remote sensing research of ocean color. SeaBASS is used for developing algorithms for satellite-derived variables (such as chlorophyll-a concentration) and for validating or “ground-truthing” satellite-derived data products. The acronym begins with “S” for SeaWiFS, because the data repository began in the 1990s around the time of the launch of the SeaWiFS satellite sensor, and the same data archive has been used ever since. Oceanography projects funded by the NASA Earth Science program are required to upload data collected on research campaigns to the SeaBASS data repository to increase the volume of open-access data available to the public. As of 2021 the data archive contained information from thousands of field campaigns uploaded by over 100 principal investigators. See also EOSDIS Ocean color Ocean observations Ocean optics Water remote sensing References External links NASA SeaBASS Official Website Earth observation Environmental data Environmental science databases Oceanography
SeaBASS (data archive)
[ "Physics", "Environmental_science" ]
219
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics", "Environmental science databases" ]
70,494,954
https://en.wikipedia.org/wiki/Phosphide%20bromide
Phosphide bromides or bromide phosphides are compounds containing both bromide (Br−) and phosphide (P3−) anions. Usually the phosphorus is covalently connected into more complex structures. They can be considered as mixed anion compounds. They are in the category of pnictide halides. Related compounds include the phosphide chlorides, phosphide iodides, nitride bromides, arsenide bromides, and antimonide bromides. List References Phosphides Mixed anion compounds Bromides
Phosphide bromide
[ "Physics", "Chemistry" ]
131
[ "Matter", "Mixed anion compounds", "Salts", "Bromides", "Ions" ]
70,495,008
https://en.wikipedia.org/wiki/Arsenide%20iodide
Arsenide iodides or iodide arsenides are compounds containing both iodide (I−) and arsenide (As3−) anions. They can be considered as mixed anion compounds. They are in the category of pnictide halides. Related compounds include the arsenide chlorides, arsenide bromides, phosphide iodides, and antimonide iodides. List References Arsenides Iodides Mixed anion compounds
Arsenide iodide
[ "Physics", "Chemistry" ]
107
[ "Ions", "Matter", "Mixed anion compounds" ]
70,500,748
https://en.wikipedia.org/wiki/Antimonide%20bromide
Antimonide bromides or bromide antimonides are compounds containing both bromide (Br−) and antimonide (Sb3−) anions. They can be considered as mixed anion compounds. They are in the category of pnictide halides. Related compounds include the antimonide chlorides, antimonide iodides, arsenide chlorides, arsenide bromides, arsenide iodides, phosphide chlorides, phosphide bromides, and phosphide iodides. The bromoantimonates have antimony in positive oxidation states. The antimony can be linked into chains, in which case it has a formal oxidation state of −1. Alternatively it can occur in pairs as Sb2, with an oxidation state of −2 for each atom. Many of these compounds are clathrates, in which two interpenetrating structures are only weakly bound to each other by van der Waals forces. List References Bromides Antimonides Mixed anion compounds Clathrates
Antimonide bromide
[ "Physics", "Chemistry" ]
227
[ "Matter", "Mixed anion compounds", "Salts", "Bromides", "Clathrates", "Ions" ]
70,500,920
https://en.wikipedia.org/wiki/Arsenide%20chloride
Arsenide chlorides or chloride arsenides are compounds containing both chloride (Cl−) and arsenide (As3−) anions. They can be considered as mixed anion compounds. They are in the category of pnictide halides. Related compounds include the arsenide bromides, arsenide iodides, phosphide chlorides, and antimonide chlorides. List References Arsenides Chlorides Mixed anion compounds
Arsenide chloride
[ "Physics", "Chemistry" ]
100
[ "Matter", "Chlorides", "Inorganic compounds", "Mixed anion compounds", "Salts", "Ions" ]
70,501,237
https://en.wikipedia.org/wiki/GCD%20matrix
In mathematics, a greatest common divisor matrix (sometimes abbreviated as GCD matrix) is a matrix that may also be referred to as Smith's matrix. The study was initiated by H.J.S. Smith (1875). A new wave of interest began with the paper of Bourque & Ligh (1992). This led to intensive investigations on singularity and divisibility of GCD type matrices. A brief review of papers on GCD type matrices before that time is presented in . Definition Let S = (x_1, x_2, …, x_n) be a list of positive integers. Then the n × n matrix having the greatest common divisor gcd(x_i, x_j) as its (i, j) entry is referred to as the GCD matrix on S. The LCM matrix on S is defined analogously. The study of GCD type matrices originates from Smith (1875), who evaluated the determinant of certain GCD and LCM matrices. Smith showed among others that the determinant of the n × n matrix (gcd(i, j)) is φ(1)φ(2)⋯φ(n), where φ is Euler's totient function. Bourque–Ligh conjecture Bourque & Ligh (1992) conjectured that the LCM matrix on a GCD-closed set is nonsingular. This conjecture was shown to be false by and subsequently by . A lattice-theoretic approach is provided by . The counterexample presented in is and that in is A counterexample consisting of odd numbers is . Its Hasse diagram is presented on the right below. The cube-type structures of these sets with respect to the divisibility relation are explained in . Divisibility Let S be a factor-closed set. Then the GCD matrix (S) divides the LCM matrix [S] in the ring of n × n matrices over the integers, that is, there is an integral matrix B such that [S] = B(S), see . Since the matrices (S) and [S] are symmetric, we have [S] = (S)B^T. Thus, divisibility from the right coincides with that from the left. We may thus use the term divisibility. There is in the literature a large number of generalizations and analogues of this basic divisibility result. Matrix norms Some results on matrix norms of GCD type matrices are presented in the literature. Two basic results concern the asymptotic behaviour of the norm of the GCD and LCM matrix on S. Given , the norm of an n × n matrix is defined as Let . 
If , then where and for and . Further, if , then where Factorizations Let f be an arithmetical function, and let S = (x_1, x_2, …, x_n) be a set of distinct positive integers. Then the matrix having f(gcd(x_i, x_j)) as its (i, j) entry is referred to as the GCD matrix on S associated with f. The LCM matrix on S associated with f is defined analogously. One may also use the notations (S)_f and [S]_f. Let S be a GCD-closed set. Then where is the matrix defined by and is the diagonal matrix, whose diagonal elements are Here ∗ is the Dirichlet convolution and μ is the Möbius function. Further, if f is a multiplicative function and always nonzero, then where and are the diagonal matrices, whose diagonal elements are and If S is factor-closed, then and . References Matrix theory Number theory
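Smith's determinant evaluation can be checked numerically: on the factor-closed set {1, 2, …, n}, the determinant of the GCD matrix equals φ(1)φ(2)⋯φ(n). A self-contained sketch, using fraction-free Bareiss elimination so the arithmetic stays in exact integers:

```python
from math import gcd

def det_bareiss(m):
    """Determinant of an integer matrix via the Bareiss fraction-free algorithm.

    All intermediate divisions are exact, so the result is an exact integer."""
    a = [row[:] for row in m]
    n = len(a)
    sign, prev = 1, 1
    for k in range(n - 1):
        if a[k][k] == 0:  # pivot by row swap if needed
            for r in range(k + 1, n):
                if a[r][k] != 0:
                    a[k], a[r] = a[r], a[k]
                    sign = -sign
                    break
            else:
                return 0
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                a[i][j] = (a[i][j] * a[k][k] - a[i][k] * a[k][j]) // prev
        prev = a[k][k]
    return sign * a[-1][-1]

def totient(n):
    """Euler's totient function via trial-division factorization."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

n = 8
S = [[gcd(i, j) for j in range(1, n + 1)] for i in range(1, n + 1)]
d = det_bareiss(S)
expected = 1
for k in range(1, n + 1):
    expected *= totient(k)
print(d, expected)  # both equal phi(1)*phi(2)*...*phi(8) = 768
```

For n = 8 the product is 1·1·2·2·4·2·6·4 = 768, and the computed determinant agrees; changing n exercises the identity on any initial segment of the integers.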
GCD matrix
[ "Mathematics" ]
604
[ "Discrete mathematics", "Number theory" ]
56,306,256
https://en.wikipedia.org/wiki/Nolder
In automotive design, a nolder is a small aerodynamic shape (a strip, wing, protrusion, lip or profile) integral to bodywork or to an aerodynamic attachment – e.g., a spoiler, diffuser or splitter – perpendicular to the direction of air flow travel for the purpose of further managing and refining air flow. Nolders are used in both high-performance as well as in less critical aerodynamic applications. Etymology In 1996, Autocar attributed original use of the term to Ferrari, with other sources citing the nolder as having derived from Formula One racing, where Ferrari has been prominent. The Formula One Dictionary defines a nolder as "a small upside-down L-shaped aerodynamic appendage generally positioned on the trailing edge of the rear wing to increase downforce at low speed." The Automotive Dictionary defines it as a "very small aerodynamic appendage that's fitted to an airfoil to increase down-force without affecting drag resistance." Applications In the design of high-performance vehicles, a nolder of limited size can significantly increase or decrease the lift (Cz) of a vehicle's aerodynamic profile. Nolders are also used in less high-performance applications, for example forcing an airflow separation alongside a vertical rear window to minimize debris accumulation, e.g., with a small hatchback. Examples Examples include the underside of the LaFerrari, which features a nolder to assist with vehicle dynamics. The Ferrari 599 GTO features prominent flanking aerodynamic fins or flying buttresses aside the rear window, maximizing air flow to a linear rear nolder. The Ferrari 355 has a similar nolder profile at the upper portion of its tail. The Koenigsegg CCXR features an optional front splitter with a nolder, and the spoiler at the rear bumper of the Maserati 320S features a supplementary nolder to increase the vertical load to the rear. 
Early versions of the highly aerodynamic 1982 Ford Sierra suffered crosswind instability, which was addressed in 1985 with the addition of aerodynamic nolders on the rear edge of the rubber seals of the rear-most side windows. For airflow management and to assist in keeping the rear window free from dirt, nolders are integral to the rearmost vertical pillar of Mini Cooper models and the Fiat 500L. See also Diffuser (automotive) Servo tab Trim tab References Automotive body parts Automotive styling features Aerodynamics Formula One
Nolder
[ "Chemistry", "Engineering" ]
500
[ "Aerospace engineering", "Aerodynamics", "Fluid dynamics" ]
56,307,570
https://en.wikipedia.org/wiki/Dirichlet%27s%20ellipsoidal%20problem
In astrophysics, Dirichlet's ellipsoidal problem, named after Peter Gustav Lejeune Dirichlet, asks under what conditions there can exist an ellipsoidal configuration at all times of a homogeneous rotating fluid mass in which the motion, in an inertial frame, is a linear function of the coordinates. Dirichlet's basic idea was to reduce the Euler equations to a system of ordinary differential equations such that the position of a fluid particle in a homogeneous ellipsoid at any time is a linear and homogeneous function of the initial position of the fluid particle, using the Lagrangian framework instead of the Eulerian one. History In the winter of 1856–57, Dirichlet found some solutions of the Euler equations and he presented those in his lectures on partial differential equations in July 1857 and published the results in the same month. His work was left unfinished at his sudden death in 1859, but his notes were collated and published by Richard Dedekind posthumously in 1860. Bernhard Riemann said, "In his posthumous paper, edited for publication by Dedekind, Dirichlet has opened up, in a most remarkable way, an entirely new avenue for investigations on the motion of a self-gravitating homogeneous ellipsoid. The further development of his beautiful discovery has a particular interest to the mathematician even apart from its relevance to the forms of heavenly bodies which initially instigated these investigations." Riemann–Lebovitz formulation Dirichlet's problem was generalized by Bernhard Riemann in 1860 and by Norman R. Lebovitz in modern form in 1965. Let a_1(t), a_2(t), a_3(t) be the semi-axes of the ellipsoid, which vary with time. Since the ellipsoid is homogeneous, the constancy of mass requires the constancy of the volume of the ellipsoid, same as the initial volume. Consider an inertial frame (X_1, X_2, X_3) and a rotating frame (x_1, x_2, x_3), with L(t) being the linear transformation such that x = LX, and it is clear that L is orthogonal, i.e., LL^T = L^TL = I. 
We can define an anti-symmetric matrix with this, where we can write the dual of as (and ), where is nothing but the time-dependent rotation of the rotating frame with respect to the inertial frame. Without loss of generality, let us assume that the inertial frame and the moving frame coincide initially, i.e., . By definition, Dirichlet's problem is looking for a solution which is a linear function of the initial condition . Let us assume the following form, and we define a diagonal matrix with diagonal elements being the semi-axes of the ellipsoid; then the above equation can be written in matrix form as where . It can then be shown that the matrix transforms the vector linearly to the same vector at any later time , i.e., . From the definition of , we can see that the vector represents a unit normal on the surface of the ellipsoid (true only at the boundary) since a fluid element on the surface moves with the surface. Therefore, we see that transforms one unit vector on the boundary to another unit vector on the boundary; in other words, it is orthogonal, i.e., . In a similar manner to before, we can define another anti-symmetric matrix as , where its dual is defined as (and ). Dirichlet's ellipsoidal problem then reduces to finding whether the matrix exists that determines the vector and that it is expressible in terms of two orthogonal matrices as in where, further Let be the velocity field seen by the observer at rest in the moving frame, which can be regarded as the internal fluid motion since this excludes the uniform rotation seen by the inertial observer. This internal motion is found to be given by whose components, explicitly, are given by These three components show that the internal motion is composed of two parts: one with a uniform vorticity with components and the other with a stagnation point flow, i.e., . In particular, the physical meaning of can be attributed to the uniform-vorticity motion. 
The pressure is found to assume a quadratic form, as derived from the equation of motion (and using the vanishing condition at the surface) given by where is the central pressure, so that . Substituting this back in the equation of motion leads to where is the gravitational constant and is a diagonal matrix, whose diagonal elements are given by The tensor momentum equation and the conservation of mass equation, i.e., provide us with ten equations for the ten unknowns, Dedekind's theorem It states that if a motion determined by is admissible under the conditions of Dirichlet's problem, then the motion determined by the transpose of is also admissible. In other words, the theorem can be stated as: for any state of motion that preserves an ellipsoidal figure, there is an adjoint state of motion that preserves the same ellipsoidal figure. By taking the transpose of the tensor momentum equation, one sees that the roles of and are interchanged. If there is a solution for , then for the same , there exists another solution with the roles of and interchanged. But interchanging and is equivalent to replacing by . The following relations confirm the previous statement. where, further The typical configuration of this theorem is the Jacobi ellipsoid, and its adjoint is called the Dedekind ellipsoid; in other words, both ellipsoids have the same shape, but their internal fluid motions are different. Integrals The tensor momentum equation admits three integrals, with regard to conservation of energy, angular momentum and circulation. The energy integral is found to be where Next, we have the integral which signifies the conservation of , where the angular momentum components are given by wherein is the total mass. Since the problem is invariant to the interchange of and , from the above integral, we obtain where we substituted the formula for in terms of the vorticity vector . 
This integral signifies the conservation of , where the circulation components (in the inertial frame) are given by See also Maclaurin ellipsoid Jacobi ellipsoid References Astrophysics Fluid dynamics Equations of astronomy
Dirichlet's ellipsoidal problem
[ "Physics", "Chemistry", "Astronomy", "Engineering" ]
1,272
[ "Concepts in astronomy", "Chemical engineering", "Astrophysics", "Equations of astronomy", "Piping", "Astronomical sub-disciplines", "Fluid dynamics" ]
56,310,192
https://en.wikipedia.org/wiki/Self-propulsion
Self-propulsion is the autonomous displacement of nano-, micro- and macroscopic natural and artificial objects that contain their own means of motion. Self-propulsion is driven mainly by interfacial phenomena. Various mechanisms of self-propulsion have been introduced and investigated, exploiting phoretic effects, gradient surfaces, breaking of the wetting symmetry of a droplet on a surface, the Leidenfrost effect, self-generated hydrodynamic and chemical fields originating from geometrical confinement, and soluto- and thermo-capillary Marangoni flows. Self-propelled systems show potential as microfluidic devices and micro-mixers. Self-propelled liquid marbles have been demonstrated. See also Self propelled particles References Mechanical engineering Surface science
Self-propulsion
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
159
[ "Surface science", "Applied and interdisciplinary physics", "Condensed matter physics", "Mechanical engineering" ]
56,312,430
https://en.wikipedia.org/wiki/Breath-figure%20self-assembly
Breath-figure self-assembly is the self-assembly process in which honeycomb micro-scaled polymer patterns are formed by the condensation of water droplets. "Breath-figure" refers to the fog that forms when water vapor contacts a cold surface. In the modern era, systematic study of breath-figure water condensation was carried out by Aitken and Rayleigh, among others. Half a century later, interest in breath-figure formation was revived in view of the study of atmospheric processes, in particular the extended study of dew formation, which turned out to be a complicated physical process. The experimental and theoretical study of dew formation has been carried out by Beysens. Thermodynamic and kinetic aspects of dew formation, which are crucial for understanding the formation of breath-figure-inspired polymer patterns, will be addressed further in detail. A breakthrough in the application of breath-figure patterns was achieved in 1994–1995, when Widawski, François and Pitois reported the manufacture of polymer films with a self-organized, micro-scaled, honeycomb morphology using the breath-figure condensation process. The reported process was based on rapidly evaporating polymer solutions exposed to humidity. An introduction to the experimental techniques involved in manufacturing micropatterned surfaces is supplied in reference 1; an image representing a typical breath-figure-inspired honeycomb pattern is shown in Figure 1. The main physical processes involved are: 1) evaporation of the polymer solution; 2) nucleation of water droplets; 3) condensation of water droplets; 4) growth of droplets; 5) evaporation of water; 6) solidification of the polymer, giving rise to the eventual micro-porous pattern. This experimental technique yields well-ordered, hierarchical, honeycomb surface patterns. 
A variety of experimental techniques were successfully exploited for the formation of breath-figures self-assembly induced patterns including drop-casting, dip-coating and spin-coating. Adapted techniques to achieve varied pattern morphologies and hierarchical designs have also been developed. The characteristic dimension of pores is usually close to 1 μm, whereas the characteristic lateral dimension of the large-scale patterns is ca. 10–50 μm. See also Self-assembly Marangoni effect Droplet cluster References Physical phenomena Self-organization
Breath-figure self-assembly
[ "Physics", "Mathematics" ]
472
[ "Self-organization", "Physical phenomena", "Dynamical systems" ]
66,121,195
https://en.wikipedia.org/wiki/Matrix%20%28composite%29
In materials science, a matrix is a constituent of a composite material. Functions A matrix serves the following functions: It binds the fiber reinforcement. It gives the composite component its shape and directs its surface quality. Organic Matrices Traditional materials such as glues and muds have long been used as matrices for adobe and papier-mâché. The most common matrices are polymers (mainly used for fibre-reinforced plastics). The most common polymer-based composite materials, which include carbon fibre, fibreglass and Kevlar, typically involve at least two parts: the resin and the substrate. Asphalt concrete, which is often used in the construction of roads, has a matrix called bitumen. Mud (wattle and daub) has seen considerable use. Epoxy is used as a structural glue or structural matrix material in the aerospace industry. Epoxy resin is nearly transparent when cured. Polyester resin is suitable for most backyard projects. It tends to have a yellowish colour and is often used in the construction of surfboards and for marine applications. Polyester parts are usually coated, as the resin tends to deteriorate over time and is sensitive to ultraviolet light. A peroxide is used as the hardener for polyester resin, most commonly MEKP (methyl ethyl ketone peroxide). A curing reaction is initiated when the peroxide is combined with the resin and decomposes to generate free radicals. In these systems, hardeners are often called catalysts, but they do not meet the strict chemical definition of a catalyst, as they do not reappear unchanged at the end of the reaction. Vinyl ester resin has a lower viscosity than polyester resin and is more transparent. It tends to have a purplish to bluish to greenish tint. Its price is similar to that of polyester resin, and it uses the same hardeners (at a similar mix ratio). 
Compared to polyester resin, it degrades less over time and is more flexible. Vinyl ester resin is generally considered fuel resistant, although it will melt in contact with gasoline. Shape memory polymer (SMP) resins are materials whose shape can be modified repeatedly by heating above their glass transition temperature (Tg). They become elastic and flexible when heated, allowing for easy configuration, and maintain their new shape when cooled. When reheated above their Tg, they return to their original shape. The benefit of these resins is that they can be shaped and reshaped repeatedly without losing their material properties. Depending on their formulation, they have varying visual characteristics. Acrylate-based SMP resins can be used in very cold temperature applications, such as sensors that show whether perishable goods have warmed above a particular maximum temperature; cyanate-ester-based resins in space applications; and epoxy-based resins in auto body and outdoor equipment repairs. Inorganic Matrices Cement (concrete), ceramics, and sometimes glasses and metals are employed. Unusual matrices such as ice are sometimes proposed, as in pykrete. References Composite materials
Matrix (composite)
[ "Physics" ]
675
[ "Materials", "Composite materials", "Matter" ]
66,124,572
https://en.wikipedia.org/wiki/Maria%20Prandini
Maria Prandini (born 8 September 1969) is an Italian electrical engineer whose research topics have included control theory, pursuit–evasion, and air traffic control. She is a professor at the Polytechnic University of Milan. Education and career Prandini was born in Brescia, earned a laurea in electrical engineering in 1994 from the Polytechnic University of Milan, and completed her Ph.D. in 1998 at the University of Brescia, with Marco Claudio Campi as her doctoral supervisor. After postdoctoral research with Shankar Sastry at the University of California, Berkeley, and visiting positions at Delft University of Technology and the University of Cambridge, she became an assistant professor at the Polytechnic University of Milan in 2002. Recognition In 2020, Prandini was named an IEEE Fellow, affiliated with the IEEE Control Systems Society, "for contributions to stochastic, hybrid and distributed control systems theory". References External links Home page 1969 births Living people Italian electrical engineers Italian women engineers Control theorists Polytechnic University of Milan alumni University of Brescia alumni Academic staff of the Polytechnic University of Milan Fellows of the IEEE
Maria Prandini
[ "Engineering" ]
223
[ "Control engineering", "Control theorists" ]
66,125,330
https://en.wikipedia.org/wiki/Quantum%20logic%20spectroscopy
Quantum logic spectroscopy (QLS) is an ion control scheme that maps quantum information between two co-trapped ion species. Quantum logic operations allow desirable properties of each ion species to be utilized simultaneously. This enables work with ions and molecular ions that have complex internal energy level structures which preclude laser cooling and direct manipulation of state. QLS was first demonstrated by NIST in 2005. QLS was first applied to state detection in diatomic molecules in 2016 by Wolf et al., and later applied to state manipulation and detection of diatomic molecules by the Leibfried group at NIST in 2017. Overview Lasers are used to couple each ion's internal and external motional degrees of freedom. The Coulomb interaction between the two ions couples their motion. This allows the internal state of one ion to be transferred to the other. An auxiliary "logic ion" provides cooling, state preparation, and state detection for the co-trapped "spectroscopy ion," which has an electronic transition of interest. The logic ion is used to sense and control the internal and external state of the spectroscopy ion. The logic ion is selected to have a simple energy level structure that can be directly laser cooled, often an alkaline earth ion. The laser-cooled logic ion provides sympathetic cooling to the spectroscopy ion, which lacks an efficient laser cooling scheme. Cooling the spectroscopy ion reduces the number of rotational and vibrational states that it can occupy. The remaining states are then accessed by driving stimulated Raman transitions with a laser. The light used for driving these transitions is far off-resonant from any electronic transitions. This enables control over the spectroscopy ion's rotational and vibrational state. Thus far, QLS is limited to diatomic molecules with a mass within 1 AMU of the laser-cooled "logic" ion. 
This is largely due to poorer coupling of the motional states of the occupants of the ion trap as the mass mismatch becomes larger. Other techniques more tolerant of large mass mismatches are better suited to cases where the ultimate resolution of QLS is not needed, but single-molecule sensitivity is still desired. State transfer protocol The internal states of each ion can be treated as a two level system, with eigenstates denoted and . One of the ion's normal modes is chosen to be the transfer mode used for state mapping. This motional mode must be shared by both ions, which requires both ions be similar in mass. The normal mode has harmonic oscillator states denoted as , where n is the nth level of mode m. The wave function denotes both ions and the transfer mode in the ground state. S and L represent the spectroscopy and logic ion. The spectroscopy ion's spectroscopy transition is then excited with a laser, producing the state: A red sideband pi-pulse is then driven on the spectroscopy ion, resulting in the state: At this stage, the spectroscopy ion's internal state has been mapped on to the transfer mode. The internal state of the ion has been coupled to its motional mode. The state is unaffected by the pulse of light carrying out this operation because the state does not exist. QLS takes advantage of this in order to map the spectroscopy ion's state onto the transfer mode. A final red sideband pi-pulse is applied to the logic ion, resulting in the state: The spectroscopy ion's initial state has been mapped onto the logic ion, which can then be detected. References Spectroscopy
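The kets in the state transfer protocol above were lost in extraction. A hedged reconstruction of the mapping sequence, writing |↓⟩ and |↑⟩ for the two internal levels of each ion and |n⟩ for the Fock states of the transfer mode (notation assumed, following the standard presentation of the protocol), is:

```latex
% spectroscopy pulse prepares a superposition on the spectroscopy ion S:
\bigl(\alpha\,\lvert\downarrow\rangle_S+\beta\,\lvert\uparrow\rangle_S\bigr)\,
  \lvert\downarrow\rangle_L\,\lvert 0\rangle_m
% red-sideband pi-pulse on S maps the superposition onto the transfer mode;
% |down>_S |0>_m is unaffected, since the target |up>_S |-1>_m does not exist:
\;\xrightarrow{\text{RSB }\pi\text{ on }S}\;
\lvert\downarrow\rangle_S\,\lvert\downarrow\rangle_L\,
  \bigl(\alpha\,\lvert 0\rangle_m+\beta\,\lvert 1\rangle_m\bigr)
% red-sideband pi-pulse on L maps the mode state onto the logic ion:
\;\xrightarrow{\text{RSB }\pi\text{ on }L}\;
\lvert\downarrow\rangle_S\,
  \bigl(\alpha\,\lvert\downarrow\rangle_L+\beta\,\lvert\uparrow\rangle_L\bigr)\,
  \lvert 0\rangle_m
```

Reading off the final line: the superposition originally carried by the spectroscopy ion now resides on the logic ion, with the transfer mode back in its ground state, so fluorescence detection of the logic ion reveals the spectroscopy ion's state.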
Quantum logic spectroscopy
[ "Physics", "Chemistry" ]
702
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
66,128,381
https://en.wikipedia.org/wiki/Reinforcement%20%28composite%29
In materials science, reinforcement is a constituent of a composite material which increases the composite's stiffness and tensile strength. Function The reinforcement serves the following functions in a composite: It increases the mechanical properties of the composite. It provides strength and stiffness in one direction, as the reinforcement carries the load along the length of the fibre. Fiber reinforcement The reinforcement considerably hinders crack propagation and normally adds rigidity. Thin fibers can have very high strength, and they can substantially increase the overall properties of the composite provided they are linked mechanically to the matrix. Fiber-reinforced composites come in two types: short-fiber-reinforced and continuous-fiber-reinforced. Sheet moulding and compression moulding operations usually use short and long fibers. These are available in the form of chips, flakes and random mat (which can also be produced from a continuous fibre laid randomly until the desired thickness of the laminate/ply is attained). Continuously reinforced materials usually form a laminated or layered structure. The continuous and woven fiber styles are available in various forms: pre-impregnated with the given matrix (resin), dry, uni-directional tapes of different widths, plain weave, harness satins, braided, and stitched. Common reinforcing fibers include carbon fibre, cellulose (wood/paper fibre and straw), glass fibre and high-strength polymers such as aramid. Silicon carbide fibers are used for high-temperature applications. Particle reinforcement Particle reinforcement adds an effect similar to precipitation hardening in metals and ceramics. Large particles impede dislocation movement and crack propagation, and contribute to the composite's Young's modulus. 
In general, the particle reinforcement effect on Young's modulus lies between values predicted by as a lower bound and as an upper bound. It can therefore be expressed as a linear combination of a contribution from the matrix and some weighted contribution from the particles. Where Kc is an experimentally derived constant between 0 and 1. This range of values for Kc reflects that particle-reinforced composites are not characterized by the isostrain condition. Similarly, the tensile strength can be modeled by an equation of similar construction where Ks is a similarly bounded constant, not necessarily of the same value as Kc. The true values of Kc and Ks vary based on factors including particle shape, particle distribution, and the particle/matrix interface. Knowing these parameters, the mechanical properties can be modeled based on effects from grain boundary strengthening, dislocation strengthening, and Orowan strengthening. The most common particle-reinforced composite is concrete, a mixture of gravel and sand usually strengthened by the addition of small rocks or sand. Metals are often reinforced with ceramics to increase strength at the cost of ductility. Finally, polymers and rubber are often reinforced with carbon black, commonly used in auto tires. References Composite materials
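The elided bounds in the paragraph above are conventionally the isostrain (Voigt, rule-of-mixtures) upper bound and the isostress (Reuss, inverse rule-of-mixtures) lower bound. A minimal Python sketch of this bracketing, with the interpolation constant treated as purely empirical (function names and example moduli are illustrative, not from the source):

```python
def voigt_upper(E_m, E_p, V_p):
    """Isostrain (rule-of-mixtures) upper bound on the composite modulus."""
    return E_m * (1.0 - V_p) + E_p * V_p

def reuss_lower(E_m, E_p, V_p):
    """Isostress (inverse rule-of-mixtures) lower bound."""
    return (E_m * E_p) / (E_p * (1.0 - V_p) + E_m * V_p)

def modulus_estimate(E_m, E_p, V_p, K_c):
    """Interpolate between the bounds with an empirical constant K_c in [0, 1]."""
    lo = reuss_lower(E_m, E_p, V_p)
    hi = voigt_upper(E_m, E_p, V_p)
    return lo + K_c * (hi - lo)

# Illustrative numbers: epoxy matrix (~3 GPa) with 30 vol% alumina (~380 GPa).
print(round(voigt_upper(3.0, 380.0, 0.3), 1))  # upper bound, ~116.1 GPa
print(round(reuss_lower(3.0, 380.0, 0.3), 2))  # lower bound, ~4.27 GPa
```

For 30 vol% stiff particles in a compliant matrix the two bounds differ by more than an order of magnitude, which is why Kc must be fitted to experiment rather than derived.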
Reinforcement (composite)
[ "Physics" ]
592
[ "Materials", "Composite materials", "Matter" ]
76,328,505
https://en.wikipedia.org/wiki/N1-Acetyl-5-methoxykynuramine
N1-Acetyl-5-methoxykynuramine (AMK) is a metabolite of melatonin that could improve memory by acting on melatonin receptors. AMK is produced from melatonin via the kynuramine pathway in the brain. It significantly increased the phosphorylation of both ERK and CREB in the hippocampus. It also helps scavenge free radicals: AMK is highly reactive towards dioxygen (O2) radicals because of its N2-amino group. References Melatonin Methoxy compounds Anilines Metabolism
N1-Acetyl-5-methoxykynuramine
[ "Chemistry", "Biology" ]
139
[ "Biotechnology stubs", "Biochemistry stubs", "Cellular processes", "Biochemistry", "Metabolism" ]
73,384,479
https://en.wikipedia.org/wiki/Common%20envelope%20jets%20supernova
Common envelope jets supernova (CEJSN) is a type of supernova in which the explosion is caused by the merger of a giant or supergiant star with a compact star such as a neutron star or a black hole. As the compact star plunges into the envelope of the giant/supergiant, it begins to accrete matter from the envelope and launches jets that can disrupt the envelope. Often, the compact star eventually merges with the core of the giant/supergiant; other times the infall stops before core merger. This kind of supernova has been invoked to explain certain kinds of supernova-like phenomena, including iPTF14hls. History and process In order to explain the unusual supernova iPTF14hls, Soker and Gilkis (2018) proposed a model in which astrophysical jets eject the common envelope of a merging star; such events may constitute 10^−6 to 2×10^−5 of all core-collapse supernovae. In their model, iPTF14hls was a binary star consisting of a giant star and a neutron star. The latter plunged into the envelope of the former and began to accrete material, emitting neutrinos as it did so but without substantially deforming the giant. Eventually, it would have reached the core of the giant and accreted mass at a sufficient rate to produce jets. These jets emanate from the polar regions of the neutron star and can effectively eject matter in these directions, but do not effectively act on material accreting along the neutron star's equatorial plane, which thus continues to reach the neutron star. The jets impact the envelope, inflating it in the form of large bubbles ("cocoons") that remove material from the envelope at speeds approaching a tenth of the speed of light. This causes the envelope of the giant star to be ejected over a timespan of a few hundred days, before the core itself is consumed in about a day, producing gravitational waves. 
The exiting jets can interact with pre-existing gas clouds around the giant, which creates the luminosity of the supernova and can last for timespans reaching years. Depending on the original architecture of the stellar system, many variations on this general process are possible, such as when the incoming object is itself a binary, for example a neutron star–neutron star pair or another combination of a neutron star with a companion. In these cases, the binary may break up during the merger, with one of the binary objects ejected. The original core of the star may be tidally disrupted, forming an accretion disk around the neutron star. The incoming compact object may instead be a black hole; these may be the source of cosmic ultra-high-energy neutrinos. Several processes can cause the neutron star to penetrate the giant. Giant stars grow in size near the end of their evolution and can envelop a companion star in the process. When a star goes supernova and produces a neutron star, the neutron star receives a "kick" that can cause it to penetrate the other star. Finally, interactions of the neutron star–giant binary with a third star, typically the third member of the system, can cause the neutron star's orbit to contract until it interacts with the envelope of the giant. Concomitant processes Already before the actual penetration, tidal acceleration of the giant's envelope by the neutron star causes it to expand, possibly clearing the polar regions of the giant of matter before the merger begins. This lets the jets exit the star from the poles before the neutron star merges with the core; otherwise they are only visible at the beginning of the envelope interaction or when the actual core interacts with the neutron star. The energy that the jets inject into the envelope can cause it to expand, so that even when the orbit takes the neutron star out of the envelope, accretion and jet launching continue. 
These jets are weaker than the ones launched inside the original envelope, but are more efficient at creating radiation as they interact with already-emplaced gas. A key requirement for the occurrence of a common envelope jets supernova is that the neutron star can form an accretion disk as it begins to absorb the material of the companion. Hydrodynamic simulations have offered contrasting results on whether this is possible and on the accretion rate resulting from the interaction, although there is empirical evidence that at least white dwarfs, whose properties resemble those of neutron stars, can generate such disks and jets. The process requires high accretion rates, which in turn require that large amounts of material and energy be removed from the proximity of the neutron star; this is accomplished through the emission of neutrinos, which carry energy away. The conditions during a CEJSN may allow the r-process of nucleosynthesis to take place in the jets, in particular when a binary neutron star is involved, since unlike the core of a conventional supernova the CEJSN is not an effective neutrino source. Unlike regular neutron star mergers, a CEJSN is not delayed by the time it takes for the neutron star binary to shrink through gravitational wave emission, and thus CEJSNs can contribute r-process elements early in the history of the universe. The r-process element enrichment of the galaxy Reticulum II may be explained through a CEJSN, which efficiently distributed r-process elements across the galaxy. Examples Apart from iPTF14hls, other events such as the supernovae SN1979c, SN1998e, SN2019zrk, SN 2020faa and the radio transient VT J121001+495647 have been proposed to be CEJSNs. The gamma-ray burst GRB 101225A could have formed through a common envelope jets supernova-like interaction with a helium star. A CEJSN where the core of the companion star was disrupted may have given rise to the enigmatic supernova remnant W49B. 
Fast blue optical transients might constitute CEJSNs as well. Impostors This process does not always result in the immediate destruction of the giant; if the giant star survives, a supernova impostor can occur instead. Possible examples are the supernova SN 2009ip and the transient AT2018cow. The mass loss the giant suffers during the interaction can cause the orbit of the neutron star to expand and thus to exit the giant's envelope again; in this way repeated explosions can occur, since the core is not destroyed by the merger. Eventually, a stripped core can be left that will itself go supernova and form another neutron star; this may be a major source of binary neutron stars. References Sources Supernovae Stellar evolution Binary stars
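The "high accretion rates" discussed above are commonly estimated, as a zeroth-order guess, from the Bondi–Hoyle–Lyttleton formula Ṁ ≈ 4πG²M²ρ/v³. A back-of-the-envelope sketch in Python; the density and velocity values are illustrative assumptions, not figures from the CEJSN literature, and feedback from the jets (ignored here) reduces the true rate:

```python
import math

G = 6.674e-8      # gravitational constant, cgs (cm^3 g^-1 s^-2)
M_SUN = 1.989e33  # solar mass, g

def bondi_hoyle_rate(m_ns_msun, rho, v_rel):
    """Bondi-Hoyle-Lyttleton accretion rate (g/s) for a compact object of
    mass m_ns_msun (solar masses) moving at v_rel (cm/s) through gas of
    density rho (g/cm^3).  Jet feedback, ignored here, lowers the real rate."""
    m = m_ns_msun * M_SUN
    return 4.0 * math.pi * G**2 * m**2 * rho / v_rel**3

# Illustrative values only: envelope density 1e-6 g/cm^3 and relative
# velocity ~100 km/s for a 1.4 Msun neutron star.
mdot = bondi_hoyle_rate(1.4, 1.0e-6, 1.0e7)
print(f"{mdot * 3.156e7 / M_SUN:.1f} Msun/yr")
```

Even this crude estimate gives rates many orders of magnitude above the Eddington limit, which is why neutrino cooling is needed to carry the accretion energy away.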
Common envelope jets supernova
[ "Physics", "Chemistry", "Astronomy" ]
1,383
[ "Supernovae", "Astronomical events", "Astrophysics", "Stellar evolution", "Explosions" ]
73,386,024
https://en.wikipedia.org/wiki/Barium%20chloride%20fluoride
Barium chloride fluoride is an inorganic chemical compound of barium, chlorine, and fluorine. Its chemical formula is . The compound occurs naturally as the mineral zhangpeishanite of the matlockite group. One of the deposits where the mineral is found is Bayan Obo in China. Synthesis Barium chloride fluoride can be prepared by precipitation from a solution of barium chloride and ammonium fluoride. Physical properties Barium chloride fluoride forms white crystals. The crystal structure of BaClF is a tetragonal distortion of the fluorite structure type. The compound is poorly soluble in water. References Barium compounds Chlorine compounds Fluorine compounds Mixed anion compounds
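The precipitation described above can be summarized by the following balanced equation (reconstructed here for illustration; the source's own equation was lost in extraction):

```latex
\mathrm{BaCl_2\,(aq) + NH_4F\,(aq) \longrightarrow BaClF\!\downarrow\; + NH_4Cl\,(aq)}
```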
Barium chloride fluoride
[ "Physics", "Chemistry" ]
150
[ "Matter", "Inorganic compounds", "Mixed anion compounds", "Inorganic compound stubs", "Ions" ]
73,388,311
https://en.wikipedia.org/wiki/Non%20B-DNA
Non-B DNA refers to DNA conformations that differ from the canonical B-DNA conformation, the most common form of DNA found in nature at neutral pH and physiological salt concentrations. Non-B DNA structures can arise due to various factors, including DNA sequence, length, supercoiling, and environmental conditions. Non-B DNA structures can have important biological roles, but they can also cause problems, such as genomic instability and disease. Types of Non-B DNA Non-B DNA can be classified into several types, including A-DNA, Z-DNA, H-DNA, G-quadruplexes, and Triplexes (Triple-stranded DNA). A-DNA is a right-handed double helix structure for RNA-DNA duplexes and RNA-RNA duplexes that is less common than the more well-known B-DNA structure. A-DNA is a form of DNA that occurs when the DNA is in a dehydrated state or is bound to certain proteins, and it has a shorter and wider helix than B-DNA. The helix of A-DNA is also tilted and compressed compared to B-DNA. A-DNA is believed to play a role in certain biological processes, such as DNA replication and gene expression. Z-DNA is a left-handed helix with a zigzag backbone, in contrast to the right-handed B-DNA helix. It is stabilized by the alternating purine-pyrimidine sequence and can form in regions of DNA with high GC-content, supercoiling, or negative superhelicity. Z-DNA has been implicated in gene regulation and immunity, but it can also induce DNA damage and inflammation. H-DNA is a triple-stranded DNA structure that forms when two homologous DNA strands come together and one strand displaces the other. H-DNA is stabilized by Hoogsteen base pairing and can cause mutations, rearrangements, and genome instability. H-DNA is thought to be involved in DNA replication, recombination, and repair, but its precise biological functions remain unclear. G-quadruplexes are four-stranded DNA structures formed by guanine-rich sequences. 
G-quadruplexes can form in telomeres, oncogene promoters, and other genomic regions and can affect gene expression, DNA replication, and telomere maintenance. G-quadruplexes are also potential targets for cancer therapy. Triplexes are three-stranded DNA structures formed by the binding of a third strand to a DNA duplex. Triplexes can be formed by pyrimidine-rich or purine-rich third strands, and they can occur in genomic regions with inverted repeats, mirror repeats, or other special sequences. Triplexes can affect DNA replication, transcription, and recombination, but they can also cause DNA damage and mutagenesis. Implications of Non-B DNA Non-B DNA can have significant implications for DNA biology and human health. For example, Z-DNA has been implicated in immunity and autoimmune diseases, such as lupus and arthritis. H-DNA has been implicated in genomic instability and cancer, and G-quadruplexes have been linked to telomere maintenance, oncogene activation, and cancer. Triplexes have been associated with genetic diseases, such as fragile X syndrome and Huntington's disease. References DNA Biochemistry terminology
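As a concrete illustration of the "guanine-rich sequences" mentioned above, the widely used putative quadruplex-forming motif (four runs of three or more guanines separated by loops of one to seven nucleotides) can be located with a simple regular-expression scan. This is a sketch of the motif definition only, not a structure predictor:

```python
import re

# Putative G-quadruplex-forming motif: four runs of >=3 G's separated by
# loops of 1-7 characters.  \w is permissive; a stricter scanner would
# use [ACGT] for the loop characters.
PQS = re.compile(r"G{3,}\w{1,7}G{3,}\w{1,7}G{3,}\w{1,7}G{3,}")

def find_pqs(seq):
    """Return (start, matched substring) for each putative quadruplex motif."""
    return [(m.start(), m.group()) for m in PQS.finditer(seq.upper())]

# The human telomeric repeat (TTAGGG)n is the classic G-quadruplex former.
hits = find_pqs("ttaggg" * 4 + "cctaa")
print(hits[0])  # motif starts at the first GGG run (index 3)
```

Overlapping or longer-loop quadruplexes are missed by this greedy single-pass scan, which is one reason dedicated tools go beyond a plain regex.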
Non B-DNA
[ "Chemistry", "Biology" ]
705
[ "Biochemistry", "Biochemistry terminology" ]
73,392,410
https://en.wikipedia.org/wiki/Oxygen%20monofluoride
Oxygen monofluoride is an unstable binary inorganic compound radical of fluorine and oxygen with the chemical formula OF. This is the simplest of many oxygen fluorides. Synthesis OF is a radical that can be formed by thermal or photolytic decomposition of . A reaction of fluorine and ozone: Atmosphere Oxygen- and fluorine-containing radicals like and OF occur in the atmosphere. These, along with other halogen radicals, have been implicated in the destruction of ozone in the atmosphere. However, oxygen monofluoride radicals are assumed not to play as big a role in ozone depletion, because free fluorine atoms in the atmosphere are believed to react with methane to produce hydrofluoric acid, which precipitates in rain. References Oxygen fluorides Diatomic molecules Free radicals
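The equations in the synthesis section were lost in extraction. For illustration, two commonly cited routes to the OF radical (reconstructed, assuming the decomposed parent compound is oxygen difluoride) are:

```latex
% photolytic decomposition of oxygen difluoride:
\mathrm{OF_2 \;\xrightarrow{\;h\nu\;}\; OF^{\bullet} + F^{\bullet}}
\qquad
% fluorine atoms reacting with ozone:
\mathrm{F^{\bullet} + O_3 \longrightarrow OF^{\bullet} + O_2}
```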
Oxygen monofluoride
[ "Physics", "Chemistry", "Biology" ]
168
[ "Oxygen fluorides", "Molecules", "Free radicals", "Oxidizing agents", "Senescence", "Biomolecules", "Diatomic molecules", "Matter" ]
73,392,987
https://en.wikipedia.org/wiki/Iodine%20monoxide
Iodine monoxide is a binary inorganic compound of iodine and oxygen with the chemical formula IO•. A free radical, this compound is the simplest of many iodine oxides. It is similar to the oxygen monofluoride, chlorine monoxide and bromine monoxide radicals. Synthesis Iodine monoxide can be obtained by the reaction between iodine and oxygen: Chemical properties Iodine monoxide decomposes to its constituent elements: Iodine monoxide reacts with nitric oxide: Atmosphere Atmospheric iodine atoms (e.g. from iodomethane) can react with ozone to produce the iodine monoxide radical: This process can contribute to ozone depletion. References Iodine compounds Diatomic molecules Oxides Free radicals
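The reaction equations above were lost in extraction. Balanced versions consistent with the surrounding prose (reconstructed for illustration, not taken from the source) would be:

```latex
% formation from the elements:
\mathrm{I_2 + O_2 \longrightarrow 2\,IO^{\bullet}}
\qquad
% decomposition back to the elements:
\mathrm{2\,IO^{\bullet} \longrightarrow I_2 + O_2}
\qquad
% reaction with nitric oxide (analogous to ClO + NO):
\mathrm{IO^{\bullet} + NO \longrightarrow I^{\bullet} + NO_2}
\qquad
% atmospheric formation from iodine atoms and ozone:
\mathrm{I^{\bullet} + O_3 \longrightarrow IO^{\bullet} + O_2}
```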
Iodine monoxide
[ "Physics", "Chemistry", "Biology" ]
154
[ "Molecules", "Free radicals", "Oxides", "Salts", "Senescence", "Biomolecules", "Diatomic molecules", "Matter" ]