id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
45,012,199 | https://en.wikipedia.org/wiki/Prism%20graph | In the mathematical field of graph theory, a prism graph is a graph that has one of the prisms as its skeleton.
Examples
The individual graphs may be named after the associated solid:
Triangular prism graph – 6 vertices, 9 edges
Cubical graph – 8 vertices, 12 edges
Pentagonal prism graph – 10 vertices, 15 edges
Hexagonal prism graph – 12 vertices, 18 edges
Heptagonal prism graph – 14 vertices, 21 edges
Octagonal prism graph – 16 vertices, 24 edges
...
Although geometrically the star polygons also form the faces of a different sequence of (self-intersecting and non-convex) prismatic polyhedra, the graphs of these star prisms are isomorphic to the prism graphs, and do not form a separate sequence of graphs.
Construction
Prism graphs are examples of generalized Petersen graphs, with parameters GP(n,1).
They may also be constructed as the Cartesian product of a cycle graph with a single edge.
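Both constructions are easy to verify computationally. A small sketch (assuming Python with the networkx library, which is not part of the article):

```python
# Sketch (not from the article): building an n-gonal prism graph two ways
# and checking that they agree, using the networkx library.
import networkx as nx

def prism_cartesian(n):
    """Prism graph as the Cartesian product of a cycle C_n with a single edge K_2."""
    return nx.cartesian_product(nx.cycle_graph(n), nx.path_graph(2))

def prism_generalized_petersen(n):
    """Prism graph as the generalized Petersen graph GP(n, 1):
    an outer n-cycle u_0..u_{n-1}, an inner n-cycle v_0..v_{n-1},
    and spokes u_i -- v_i."""
    G = nx.Graph()
    for i in range(n):
        G.add_edge(('u', i), ('u', (i + 1) % n))  # outer cycle
        G.add_edge(('v', i), ('v', (i + 1) % n))  # inner cycle (step k = 1)
        G.add_edge(('u', i), ('v', i))            # spoke
    return G

for n in range(3, 9):
    A, B = prism_cartesian(n), prism_generalized_petersen(n)
    # e.g. n = 3 gives the triangular prism: 6 vertices, 9 edges
    assert A.number_of_nodes() == 2 * n and A.number_of_edges() == 3 * n
    assert nx.is_isomorphic(A, B)  # the two constructions coincide
```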
As with many vertex-transitive graphs, the prism graphs may also be constructed as Cayley graphs. The order-n dihedral group is the group of symmetries of a regular n-gon in the plane; it acts on the n-gon by rotations and reflections. It can be generated by two elements, a rotation by an angle of 2π/n and a single reflection, and its Cayley graph with this generating set is the prism graph. Abstractly, the group has the presentation $\langle r, f \mid r^n = f^2 = (rf)^2 = 1 \rangle$ (where r is a rotation and f is a reflection or flip) and the Cayley graph has r and f (or r, $r^{-1}$, and f) as its generators.
The n-gonal prism graphs with odd values of n may be constructed as circulant graphs $C_{2n}(2, n)$.
However, this construction does not work for even values of n.
Properties
The graph of an n-gonal prism has 2n vertices and 3n edges. They are regular, cubic graphs.
Since the prism has symmetries taking each vertex to each other vertex, the prism graphs are vertex-transitive graphs.
As polyhedral graphs, they are also 3-vertex-connected planar graphs. Every prism graph has a Hamiltonian cycle.
Among all biconnected cubic graphs, the prism graphs have within a constant factor of the largest possible number of 1-factorizations. A 1-factorization is a partition of the edge set of the graph into three perfect matchings, or equivalently an edge coloring of the graph with three colors. Every biconnected n-vertex cubic graph has $O(2^{n/2})$ 1-factorizations, and the prism graphs have $\Omega(2^{n/2})$ 1-factorizations.
The number of spanning trees of an n-gonal prism graph is given by the formula

$$T(n) = \frac{n}{2}\left[\left(2+\sqrt{3}\right)^n + \left(2-\sqrt{3}\right)^n\right] - n.$$
For n = 3, 4, 5, ... these numbers are
75, 384, 1805, 8100, 35287, 150528, ... .
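The formula and the sequence above can be cross-checked against Kirchhoff's matrix-tree theorem. A brief verification sketch (assuming Python with numpy; not part of the article):

```python
# Sketch: verify the spanning-tree formula against Kirchhoff's theorem
# for small prism graphs.
import numpy as np

def prism_laplacian(n):
    """Combinatorial Laplacian of the n-gonal prism graph (2n vertices)."""
    N = 2 * n
    A = np.zeros((N, N))
    for i in range(n):
        for (a, b) in [(i, (i + 1) % n),          # outer cycle
                       (n + i, n + (i + 1) % n),  # inner cycle
                       (i, n + i)]:               # spoke
            A[a, b] = A[b, a] = 1
    return np.diag(A.sum(axis=1)) - A

def tau_kirchhoff(n):
    # Any cofactor of the Laplacian counts the spanning trees.
    return round(np.linalg.det(prism_laplacian(n)[1:, 1:]))

def tau_formula(n):
    s = np.sqrt(3.0)
    return round(n / 2 * ((2 + s) ** n + (2 - s) ** n) - n)

for n in range(3, 9):
    assert tau_kirchhoff(n) == tau_formula(n)
print([tau_formula(n) for n in range(3, 9)])  # 75, 384, 1805, 8100, 35287, 150528
```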
The n-gonal prism graphs for even values of n are partial cubes. They form one of the few known infinite families of cubic partial cubes, and (except for four sporadic examples) the only vertex-transitive cubic partial cubes.
The pentagonal prism is one of the forbidden minors for the graphs of treewidth three. The triangular prism and cube graph have treewidth exactly three, but all larger prism graphs have treewidth four.
Related graphs
Other infinite sequences of polyhedral graphs formed in a similar way from polyhedra with regular-polygon bases include the antiprism graphs (graphs of antiprisms) and wheel graphs (graphs of pyramids). Other vertex-transitive polyhedral graphs include the Archimedean graphs.
If the two cycles of a prism graph are broken by the removal of a single edge in the same position in both cycles, the result is a ladder graph. If these two removed edges are replaced by two crossed edges, the result is a non-planar graph called a Möbius ladder.
References
Graph families
Regular graphs
Planar graphs | Prism graph | [
"Mathematics"
] | 832 | [
"Planes (geometry)",
"Planar graphs"
] |
45,013,411 | https://en.wikipedia.org/wiki/El%20Carmol%C3%AD | El Carmolí is an area in Los Urrutias, Cartagena municipality, in the Campo de Cartagena comarca, Region of Murcia, southeastern Spain. It used to be the site of a military air base, located near the hill of the same name in the flat Mar Menor area. That hill, one noteworthy spot in the territory, is an ancient volcano that started erupting seven million years ago.
The El Carmolí zone is part of a protected area called Parque Natural de Espacios Abiertos e Islas del Mar Menor.
Aerodrome
A military aerodrome was built in El Carmolí before the Spanish Civil War. It was used by the Spanish Republican Air Force for its High-speed Flying School (Escuela de Vuelo de Alta Velocidad), a training facility for fighter aircraft pilots. Some of the flying instructors based at El Carmolí were from the Soviet Union.
After the Civil War the airfield was used by the Spanish Air Force as an emergency landing facility.
References
External links
Listado de aeródromos de la GCE y regiones aéreas (in Spanish; a list of Spanish Civil War aerodromes and air regions)
Defunct airports in Spain
Geography of the Region of Murcia
Cartagena, Spain | El Carmolí | [
"Chemistry"
] | 257 | [
"Eutrophication",
"Mar Menor"
] |
41,778,406 | https://en.wikipedia.org/wiki/Djehutihotep | Djehutihotep ("Thoth is satisfied") was an ancient Egyptian nomarch of the fifteenth nomos of Upper Egypt ("the Hare") during the twelfth dynasty, c. 1900 BC.
Biography
Djehutihotep lived under the reigns of Amenemhat II, Senusret II, and Senusret III and was one of the most powerful nomarchs of the Middle Kingdom. His tomb, the only one in the necropolis of Deir el-Bersha that was not damaged by the explosives used in recent quarrying, is well known for the great quality of its decorations, a work carried out by an artist named Amenaankhu. For this reason, it is believed that Djehutihotep died before the strict measures reducing the power of the nomarchs were established by Senusret III. Indeed, as their office became hereditary at the end of the Old Kingdom, the nomarchs had become local rulers effectively, although not nominally, independent of the pharaohs. This situation led to excesses in the exercise of power that worsened steadily during the First Intermediate Period. It was not until Senusret's measures were imposed that such abuses of power by the nomarchs stopped posing a threat to the integrity of the Egyptian state.
Being part of the hereditary nomarch system, Djehutihotep's family held the office of local governor for several generations. Djehutihotep was the son of a woman named Satkheperka and an official named Kay. The latter was the brother of Djehutynakht VI and Amenemhat, both of whom became governors of the Hare nome, although Kay did not. Djehutihotep was married to a woman named Hathorhotep. Her parents are not recorded in known sources. Several children of Djehutihotep are known. See "Nomarchs of the Hare nome" for further notes about his genealogy.
Two limestone jambs from Djehutihotep's tomb entrance are now on display in the National Archeological Museum of Florence (inv. nos. 7596 and 7597), having been purchased by Ernesto Schiaparelli in 1891–92. The jambs list his several civil and religious titles, which include Treasurer of the King, Unique friend (of the King), Overseer of the priests, and Great overlord of the Hare nomos (i.e. nomarch). Djehutihotep was represented at the bottom of the jambs.
The "colossus on a sledge"
By far, Djehutihotep is best known for the famous decoration inside his tomb representing the transport of a colossal statue of him. The scene shows the statue being hauled by 172 workers using ropes and a sledge, an effort eased by pouring water in front of the sledge. The statue was carved by a scribe, Sipa son of Hennakhtankh. Unfortunately, no traces of this colossus have ever been found. The depiction itself was irremediably vandalized and destroyed in 1890, and all existing drawings are based on a single photograph taken the previous year by a certain Major Brown.
References
Further reading
Percy Newberry, El Bersheh, part I: The tomb of Tehuti-hetep, London, 1891.
Officials of the Twelfth Dynasty of Egypt
Nomarchs
Colossal statues
Ancient Egyptian royal sealers | Djehutihotep | [
"Physics",
"Mathematics"
] | 727 | [
"Quantity",
"Colossal statues",
"Physical quantities",
"Size"
] |
41,779,903 | https://en.wikipedia.org/wiki/Power%20management%20system | On marine vessels the Power Management System (PMS) is in charge of controlling the electrical system. Its task is to ensure that the electrical system is safe and efficient. If power consumption exceeds the power production capacity, load shedding is used to avoid a blackout. Other features include automatically starting and stopping generators (e.g., diesel generators) as the load varies.
A complete switchboard and generator control system
The marine Power Management System (PMS) is a complete switchboard and generator control system that synchronizes the ship's auxiliary engines, implementing automatic load sharing and optimizing the efficiency of the power plant. It handles various configurations of generators driven by diesel engines, steam turbines, and main engines, in combination with switchboards of varying complexity.
Power Management System (PMS) Operation
Electrical energy is supplied by a combination of the generators chosen according to calculations from each vessel's electric power tables. The PMS decides which combination of generators is best for the current load. The capacity of the generators is such that, if any one generating set is stopped, it is still possible to supply all services necessary for normal operational conditions of propulsion and safety. Furthermore, it is sufficient to start the largest motor on the ship without causing any other motor to stop or having any adverse effect on other equipment in operation. In general, a PMS performs the following functions on a ship (a minimal control-loop sketch follows this list):
Automatic Synchronizing
Automatic Load Sharing
Automatic Start/Stop/Standby of Generators according to Load Demand
Large Motors Automatic Blocking
Load Analysis and Monitoring
Three (3) Phase Management and Voltage Matching
Redundant Power Distribution
Frequency Control
Blackout Start
Selection of Generators Priority (first leading main, second and third standby generators in sequence)
Equal Load Division between generators
Tripping of non-essential load groups (load shedding in two steps)
Blocking of heavy consumers
Operation of a second generator when the first generator is loaded to 80% of its capacity
Operation of the standby generator in case of a malfunction in either of the two generators
Manual, secured, semi-automatic and automatic mode operation selection of generators
Control selection for generators in engine control room
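For illustration, here is a minimal sketch of the load-dependent start/stop and two-step load-shedding logic listed above (hypothetical Python; all thresholds, capacities, and consumer groups are invented example values, not any real vendor's logic):

```python
# Illustrative toy power-management loop: load-dependent generator
# start/stop plus two-step shedding of non-essential loads.
GEN_CAPACITY_KW = 1000           # assumed identical generator rating
START_THRESHOLD = 0.80           # start the next generator at 80% load
STOP_THRESHOLD = 0.40            # stop a generator when lightly loaded
SHED_GROUPS = ["galley", "HVAC"] # non-essential groups, shed in two steps

def manage(load_kw, online, available):
    """Return (generators online, groups shed) for the given load."""
    shed = []
    # Start generators while the running plant is loaded above 80%.
    while online < available and load_kw > START_THRESHOLD * online * GEN_CAPACITY_KW:
        online += 1
    # Stop a generator if the remaining ones would still be lightly loaded.
    while online > 1 and load_kw < STOP_THRESHOLD * (online - 1) * GEN_CAPACITY_KW:
        online -= 1
    # If demand still exceeds capacity, shed non-essential loads in steps.
    step = 0
    while load_kw > online * GEN_CAPACITY_KW and step < len(SHED_GROUPS):
        shed.append(SHED_GROUPS[step])
        load_kw -= 150           # assumed size of each shed group, kW
        step += 1
    return online, shed

print(manage(load_kw=850, online=1, available=3))   # -> (2, [])
print(manage(load_kw=3200, online=3, available=3))  # -> (3, ['galley', 'HVAC'])
```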
Power Management System (PMS) Benefits
Diesel generator monitoring and control
Diesel engine safety and start/stop
Circuit breaker synchronize & connect
Bus line voltage and frequency control
Generator voltage and frequency control
Generator load in kW and %
Symmetric or asymmetric load sharing
Load control with load shedding
Separation of alarm, control and safety
Single or multiple switchboard control
Heavy consumers logic
Automatic start and connect after blackout
Automatic line frequency adjustment
Control of diesel electric propulsion
"Take me home mode", control of PTI with clutches etc.
"One touch auto sequence", automatic mode control
Power Management System (PMS) Applications on Vessel Types
Tanker (ship)
Bulk Carrier
General Cargo Ship
Container Ship
LNG carrier / LPG carrier / Gas carrier
Cruise Ship
Yachts
References
http://www.km.kongsberg.com/ks/web/nokbg0240.nsf/AllWeb/A297BDC3A79BBB36C125726B00387597?OpenDocument
ABB Marine, Integrated Automation System
Electric power | Power management system | [
"Physics",
"Engineering"
] | 654 | [
"Power (physics)",
"Electrical engineering",
"Electric power",
"Physical quantities"
] |
41,781,486 | https://en.wikipedia.org/wiki/Software-defined%20perimeter | A software-defined perimeter (SDP), sometimes referred to as a black cloud, is a method of enhancing computer security. The SDP framework was developed by the Cloud Security Alliance to control access to resources based on identity. In an SDP, connectivity follows a need-to-know model, where both device posture and identity are verified before access to application infrastructure is granted. The application infrastructure in a software-defined perimeter is effectively "black"—a term used by the Department of Defense to describe an undetectable infrastructure—lacking visible DNS information or IP addresses. Proponents of these systems claim that an SDP mitigates many common network-based attacks, including server scanning, denial-of-service, SQL injection, operating system and application vulnerability exploits, man-in-the-middle attacks, pass-the-hash, pass-the-ticket, and other attacks by unauthorized users.
Background
Software-defined perimeter
An SDP is a security methodology that controls access to resources based on user identity and device posture. It follows a zero-trust model, verifying both factors before granting access to applications. This approach aims to make internal infrastructure invisible to the internet, reducing the attack surface for threats like denial-of-service (DoS) and server scanning (Ref. [1]).
Traditional vs. software-defined perimeter
Traditional network security relies on a fixed perimeter, typically protected by firewalls. While this isolates internal services, it becomes vulnerable with the rise of:
User-managed devices: These devices bypass traditional perimeter controls.
Phishing attacks: These attacks can give unauthorized users access within the perimeter.
Cloud adoption: Applications can be hosted anywhere, making perimeter control more complex.
SDPs address these issues by:
Making applications invisible: the public internet cannot directly see internal resources.
Enforcing access control: Only authorized users and devices can connect to applications.
SDP architecture and workflow
An SDP consists of two main components:
SDP Controllers: Manage access policies and communication between devices.
SDP Hosts: These can be initiating (requesting access) or accepting (providing access) applications.
The workflow involves the following steps (a toy sketch follows this list):
Deploying SDP controllers and connecting them to authentication services (e.g., Active Directory, multi-factor authentication).
Bringing online accepting SDP hosts, which authenticate with the controllers.
Initiating SDP hosts authenticating with the controllers.
Controllers determining authorized communication and creating secure connections between hosts.
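A toy model of this controller-brokered workflow (hypothetical Python sketch; the names, directory structure, and API are invented for illustration and correspond to no real SDP product):

```python
# Toy illustration only -- not a real SDP implementation.
# It mimics the workflow above: accepting hosts register with a controller,
# which verifies identity and device posture before brokering connections.
class SDPController:
    def __init__(self, directory):
        self.directory = directory      # identity -> set of allowed apps
        self.accepting = {}             # app name -> accepting host

    def register_accepting_host(self, app, host):
        self.accepting[app] = host      # step 2: accepting hosts come online

    def request_access(self, identity, device_ok, app):
        # Steps 3-4: verify identity AND device posture before brokering.
        if not device_ok or app not in self.directory.get(identity, set()):
            return None                 # unauthorized: the app stays "black"
        return self.accepting.get(app)  # controller brokers the connection

controller = SDPController(directory={"alice": {"payroll"}})
controller.register_accepting_host("payroll", host="10.0.0.5")
print(controller.request_access("alice", device_ok=True, app="payroll"))    # 10.0.0.5
print(controller.request_access("mallory", device_ok=True, app="payroll"))  # None
```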
SDP deployment models
There are several ways to deploy SDPs, each suited for specific scenarios:
Client-to-Gateway: Protects servers behind a gateway, mitigating lateral movement attacks within a network or on the internet.
Client-to-Server: Similar to client-to-gateway, but the protected server runs the SDP software directly.
Server-to-Server: Secures communication between servers offering APIs.
Client-to-Server-to-Client: Enables secure peer-to-peer connections for applications like video conferencing.
SDP applications
SDPs offer security benefits in various situations:
Enterprise application isolation: Protects sensitive applications from unauthorized access within the network.
Cloud security: Secures public, private, and hybrid cloud deployments.
Internet of Things (IoT): Protects back-end applications managing IoT devices.
Conclusion
Software-defined perimeters offer a dynamic approach to network security, aligning with zero-trust principles. They can enhance security for on-premise, cloud, and hybrid environments.
References
Cybersecurity engineering | Software-defined perimeter | [
"Technology",
"Engineering"
] | 707 | [
"Cybersecurity engineering",
"Computer networks engineering",
"Computer engineering"
] |
41,782,250 | https://en.wikipedia.org/wiki/Sisu%20Nemo | Sisu Nemo is a hydraulic radial piston motor type developed and initially produced by Suomen Autoteollisuus (SAT). The system was patented in 1961.
The motor produces high torque at low speed and has primarily been used to power both civil and military lorry trailers. A number of other applications have been developed for various industrial uses.
Development
The idea of the motor came from DI Ilmari Louhio, who worked at SAT as a design engineer. The operating principle was so simple that when Louhio presented his invention to the technical department and company management, he was not taken seriously; the answer was "if that worked, someone would have invented it long ago". However, Louhio didn't give up, and in 1959 a team was set up to develop the concept. The members were Louhio, Pentti Tarvainen, Antti Saarialho and Lasse Airola; in addition, dozens of people from different departments took part in the development work. A number of prototypes were built and put through intense testing before the design was ready for production. The first field test took place in autumn 1960, with a Fordson Major tractor pulling a hydraulically driven trailer. A patent was granted for the invention in January 1961, which was named Nemo, from nestemoottori ("fluid motor").
Operating principle
The motor contains five fixed radial hydraulic cylinders, whose pistons end in rollers that in turn press outward against a ring whose inner profile consists of eight cams. As the pistons expand in turn, the rollers force the ring to rotate, a full rotation occurring after eight strokes from each piston. The hub consists of a distribution valve that adjusts the flow in and out of each cylinder. A drum brake may be integrated with the motor and the combined unit fitted into a 20-inch rim.
Despite its small size, the system can deliver 8,000–10,000 newton metres of torque, so a separate planetary gear set is generally not required. Due to the large number (40) of piston movements per rotation, the torque is also smooth. If the pistons are all withdrawn into the hub, the rollers are not in contact with the cam ring, and the wheel can freewheel.
The system also contains a twin pump unit attached to the free end of the towing vehicle engine. One of the elements can be bypassed, which halves the transmission ratio.
Applications
Logging trucks
The first Nemo applications for vehicles to be used on roads came in 1963–1964. Nemo was successfully tested in logging vehicle trailers. In addition to improved off-road performance, the system had other benefits: a trailer with a driven front axle could be carried on the lorry platform during transit and set down when the place of loading was reached. When fully laden, the Nemo could be used to assist on steep uphill grades: if the driver had to change down, he could switch on the hydraulic drive, which was powered directly by the engine, and use the clutch and change gear without the risk of the vehicle coming to a halt. Nemo systems were available, for example, for Sisu KB-117, M-162 and K-142 logging trucks.
Military vehicles
In the 1960s the Finnish Defence Forces had field cannons. The number of suitable haulers was so small that in case of mobilisation, moving the cannons would have relied on civil lorries. The Nemo system was seen as a solution to improve the mobility.
In 1965 one of the new Tampella 122 K 60 cannons was equipped with a tandem axle driven by the Nemo transmission system. This was tested with single wheels at first, but later double wheels were used. The hauler was a Sisu KB-45 off-road lorry with a hydraulic system mounted on the front end and the oil container installed between the cabin and platform. This combination was tested in Santahamina during summer 1967, with the KB-45 hauling an SAT-produced trailer; both the lorry and the driven trailer contributed tractive force to the combination. The effect of the tyre size was not considered. Later the system was tested with a Sisu K-141 4×2 together with a field cannon 130 K 54, and with a Vanaja KK-69 ET 6×6 coupled to a three-axle carriage powered by a separate aggregate-run pump. The tests proved that even these trucks, not designed for off-road use, could haul heavy cannons in rough terrain using the Nemo system. During the trials it was observed that the trailers even pushed their haulers forward.
Equipping all the cannons and most of the haulers with Nemo was conceivable. In 1968 the price of a Nemo installation, divided in half between the hauler and the carriage, was high compared to that of a normal truck, but cheaper than an AT-S tracked cannon hauler. Nevertheless, only a small number of vehicles and cannons were ordered with the Nemo system: thirteen Nemo-compatible Sisu AH-45 lorries in 1970. At the same time the Nemo transmission was mounted on the howitzer 152 H 38 and the field cannons 130 K 54 and 122 K 60, the latter with a tandem axle. One artillery battalion was armed with Nemo-driven 130 K 54 cannons. In 1976 the Defence Forces took delivery of fourteen more Sisu AH-45s, and the next year followed a batch of thirteen more vehicles. Eventually, the Defence Forces had three artillery battalions motorised by Nemo-compatible Sisus.
The Nemo was further tested in an ammunition trailer prototype with a Nemo-driven rear axle; the brakes were built into the front axle. The motors were built in such a manner that the trailer had extra-high ground clearance. The prototype did not lead to series orders.
Industrial applications
A number of applications were designed for machinery and industrial use; these include rear-wheel drive of an articulated dumper truck, excavator transmission, mining train transmission, telescopic crane extractor motor, mobile portal crane transmission and a ferry wire winding motor.
Production
The first motors were produced in the Sisu axle factory, which was then located in Helsinki. Serial production began in 1963. In 1973 the company decided to transfer Nemo production to another organisation. SAT founded a separate company, Nesco Oy, jointly with Multilift (40%) and the investment fund Sponsor (20%). Production was moved to Iisalmi, where it shared premises with the Multilift demountable-skip factory. Production was sold to Partek in 1977. Later the Nemos were produced in the Valmet gear works, from which production moved on to Valmet Hydraulics Oy in Jyskä. Valmet later became part of Metso Corporation, and the company was renamed Metso Hydraulics in 2001. In 2003 Metso sold the unit to Sampo Rosenlew, and it was renamed once again, to Sampo Hydraulics. The motors are currently produced under the Black Bruin brand.
The Kelsey-Hayes Company produced the Nemo under licence in the US.
References
External links
Technical description about the Sisu Nemo system (US pat no 4 445 423); pdf
Sectional view on Sisu Nemo
Operating principle of the further developed Black Bruin motor
Nemo
Hydraulic actuators | Sisu Nemo | [
"Physics"
] | 1,497 | [
"Physical systems",
"Hydraulic actuators",
"Hydraulics"
] |
40,343,127 | https://en.wikipedia.org/wiki/Joan%20Hutchinson | Joan Prince Hutchinson (born 1945) is an American mathematician and Professor Emerita of Mathematics from Macalester College.
Education
Joan Hutchinson was born in Philadelphia, Pennsylvania; her father was a demographer and university professor, and her mother a mathematics teacher at the Baldwin School, which Joan also attended. She studied at Smith College in Northampton, Massachusetts, graduating in 1967 summa cum laude with an honors paper directed by Prof. Alice Dickinson.
After graduation she worked as a computer programmer at the Woods Hole Oceanographic Institution and at the Harvard University Computing Center, then studied mathematics (and English change ringing on tower bells) at the University of Warwick in Coventry, England. Returning to the United States, Hutchinson did graduate work at the University of Pennsylvania, earning a Ph.D. in mathematics in 1973 under the supervision of Herbert S. Wilf.
Career
She was a John Wesley Young research instructor at Dartmouth College, 1973–1975.
She and her husband, fellow mathematician Stan Wagon, taught at Smith College, 1975–1990, and at Macalester College, 1990–2007. At both colleges they shared a full-time position in mathematics. She spent sabbaticals, taught, and held visiting positions at Tufts University, Carleton College, University of Colorado Boulder, University of Washington, University of Michigan, Mathematical Sciences Research Institute in Berkeley, California, and University of Colorado Denver.
She has served on committees of the American Mathematical Society, the Mathematical Association of America (MAA), SIAM Special Interest Group on Discrete Math (SIAM-DM), and the Association for Women in Mathematics, involved with the latter organization since a graduate student during its founding days in 1971. Mentoring women students and younger colleagues has been an important concern of her professional life. She served as the vice-chair of SIAM-DM, 2000–2002. She was a member of the editorial board of the American Mathematical Monthly, 1986–1996, and continues on the board of the Journal of Graph Theory since 1993.
Research
Her research has focused on graph theory and discrete mathematics, specializing mainly in topological and chromatic graph theory and on visibility graphs.
She has published over 75 research and expository papers in graph theory, many with Michael O. Albertson, formerly of Smith College.
In one of their most cited works, Albertson and Hutchinson completed work of Gabriel Andrew Dirac related to the Heawood conjecture by proving that, on any surface other than the sphere or Klein bottle, the only graphs meeting Heawood's bound on the chromatic number of surface-embedded graphs are the complete graphs.
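For background (a standard statement of the bound, not spelled out in the article): on a surface of Euler genus $g > 0$, Heawood's bound is

$$\chi \;\le\; \left\lfloor \frac{7 + \sqrt{1 + 24g}}{2} \right\rfloor,$$

and the Albertson–Hutchinson theorem says that, except on the Klein bottle, the only graphs attaining this bound are the corresponding complete graphs.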
She has also considered algorithmic aspects in these areas, for example, generalizing the planar separator theorem to surfaces.
With S. Wagon she has co-authored papers on algorithmic aspects of the four color theorem.
Albertson and Hutchinson also wrote together the textbook Discrete Mathematics with Algorithms.
Awards and honors
In 1994 she received the Carl B. Allendoerfer Award of the Mathematical Association of America for the expository article on the Earth–Moon problem in Mathematics Magazine.
The work of this paper was also included in an issue of What’s Happening in the Mathematical Sciences and in the Mathematical Recreations column of Scientific American.
In 1998 she was a winner of the MAA North Central Section Teaching Award,
and in 1999 she was a winner of the Deborah and Franklin Haimo Award for Distinguished College or University Teaching of Mathematics.
On the occasion of her 60th birthday, she was the honoree at the Graph Theory with Altitude conference at the University of Colorado Denver, organized by her former student Ellen Gethner, professor of computer science.
Selected publications
References
1945 births
20th-century American mathematicians
21st-century American mathematicians
Graph theorists
Smith College alumni
University of Pennsylvania alumni
Dartmouth College faculty
Tufts University faculty
Carleton College faculty
University of Colorado Boulder faculty
Smith College faculty
Macalester College faculty
Living people
Mathematicians from Pennsylvania
20th-century American women mathematicians
21st-century American women mathematicians
University of Colorado Denver faculty | Joan Hutchinson | [
"Mathematics"
] | 807 | [
"Mathematical relations",
"Graph theory",
"Graph theorists"
] |
40,347,348 | https://en.wikipedia.org/wiki/C28H32O8 | The molecular formula C28H32O8 (molar mass: 496.55 g/mol, exact mass: 496.2097 u) may refer to:
Arisugacin A
Bisvertinolone
Trichodimerol
Molecular formulas | C28H32O8 | [
"Physics",
"Chemistry"
] | 69 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
40,347,944 | https://en.wikipedia.org/wiki/Vineridine | Vineridine (vineridin) is a vinca alkaloid.
References
Vinca alkaloids
Tryptamine alkaloids
Indolizidines | Vineridine | [
"Chemistry"
] | 35 | [
"Tryptamine alkaloids",
"Alkaloids by chemical classification"
] |
40,351,046 | https://en.wikipedia.org/wiki/Clinical%20physiology | Clinical physiology is an academic discipline within the medical sciences and a clinical medical specialty for physicians in the health care systems of Sweden, Denmark, Portugal and Finland. Clinical physiology is characterized as a branch of physiology that uses a functional approach to understand the pathophysiology of a disease.
Overview
As a specialty for medical doctors, clinical physiology is a diagnostic specialty in which patients are subjected to specialized tests for the functions of the heart, blood vessels, lungs, kidneys and gastrointestinal tract, and other organs. Testing methods include evaluation of electrical activity (e.g. electrocardiogram of the heart), blood pressure (e.g. ankle brachial pressure index), and air flow (e.g. pulmonary function testing using spirometry). In addition, Clinical Physiologists measure movements, velocities, and metabolic processes through imaging techniques such as ultrasound, echocardiography, magnetic resonance imaging (MRI), x-ray computed tomography (CT), and nuclear medicine scanners (e.g. single photon emission computed tomography (SPECT) and positron emission tomography (PET) with and without CT or MRI).
History
The field of clinical physiology was originally founded by Professor Torgny Sjöstrand in Sweden, and it continues to make its way around the world in other hospitals and academic environments. Sjöstrand was the first to establish departments for clinical physiology separate from those of physiology, during his work at the Karolinska Hospital in Stockholm. Along with Sjöstrand, another influential name in clinical physiology was P. K. Anokhin. Anokhin contributed heavily to this branch of physiology, working diligently to apply his theory of functional systems to solving medical mysteries among his patients.
In Sweden, clinical physiology was originally a discipline on its own, however, between 2008 and 2015, clinical physiology was categorized as a sub-discipline of radiology. For this reason, those pursuing a career in clinical physiology had to first become registered and certified radiologists before becoming clinical physiologists. Since 2015, clinical physiology has been a separate discipline, independent of radiology.
Role
Human physiology is the study of bodily functions. Clinical physiology examinations typically involve assessments of such functions, as opposed to assessments of structures and anatomy. The specialty encompasses the development of new physiological tests for medical diagnostics. Using equipment to measure, monitor and record patients' physiology proves very helpful in many hospitals, and it helps doctors diagnose patients correctly. Some clinical physiology departments perform tests from related medical specialties, including nuclear medicine, clinical neurophysiology, and radiology. In the health care systems of countries that lack this specialty, the tests performed in clinical physiology are often performed by the various organ-specific specialties in internal medicine, such as cardiology, pulmonology, nephrology, and others.
In Australia, the United Kingdom, and many other Commonwealth and European countries, clinical physiology is not a medical specialty for physicians. It is instead a non-medical allied health profession; its practitioners (scientists, physiologists, or technologists) may practice as cardiac scientists, vascular scientists, respiratory scientists, sleep scientists, or in Ophthalmic and Vision Science as Ophthalmic Science Practitioners (UK). These professionals also aid in the diagnosis of disease and manage patients, with an emphasis on understanding physiological and pathophysiological pathways. Disciplines within the clinical physiology field include audiologists, cardiac physiologists, gastro-intestinal physiologists, neurophysiologists, respiratory physiologists, and sleep physiologists.
References
External links
Scandinavian Society of Clinical Physiology and Nuclear Medicine (SSCPNM) http://www.sscpnm.com/
The official journal of the SSCPNM: Clinical Physiology and Functional Imaging http://onlinelibrary.wiley.com/journal/10.1111/(ISSN)1475-097X
Physiology
Academic disciplines
Medical specialties | Clinical physiology | [
"Biology"
] | 838 | [
"Physiology"
] |
40,351,757 | https://en.wikipedia.org/wiki/BREACH | BREACH (a backronym: Browser Reconnaissance and Exfiltration via Adaptive Compression of Hypertext) is a security vulnerability against HTTPS when using HTTP compression. BREACH is built based on the CRIME security exploit. BREACH was announced at the August 2013 Black Hat conference by security researchers Angelo Prado, Neal Harris and Yoel Gluck. The idea had been discussed in the community before the announcement.
Details
While the CRIME attack was presented as a general attack that could work effectively against a large number of protocols, only exploits against SPDY request compression and TLS compression were demonstrated, and these were largely mitigated in browsers and servers. The CRIME exploit against HTTP compression has not been mitigated at all, even though the authors of CRIME warned that this vulnerability might be even more widespread than SPDY and TLS compression combined.
BREACH is an instance of the CRIME attack against HTTP compression—the use of gzip or DEFLATE data compression algorithms via the content-encoding option within HTTP by many web browsers and servers. Given this compression oracle, the rest of the BREACH attack follows the same general lines as the CRIME exploit, by performing an initial blind brute-force search to guess a few bytes, followed by divide-and-conquer search to expand a correct guess to an arbitrarily large amount of content.
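The compression-oracle idea can be illustrated in a few lines (a hypothetical sketch assuming Python's zlib; a real BREACH attack instead measures TLS ciphertext lengths on the wire, and the secret and page structure here are invented):

```python
# Minimal demonstration of the compression-oracle idea behind BREACH.
# When attacker-controlled input is compressed together with a secret,
# guesses that match a prefix of the secret compress slightly better.
import zlib

SECRET = b"csrf_token=s3cr3tValue"  # hypothetical secret in the response body

def oracle(attacker_input: bytes) -> int:
    """Length of the compressed response containing both secret and input."""
    body = b"<html>" + SECRET + b"..." + attacker_input + b"</html>"
    return len(zlib.compress(body))

# Guess the character following a known prefix: the correct guess tends to
# yield the shortest compressed length, since DEFLATE finds a longer match.
prefix = b"csrf_token=s3cr3tValu"
best = min((oracle(prefix + bytes([c])), bytes([c])) for c in range(32, 127))
print(best[1])  # b'e' -- typically the true next byte
```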
Mitigation
BREACH exploits the compression in the underlying HTTP protocol. Therefore, turning off TLS compression makes no difference to BREACH, which can still perform a chosen-plaintext attack against the HTTP payload.
As a result, clients and servers are either forced to disable HTTP compression completely (thus reducing performance), or to adopt workarounds to try to foil BREACH in individual attack scenarios, such as using cross-site request forgery (CSRF) protection.
Another suggested approach is to disable HTTP compression whenever the referrer header indicates a cross-site request, or when the header is not present. This approach allows effective mitigation of the attack without losing functionality, only incurring a performance penalty on affected requests.
Another approach is to add padding at the TLS, HTTP header, or payload level. Around 2013–2014, there was an IETF draft proposal for a TLS extension for length-hiding padding that, in theory, could be used as a mitigation against this attack. It allows the actual length of the TLS payload to be disguised by the insertion of padding to round it up to a fixed set of lengths, or to randomize the external length, thereby decreasing the likelihood of detecting small changes in compression ratio that is the basis for the BREACH attack. However, this draft has since expired without further action.
A very effective mitigation is HTB (Heal-the-BREACH) that adds random-sized padding to compressed data, providing some variance in the size of the output contents. This randomness delays BREACH from guessing the correct characters in the secret token by a factor of 500 (10-byte max) to 500,000 (100-byte max). HTB protects all websites and pages in the server with minimal CPU usage and minimal bandwidth increase.
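A sketch of the random-padding idea behind HTB (assuming Python; placing the padding inside an HTML comment and the size limits are illustrative assumptions, not the actual HTB implementation):

```python
# Sketch of HTB-style mitigation: random-length padding adds noise to the
# compressed size, masking the one-byte differences an attacker probes for.
import os, zlib

MAX_PAD = 100  # HTB's larger setting: up to 100 random bytes

def compress_with_htb(body: bytes) -> bytes:
    pad_len = int.from_bytes(os.urandom(1), "big") % (MAX_PAD + 1)
    # Random bytes are essentially incompressible, so each one perturbs
    # the output length; the receiver can discard the comment.
    return zlib.compress(body + b"<!--" + os.urandom(pad_len) + b"-->")

# The same body now yields varying ciphertext lengths across responses:
print({len(compress_with_htb(b"csrf_token=s3cr3tValue")) for _ in range(5)})
```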
References
External links
Official BREACH website
Tool that runs the BREACH attack demonstrated at BlackHat 2013
HEIST, a related compression-based attack on the body of the response demonstrated at BlackHat 2016
Web security exploits
Cryptography
Data compression
Chosen-plaintext attacks
Transport Layer Security | BREACH | [
"Mathematics",
"Technology",
"Engineering"
] | 688 | [
"Cybersecurity engineering",
"Cryptography",
"Applied mathematics",
"Computer security exploits",
"Web security exploits"
] |
40,354,380 | https://en.wikipedia.org/wiki/Concentrated%20solar%20still | A concentrated solar still is a system that uses the same quantity of solar heat input (same solar collection area) as a simple solar still but can produce a volume of freshwater that is many times greater. While a simple solar still is a way of distilling water by using the heat of the sun to drive evaporation from a water source and ambient air to cool a condenser film, a concentrated solar still uses a concentrated solar thermal collector to concentrate solar heat and deliver it to a multi-effect evaporation process for distillation, thus increasing the natural rate of evaporation. The concentrated solar still is capable of large-scale water production in areas with plentiful solar energy.
Performance
The concentrated solar still can produce as much as twenty times more water than the theoretical maximum of a standard solar still; in practice, it can produce as much as 30 times that volume.
Since the latent heat of vaporization of water is 2.26 MJ per kilogram, a standard solar still of typical 25% efficiency (not allowing for any recovery of rejected latent heat) should evaporate about 2.4 kg (or liters) of water per m2 per day in a region with an average daily solar irradiation of 21.6 MJ/m2 (250 watts/m2), or roughly 870 liters per year (like a precipitation height of about 0.87 m). A twenty times more productive still would have a daily output of about 48 liters per m2, or roughly 17,400 liters yearly.
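As a worked check, the daily figure follows directly from the stated efficiency, irradiation, and latent heat:

$$m \;=\; \frac{0.25 \times 21.6\ \mathrm{MJ/m^2}}{2.26\ \mathrm{MJ/kg}} \;\approx\; 2.4\ \mathrm{kg/m^2}\ \text{per day}.$$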
Heat integration
Multiple stage evaporation
The concentrated solar still implements a method for recovering the latent heat of the distillate vapor not captured and reused by a standard solar still. This is done by using multiple stages of evaporation in series (see multiple-effect evaporator). The latent heat of the distillate vapor produced in the n-1 stage (or effect) is recovered in the nth stage by boiling the leftover concentrated brine from the n-1 stage which produces distillate vapor whose latent heat will be recovered in the n+1 stage by boiling the leftover concentrated brine from the nth stage. Since brine is continuously concentrated in each stage, its boiling point will continue to rise under standard conditions. To overcome the boiling point elevation of the brine, each evaporator stage operates at a lower pressure than the previous stage, which effectively reduces the boiling point, allowing for sufficient heat transfer to take place in each stage. This process can be repeated until the distillate conditions are sufficiently degraded (i.e., pressure and temperature are very low and the distillate vapor volume is very large).
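To make the cascade concrete, here is a toy energy-balance sketch (assuming Python; the per-stage heat-recovery fraction is an invented illustrative value, and real multiple-effect designs differ). Each stage evaporates $q/L$ kilograms and passes a fraction $r$ of the released latent heat to the next, lower-pressure stage, for an overall gain of $(1-r^N)/(1-r)$ over a single effect:

```python
# Toy cascade (illustrative values only): each effect evaporates q/L kg and
# passes a fraction RECOVERY of the released latent heat to the next stage.
L_VAP = 2.26e6   # latent heat of vaporization of water, J/kg
Q_IN = 5.4e6     # heat into the first effect (25% of 21.6 MJ), J per m^2 per day
RECOVERY = 0.90  # assumed per-stage latent-heat recovery fraction

def daily_distillate(n_stages: int) -> float:
    total, q = 0.0, Q_IN
    for _ in range(n_stages):
        total += q / L_VAP   # kg of water evaporated in this stage
        q *= RECOVERY        # heat cascaded to the next, lower-pressure stage
    return total

for n in (1, 5, 10, 20):
    print(n, round(daily_distillate(n), 1))  # 1 -> ~2.4, 20 -> ~21.0 kg/m^2/day
```

With these assumptions the 20-stage output is roughly nine times the single-effect output; higher recovery fractions or more stages push the multiplier toward the factors quoted earlier in the article.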
Heat pump
The final evaporation stage produces distillate vapor that is considered to be at very poor state conditions. This vapor can either be condensed in a final condenser, in which case its latent heat will be shed as waste, or it can be condensed by using a heat pump, in which case its latent heat (or a portion of it) can be recovered. In the latter case, the heat pump effectively "upgrades" the state conditions of the latent heat to more usable conditions (higher temperature and pressure) by performing work (e.g., compression). The conditions can be sufficiently upgraded such that the recovered heat can be used to provide additional heat for evaporation in the first effect.
References
Solar power
Water treatment
Water technology | Concentrated solar still | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 690 | [
"Water treatment",
"Water pollution",
"Water technology",
"Environmental engineering"
] |
21,929,924 | https://en.wikipedia.org/wiki/Upper%20critical%20solution%20temperature | The upper critical solution temperature (UCST) or upper consolute temperature is the critical temperature above which the components of a mixture are miscible in all proportions. The word upper indicates that the UCST is an upper bound to a temperature range of partial miscibility, or miscibility for certain compositions only. For example, hexane–nitrobenzene mixtures have a UCST, so that these two substances are miscible in all proportions above that temperature but not at lower temperatures. Examples at higher temperatures are the aniline–water system (at pressures high enough for liquid water to exist at the relevant temperature) and the lead–zinc system (at a temperature where both metals are liquid).
A solid state example is the palladium-hydrogen system which has a solid solution phase (H2 in Pd) in equilibrium with a hydride phase (PdHn) below the UCST at 300 °C. Above this temperature there is a single solid solution phase.
In the phase diagram of the mixture components, the UCST is the shared maximum of the concave down spinodal and binodal (or coexistence) curves. The UCST is in general dependent on pressure.
The phase separation at the UCST is in general driven by unfavorable energetics; in particular, interactions between components favor a partially demixed state.
Polymer-solvent mixtures
Some polymer solutions also have a lower critical solution temperature (LCST) or lower bound to a temperature range of partial miscibility. As shown in the diagram, for polymer solutions the LCST is higher than the UCST, so that there is a temperature interval of complete miscibility, with partial miscibility at both higher and lower temperatures.
The UCST and LCST of polymer mixtures generally depend on polymer degree of polymerization and polydispersity.
The seminal statistical mechanical model for the UCST of polymers is the Flory–Huggins solution theory.
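As an illustration of that theory's prediction for how the UCST depends on chain length, a sketch (assuming Python; the Flory–Huggins critical-point formulas are standard textbook results for a polymer in a monomeric solvent, while the parameters A and B of the interaction parameter are invented illustrative values):

```python
# Standard Flory-Huggins results: for chain length N, the critical
# interaction parameter is chi_c = (1 + 1/sqrt(N))**2 / 2.  With the common
# empirical form chi(T) = A + B/T (B > 0), the UCST is where chi(T) = chi_c.
from math import sqrt

def chi_critical(N: float) -> float:
    return 0.5 * (1 + 1 / sqrt(N)) ** 2

def ucst(N: float, A: float = 0.2, B: float = 150.0) -> float:
    """Temperature (K) where chi(T) = A + B/T crosses chi_c (made-up A, B)."""
    return B / (chi_critical(N) - A)

for N in (1, 100, 10000):
    print(N, round(chi_critical(N), 3), round(ucst(N), 1))
# chi_c decreases toward 1/2 as N grows, so the UCST rises with chain length.
```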
Adding soluble impurities raises the upper critical solution temperature and lowers the lower critical solution temperature.
See also
References
Critical phenomena | Upper critical solution temperature | [
"Physics",
"Materials_science",
"Mathematics"
] | 429 | [
"Physical phenomena",
"Critical phenomena",
"Condensed matter physics",
"Statistical mechanics",
"Dynamical systems"
] |
21,932,806 | https://en.wikipedia.org/wiki/Bacterial%20initiation%20factor%201 | Bacterial initiation factor 1 is a bacterial initiation factor.
IF1 associates with the 30S ribosomal subunit in the A site and prevents an aminoacyl-tRNA from entering. It modulates IF2 binding to the ribosome by increasing its affinity. It may also prevent the 50S subunit from binding, stopping the formation of the 70S subunit. It also contains a β-domain fold common for nucleic acid binding proteins.
IF1–IF3 may also perform ribosome recycling.
References
Protein biosynthesis
Gene expression | Bacterial initiation factor 1 | [
"Chemistry",
"Biology"
] | 110 | [
"Protein biosynthesis",
"Gene expression",
"Molecular genetics",
"Biosynthesis",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
21,932,809 | https://en.wikipedia.org/wiki/Bacterial%20initiation%20factor%202 | Bacterial initiation factor-2 is a bacterial initiation factor.
IF2 binds to an initiator tRNA and controls the entry of tRNA onto the ribosome. IF2, bound to GTP, binds to the 30S P site. After associating with the 30S subunit, fMet-tRNAf binds to IF2, and IF2 then transfers the tRNA into the partial P site. When the 50S subunit joins, IF2 hydrolyzes GTP to GDP and Pi; the resulting conformational change causes IF2 to release, allowing the 70S ribosome to form.
Human mitochondria use a nuclear-encoded homolog, MTIF2, for translation initiation.
References
Protein biosynthesis
Gene expression | Bacterial initiation factor 2 | [
"Chemistry",
"Biology"
] | 159 | [
"Protein biosynthesis",
"Gene expression",
"Molecular genetics",
"Biosynthesis",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
37,585,910 | https://en.wikipedia.org/wiki/X-ray%20detector | X-ray detectors are devices used to measure the flux, spatial distribution, spectrum, and/or other properties of X-rays.
Detectors can be divided into two major categories: imaging detectors (such as photographic plates and X-ray film, now mostly replaced by various digitizing devices like image plates or flat panel detectors) and dose measurement devices (such as ionization chambers, Geiger counters, and dosimeters, which are used to measure the local radiation exposure, dose, and/or dose rate, for example to verify that radiation protection equipment and procedures remain effective on an ongoing basis).
X-ray imaging
To obtain an image with any type of image detector the part of the patient to be X-rayed is placed between the X-ray source and the image receptor to produce a shadow of the internal structure of that particular part of the body. X-rays are partially blocked ("attenuated") by dense tissues such as bone, and pass more easily through soft tissues. Areas where the X-rays strike darken when developed, causing bones to appear lighter than the surrounding soft tissue.
Contrast compounds containing barium or iodine, which are radiopaque, can be ingested in the gastrointestinal tract (barium) or injected into arteries or veins to highlight these vessels. The contrast compounds contain high-atomic-number elements that (like bone) essentially block the X-rays, so a hollow organ or vessel filled with them can be seen more readily. In the pursuit of nontoxic contrast materials, many types of high-atomic-number elements were evaluated. Some elements chosen proved to be harmful – for example, thorium was once used as a contrast medium (Thorotrast), which turned out to be toxic, causing a very high incidence of cancer decades after use. Modern contrast material has improved and, while there is no way to determine who may have a sensitivity to the contrast, the incidence of serious allergic reactions is low.
X-ray film
Mechanism
Typical x-ray film contains silver halide crystal "grains", typically primarily silver bromide. Grain size and composition can be adjusted to affect the film properties, for example to improve resolution in the developed image. When the film is exposed to radiation the halide is ionised and free electrons are trapped in crystal defects (forming a latent image). Silver ions are attracted to these defects and reduced, creating clusters of transparent silver atoms. In the developing process these are converted to opaque silver atoms which form the viewable image, darkest where the most radiation was detected. Further developing steps stabilise the sensitised grains and remove unsensitised grains to prevent further exposure (e.g. from visible light).
Replacement
The first radiographs (X-ray images) were made by the action of X-rays on sensitized glass photographic plates. X-ray film (photographic film) soon replaced the glass plates, and film was used for decades to acquire (and display) medical and industrial images. Gradually, digital computers gained the ability to store and display enough data to make digital imaging possible. Since the 1990s, computerized radiography and digital radiography have been replacing photographic film in medical and dental applications, though film technology remains in widespread use in industrial radiography processes (e.g. to inspect welded seams). The metal silver (formerly necessary to the radiographic and photographic industries) is a non-renewable resource, although silver can easily be reclaimed from spent X-ray film. Whereas X-ray films required wet processing facilities, newer digital technologies do not. Digital archiving of images also saves physical storage space.
Photostimulable phosphors
Phosphor plate radiography is a method of recording X-rays using photostimulated luminescence (PSL), pioneered by Fuji in the 1980s. A photostimulable phosphor plate (PSP) is used in place of the photographic plate. After the plate is X-rayed, excited electrons in the phosphor material remain 'trapped' in 'colour centres' in the crystal lattice until stimulated by a laser beam passed over the plate surface. The light given off during laser stimulation is collected by a photomultiplier tube, and the resulting signal is converted into a digital image by computer technology. The PSP plate can be reused, and existing X-ray equipment requires no modification to use them. The technique may also be known as computed radiography (CR).
Image intensifiers
X-rays are also used in "real-time" procedures such as angiography or contrast studies of the hollow organs (e.g. barium enema of the small or large intestine) using fluoroscopy. Angioplasty, a medical intervention of the arterial system, relies heavily on X-ray-sensitive contrast to identify potentially treatable lesions.
Semiconductor detectors
Solid state detectors use semiconductors to detect x-rays. Direct digital detectors are so-called because they directly convert x-ray photons to electrical charge and thus a digital image. Indirect systems may have intervening steps for example first converting x-ray photons to visible light, and then an electronic signal. Both systems typically use thin film transistors to read out and convert the electronic signal to a digital image. Unlike film or CR no manual scanning or development step is required to obtain a digital image, and so in this sense both systems are "direct". Both types of system have considerably higher quantum efficiency than CR.
Direct detectors
Since the 1970s, silicon or germanium doped with lithium (Si(Li) or Ge(Li)) semiconductor detectors have been developed. X-ray photons are converted to electron-hole pairs in the semiconductor and are collected to detect the X-rays. When the temperature is low enough (the detector is cooled by Peltier effect or even cooler liquid nitrogen), it is possible to directly determine the X-ray energy spectrum; this method is called energy-dispersive X-ray spectroscopy (EDX or EDS); it is often used in small X-ray fluorescence spectrometers. Silicon drift detectors (SDDs), produced by conventional semiconductor fabrication, provide a cost-effective and high resolving power radiation measurement. Unlike conventional X-ray detectors, such as Si(Li), they do not need to be cooled with liquid nitrogen. These detectors are rarely used for imaging and are only efficient at low energies.
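A back-of-envelope sketch of the photon-to-charge conversion described above (assuming Python; the pair-creation energies are standard approximate values, not figures from this article):

```python
# Mean number of electron-hole pairs created by one absorbed X-ray photon
# is E_photon / epsilon, where epsilon is the pair-creation energy.
EPSILON_EV = {"Si": 3.6, "Ge": 2.9, "CdTe": 4.43}  # eV per pair (approximate)

def n_pairs(photon_kev: float, material: str) -> int:
    return int(photon_kev * 1000 / EPSILON_EV[material])

for mat in EPSILON_EV:
    print(mat, n_pairs(59.5, mat))  # e.g. the 59.5 keV Am-241 line
```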
Practical application in medical imaging started in the early 2000s. Amorphous selenium is used in commercial large-area flat panel X-ray detectors for mammography and general radiography due to its high spatial resolution and X-ray absorbing properties. However, selenium's low atomic number means a thick layer is required to achieve sufficient sensitivity.
Cadmium telluride (CdTe), and its alloy with zinc, cadmium zinc telluride, is considered one of the most promising semiconductor materials for X-ray detection due to its wide band-gap and high atomic number, resulting in room-temperature operation with high efficiency. Current applications include bone densitometry and SPECT, but flat panel detectors suitable for radiographic imaging are not yet in production. Current research and development is focused around energy-resolving pixel detectors, such as CERN's Medipix detector and the Science and Technology Facilities Council's HEXITEC detector.
Common semiconductor diodes, such as PIN photodiodes or a 1N4007, will produce a small amount of current in photovoltaic mode when placed in an X-ray beam.
Indirect detectors
Indirect detectors are made up of a scintillator to convert x-rays to visible light, which is read by a TFT array. This can provide sensitivity advantages over current (amorphous selenium) direct detectors, albeit with a potential trade-off in resolution. Indirect flat panel detectors (FPDs) are in widespread use today in medical, dental, veterinary, and industrial applications.
The TFT array consists of a sheet of glass covered with a thin layer of silicon that is in an amorphous or disordered state. At a microscopic scale, the silicon has been imprinted with millions of transistors arranged in a highly ordered array, like the grid on a sheet of graph paper. Each of these thin-film transistors (TFTs) is attached to a light-absorbing photodiode making up an individual pixel (picture element). Photons striking the photodiode are converted into two carriers of electrical charge, called electron-hole pairs. Since the number of charge carriers produced will vary with the intensity of incoming light photons, an electrical pattern is created that can be swiftly converted to a voltage and then a digital signal, which is interpreted by a computer to produce a digital image. Although silicon has outstanding electronic properties, it is not a particularly good absorber of X-ray photons. For this reason, X-rays first impinge upon scintillators made from such materials as gadolinium oxysulfide or caesium iodide. The scintillator absorbs the X-rays and converts them into visible light photons that then pass onto the photodiode array.
Dose measurement
Gas detectors
X-rays going through a gas will ionize it, producing positive ions and free electrons. An incoming photon will create a number of such ion pairs proportional to its energy. If there is an electric field in the gas chamber ions and electrons will move in different directions and thereby cause a detectable current. The behaviour of the gas will depend on the applied voltage and the geometry of the chamber. This gives rise to a few different types of gas detectors described below.
Ionization chambers use a relatively low electric field of about 100 V/cm to extract all ions and electrons before they recombine. This gives a steady current proportional to the dose rate the gas is exposed to. Ion chambers are widely used as hand held radiation survey meters to check radiation dose levels.
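As a rough order-of-magnitude illustration of the proportionality between deposited energy and collected charge, a sketch (assuming Python; the photon flux is an invented illustrative number, and W ≈ 34 eV per ion pair in air is a standard approximate constant):

```python
# Current from an ionization chamber: each absorbed photon of energy E
# creates about E / W ion pairs, each contributing one elementary charge.
E_CHARGE = 1.602e-19   # elementary charge, C
W_AIR_EV = 34.0        # mean energy per ion pair in air, eV (approximate)

def chamber_current(photon_kev: float, photons_per_second: float) -> float:
    pairs_per_photon = photon_kev * 1000 / W_AIR_EV
    return photons_per_second * pairs_per_photon * E_CHARGE  # amperes

# e.g. 1e8 fully absorbed 60 keV photons per second:
print(chamber_current(60.0, 1e8))  # ~2.8e-8 A, i.e. tens of nanoamperes
```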
Proportional counters use a geometry with a thin positively charged anode wire in the center of a cylindrical chamber. Most of the gas volume acts as an ionization chamber, but in the region closest to the wire the electric field is high enough to make the electrons ionize gas molecules. This creates an avalanche effect, greatly increasing the output signal. Since every electron causes an avalanche of approximately the same size, the collected charge is proportional to the number of ion pairs created by the absorbed X-ray. This makes it possible to measure the energy of each incoming photon.
Geiger–Müller counters use an even higher electric field so that UV-photons are created. These start new avalanches, eventually resulting in a total ionization of the gas around the anode wire. This makes the signal very strong, but causes a dead time after each event and makes it impossible to measure the X-ray energies.
Gas detectors are usually single pixel detectors measuring only the average dose rate over the gas volume or the number of interacting photons as explained above, but they can be made spatially resolving by having many crossed wires in a wire chamber.
Silicon PN solar cells
It was demonstrated in the 1960s that silicon PN solar cells are suitable for detection of all forms of ionizing radiation including extreme UV, soft X-rays, and hard X-rays. This form of detection operates via photoionization, a process where ionizing radiation strikes an atom and releases a free electron. This type of broadband ionizing radiation sensor requires a solar cell, an ammeter, and a visible light filter on top of the solar cell that allows the ionizing radiation to hit the solar cell while blocking unwanted wavelengths.
Radiochromic film
Self-developing radiochromic film can provide very high resolution measurements, for dosimetry and profiling purposes, particularly in radiotherapy physics.
References
Radiography
X-ray instrumentation
Ionising radiation detectors
Medical imaging
Detectors
X-rays | X-ray detector | [
"Physics",
"Technology",
"Engineering"
] | 2,439 | [
"Radioactive contamination",
"X-rays",
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Measuring instruments",
"X-ray instrumentation",
"Ionising radiation detectors"
] |
37,587,527 | https://en.wikipedia.org/wiki/Colocalization%20Benchmark%20Source | The Colocalization Benchmark Source (CBS) is a free collection of downloadable images for testing and validating the degree of colocalization of markers in fluorescence microscopy studies. Colocalization is a visual phenomenon observed when two molecules of interest are associated with the same structures in the cell and potentially share common functional characteristics.
CBS provides researchers with reference tools to verify the results of quantitative colocalization measurements. It serves as a specialised bioimage informatics database of computer-simulated images with exactly known (pre-defined) values of colocalization, created using an image simulation algorithm. These benchmark images can be downloaded as sets as well as separately. By calculating and comparing the values of coefficients on their own images versus the benchmark images, researchers can validate the results of quantitative colocalization studies. The use of CBS images has been described in a number of studies.
Examples
Researchers can submit examples of custom images for which the benchmark images were used to validate colocalization. Submitted images are then posted on the CBS site together with a description of their properties and the values of coefficients such as Pearson's correlation coefficient (Rr), the overlap coefficient (R), and others. The template for submitting information about custom images can be downloaded from the CBS site.
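As an illustration of how such coefficients are computed, the sketch below evaluates Pearson's correlation coefficient (Rr) and the overlap coefficient (R) for a pair of single-channel images using their standard definitions; the random arrays stand in for benchmark or experimental image data.

```python
# Sketch: two common colocalization measures for a pair of
# single-channel images (e.g., red and green channels), as might be
# computed when comparing one's own software against benchmark values.
import numpy as np

def pearson_rr(ch1: np.ndarray, ch2: np.ndarray) -> float:
    """Pearson's correlation coefficient (Rr) over all pixels."""
    d1 = ch1.astype(float) - ch1.mean()
    d2 = ch2.astype(float) - ch2.mean()
    return float((d1 * d2).sum() / np.sqrt((d1**2).sum() * (d2**2).sum()))

def overlap_r(ch1: np.ndarray, ch2: np.ndarray) -> float:
    """Overlap coefficient (R), insensitive to the channel means."""
    f1 = ch1.astype(float)
    f2 = ch2.astype(float)
    return float((f1 * f2).sum() / np.sqrt((f1**2).sum() * (f2**2).sum()))

# Example with random images; benchmark images would be loaded instead.
rng = np.random.default_rng(0)
red = rng.integers(0, 256, (64, 64))
green = rng.integers(0, 256, (64, 64))
print(pearson_rr(red, green), overlap_r(red, green))
```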
See also
Colocalization
Fluorescence microscopy
Bioimage informatics
Biological database
References
External links
Colocalization Benchmark Source home page
Biological databases
Microscopy
Fluorescence
Medical imaging | Colocalization Benchmark Source | [
"Chemistry",
"Biology"
] | 293 | [
"Luminescence",
"Fluorescence",
"Bioinformatics",
"Microscopy",
"Biological databases"
] |
37,587,539 | https://en.wikipedia.org/wiki/Langmuir%20turbulence | In fluid dynamics and oceanography, Langmuir turbulence is a turbulent flow with coherent Langmuir circulation structures that exist and evolve over a range of spatial and temporal scales. These structures arise through an interaction between the ocean surface waves and the currents.
In the upper ocean, Langmuir circulations are a special case in which the turbulent structures exhibit a dominant cell size. In general, Langmuir turbulence is expected to be a global ocean phenomenon, not confined to gentle wind conditions or shallow waterways (as with most observations of Langmuir circulation).
An important consequence of Langmuir turbulence is the formation of deeply penetrating jets. These features occur between counter-rotating Langmuir circulations and can inject turbulent kinetic energy to depths well below the depth scale of the surface waves (the Stokes drift depth scale). Langmuir turbulence could have an important impact on our understanding of climate; in particular, it could affect the global ocean's sea surface temperature, as the deeply penetrating Langmuir jets modify the depth of the ocean mixed layer.
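A minimal sketch of the depth scale involved, assuming a deep-water monochromatic wave: the Stokes drift decays as exp(2kz), giving an e-folding depth of 1/(2k), and the ratio of friction velocity to surface Stokes drift yields the turbulent Langmuir number often used to characterize this regime. The amplitude, wavelength, and friction velocity below are illustrative values.

```python
# Sketch: Stokes drift profile, its e-folding depth scale, and the
# turbulent Langmuir number La_t = sqrt(u*/u_s(0)) for a deep-water
# monochromatic wave. All input parameters are assumed, for illustration.
import math

g = 9.81               # gravitational acceleration (m/s^2)
amplitude = 1.0        # wave amplitude a (m), assumed
wavelength = 60.0      # wavelength (m), assumed
u_star = 0.01          # water-side friction velocity (m/s), assumed

k = 2 * math.pi / wavelength       # wavenumber
omega = math.sqrt(g * k)           # deep-water dispersion relation
us0 = omega * k * amplitude**2     # surface Stokes drift u_s(0)

def stokes_drift(z: float) -> float:
    """Stokes drift at depth z <= 0: u_s(z) = u_s(0) exp(2 k z)."""
    return us0 * math.exp(2 * k * z)

depth_scale = 1 / (2 * k)          # e-folding (Stokes) depth scale
la_t = math.sqrt(u_star / us0)     # turbulent Langmuir number

print(f"u_s(0) = {us0:.3f} m/s, depth scale = {depth_scale:.1f} m, "
      f"u_s at that depth = {stokes_drift(-depth_scale):.3f} m/s, "
      f"La_t = {la_t:.2f}")
```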
See also
Coriolis–Stokes force
Notes
Fluid dynamics
Water waves
"Physics",
"Chemistry",
"Engineering"
] | 238 | [
"Physical phenomena",
"Water waves",
"Chemical engineering",
"Waves",
"Piping",
"Fluid dynamics"
] |
37,589,990 | https://en.wikipedia.org/wiki/Matsushima%27s%20formula | In mathematics, Matsushima's formula, introduced by Yozô Matsushima, is a formula for the Betti numbers of a quotient of a symmetric space G/H by a discrete group, in terms of unitary representations of the group G.
The Matsushima–Murakami formula is a generalization, introduced by Matsushima and Murakami, that gives the dimensions of spaces of automorphic forms.
References
Differential geometry
Algebraic topology
Topological graph theory
Generating functions | Matsushima's formula | [
"Mathematics"
] | 85 | [
"Graph theory stubs",
"Sequences and series",
"Mathematical structures",
"Graph theory",
"Algebraic topology",
"Topology stubs",
"Fields of abstract algebra",
"Topology",
"Mathematical relations",
"Generating functions",
"Topological graph theory"
] |
37,591,496 | https://en.wikipedia.org/wiki/C8H14O | The molecular formula C8H14O (molar mass: 126.20 g/mol, exact mass: 126.1045 u) may refer to:
Cyclooctanone
Filbertone
Oct-1-en-3-one, or 1-octen-3-one
Sulcatone, or 6-methyl-5-hepten-2-one
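The stated masses can be checked directly from standard atomic masses (average values for the molar mass, principal-isotope values for the exact mass); a minimal sketch:

```python
# Sketch: verifying the stated molar and exact masses of C8H14O from
# standard average atomic masses and principal-isotope (12C, 1H, 16O)
# masses.
average = {"C": 12.011, "H": 1.008, "O": 15.999}      # g/mol
exact = {"C": 12.000, "H": 1.00783, "O": 15.9949}     # u
formula = {"C": 8, "H": 14, "O": 1}

molar = sum(n * average[el] for el, n in formula.items())
mono = sum(n * exact[el] for el, n in formula.items())
print(f"molar mass ~ {molar:.2f} g/mol, exact mass ~ {mono:.4f} u")
# -> molar mass ~ 126.20 g/mol, exact mass ~ 126.1045 u
```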
Molecular formulas | C8H14O | [
"Physics",
"Chemistry"
] | 97 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
37,591,765 | https://en.wikipedia.org/wiki/Fuel%20factor | The fuel factor, fo, is the ratio of depleted oxygen to created CO2 in a combustion reaction, used to check the accuracy of an emission measurement system. It can be calculated using the equation
fo = (20.9 - %O2) / %CO2
where %O2 is the percent O2 by volume (dry basis), %CO2 is the percent CO2 by volume (dry basis), and 20.9 is the percent O2 by volume in ambient air. The fuel factor can be corrected for the amount of CO by adding the percent CO on a dry basis to the %CO2 term and subtracting half of the percent CO from the %O2 term.
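A minimal sketch of this calculation, including the optional CO correction described above; the example readings are illustrative:

```python
# Sketch: fuel factor from dry-basis flue gas readings, with the
# optional CO correction described above.
def fuel_factor(pct_o2: float, pct_co2: float, pct_co: float = 0.0) -> float:
    """fo = (20.9 - %O2 - 0.5*%CO) / (%CO2 + %CO), all percents dry basis."""
    return (20.9 - pct_o2 - 0.5 * pct_co) / (pct_co2 + pct_co)

# Example: 6% O2 and 12% CO2 with negligible CO gives fo ~ 1.24.
print(fuel_factor(6.0, 12.0))
```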
See also
Portable emissions measurement system
Air–fuel ratio
References
Fuels | Fuel factor | [
"Chemistry"
] | 155 | [
"Fuels",
"Chemical reaction stubs",
"Chemical process stubs",
"Chemical energy sources"
] |
33,520,674 | https://en.wikipedia.org/wiki/Software-defined%20networking | Software-defined networking (SDN) is an approach to network management that uses abstraction to enable dynamic and programmatically efficient network configuration to create grouping and segmentation while improving network performance and monitoring in a manner more akin to cloud computing than to traditional network management. SDN is meant to improve the static architecture of traditional networks and may be employed to centralize network intelligence in one network component by disassociating the forwarding process of network packets (data plane) from the routing process (control plane). The control plane consists of one or more controllers, which are considered the brains of the SDN network, where the whole intelligence is incorporated. However, centralization has certain drawbacks related to security, scalability and elasticity.
SDN was commonly associated with the OpenFlow protocol for remote communication with network plane elements to determine the path of network packets across network switches since OpenFlow's emergence in 2011. However, since 2012, proprietary systems have also used the term. These include Cisco Systems' Open Network Environment and Nicira's network virtualization platform.
SD-WAN applies similar technology to a wide area network (WAN).
History
The history of SDN principles can be traced back to the separation of the control and data plane first used in public switched telephone networks. This provided a manner of simplifying provisioning and management years before the architecture was used in data networks.
The Internet Engineering Task Force (IETF) began considering various ways to decouple the control and data forwarding functions in a proposed interface standard published in 2004 named Forwarding and Control Element Separation (ForCES). The ForCES Working Group also proposed a companion SoftRouter architecture. Additional early standards from the IETF that pursued separating control from data include the Linux Netlink as an IP services protocol and a path computation element (PCE)-based architecture.
These early attempts failed to gain traction. One reason is that many in the Internet community viewed separating control from data to be risky, especially given the potential for failure in the control plane. Another reason is that vendors were concerned that creating standard application programming interfaces (APIs) between the control and data planes would result in increased competition.
The use of open-source software in these separated architectures traces its roots to the Ethane project at Stanford's computer science department. Ethane's simple switch design led to the creation of OpenFlow, and an API for OpenFlow was first created in 2008. In that same year, NOX, an operating system for networks, was created.
SDN research included emulators such as vSDNEmul, EstiNet, and Mininet.
Work on OpenFlow continued at Stanford, including with the creation of testbeds to evaluate the use of the protocol in a single campus network, as well as across the WAN as a backbone for connecting multiple campuses. In academic settings, there were several research and production networks based on OpenFlow switches from NEC and Hewlett-Packard, as well as those based on Quanta Computer whiteboxes starting in about 2009.
Beyond academia, the first deployments were by Nicira in 2010 to control OVS from Onix, codeveloped with NTT and Google. A notable deployment was Google's B4 in 2012. Later, Google announced the first OpenFlow/Onix deployments in its datacenters. Another large deployment exists at China Mobile.
The Open Networking Foundation was founded in 2011 to promote SDN and OpenFlow.
At the 2014 Interop and Tech Field Day, software-defined networking was demonstrated by Avaya using shortest-path bridging (IEEE 802.1aq) and OpenStack as an automated campus, extending automation from the data center to the end device and removing manual provisioning from service delivery.
Concept
SDN architectures decouple network control (control plane) and forwarding (data plane) functions, enabling the network control to become directly programmable and the underlying infrastructure to be abstracted from applications and network services.
The OpenFlow protocol can be used in SDN technologies. The SDN architecture is:
Directly programmable: Network control is directly programmable because it is decoupled from forwarding functions.
Agile: Abstracting control from forwarding lets administrators dynamically adjust network-wide traffic flow to meet changing needs.
Centrally managed: Network intelligence is (logically) centralized in software-based SDN controllers that maintain a global view of the network, which appears to applications and policy engines as a single, logical switch.
Programmatically configured: SDN lets network managers configure, manage, secure, and optimize network resources very quickly via dynamic, automated SDN programs, which they can write themselves because the programs do not depend on proprietary software.
Open standards-based and vendor-neutral: When implemented through open standards, SDN simplifies network design and operation because instructions are provided by SDN controllers instead of multiple, vendor-specific devices and protocols.
New network architecture
The explosion of mobile devices and content, server virtualization, and the advent of cloud services are among the trends driving the networking industry to re-examine traditional network architectures. Many conventional networks are hierarchical, built with tiers of Ethernet switches arranged in a tree structure. This design made sense when client-server computing was dominant, but such a static architecture may be ill-suited to the dynamic computing and storage needs of today's enterprise data centers, campuses, and carrier environments. Some of the key computing trends driving the need for a new network paradigm include:
Changing traffic patterns
Within the enterprise data center, traffic patterns have changed significantly. In contrast to client-server applications where the bulk of the communication occurs between one client and one server, today's applications access different databases and servers, creating a flurry of east-west machine-to-machine traffic before returning data to the end user device in the classic north-south traffic pattern. At the same time, users are changing network traffic patterns as they push for access to corporate content and applications from any type of device, connecting from anywhere, at any time. Finally, many enterprise data center managers are deploying a utility computing model, which may include a private cloud, public cloud, or some mix of both, resulting in additional traffic across the wide-area network.
The consumerization of IT
Users are increasingly employing mobile personal devices such as smartphones, tablets, and notebooks to access the corporate network. IT is under pressure to accommodate these personal devices in a fine-grained manner while protecting corporate data and intellectual property and meeting compliance mandates.
The rise of cloud services
Enterprises have enthusiastically embraced both public and private cloud services, resulting in unprecedented growth of these services. Many enterprise businesses want the agility to access applications, infrastructure and other IT resources on demand and discretely. IT planning for cloud services must be performed in an environment of increased security, compliance and auditing requirements, along with business reorganizations, consolidations and mergers that can rapidly change assumptions. Providing self-service provisioning, whether in a private or public cloud, requires elastic scaling of computing, storage and network resources, ideally from a common viewpoint and with a common suite of tools.
Big data means more bandwidth
Handling today's big data requires massive parallel processing on thousands of servers, all of which need direct connections to each other. The rise of these large data sets is fueling a constant demand for additional network capacity in the data center. Operators of hyperscale data center networks face the daunting task of scaling the network to previously unimaginable size, maintaining any-to-any connectivity within a limited budget.
Energy use in large data centers
As the Internet of things, cloud computing, and SaaS emerged, the need for larger data centers has increased the energy consumption of those facilities. Many researchers have improved SDN's energy efficiency by applying existing routing techniques to dynamically adjust the network data plane to save energy. Techniques to improve control plane energy efficiency are also being researched.
Architectural components
The following list defines and explains the SDN architectural components:
SDN application
SDN applications are programs that communicate their network requirements and desired network behavior to the SDN controller via a northbound interface (NBI). In addition, they may consume an abstracted view of the network for their internal decision-making purposes. An SDN Application consists of SDN application logic and one or more NBI drivers. SDN applications may themselves expose another layer of abstracted network control, thus offering one or more higher-level NBIs through respective NBI agents.
SDN Controller
The SDN Controller is a logically centralized entity in charge of (i) translating the requirements from the SDN Application layer down to the SDN Datapaths and (ii) providing the SDN Applications with an abstract view of the network (which may include statistics and events). An SDN Controller consists of one or more NBI Agents, the SDN Control Logic, and the Control to Data-Plane Interface (CDPI) driver. Definition as a logically centralized entity neither prescribes nor precludes implementation details such as the federation of multiple controllers, the hierarchical connection of controllers, communication interfaces between controllers, nor virtualization or slicing of network resources.
SDN Datapath
The SDN Datapath is a logical network device that exposes visibility and uncontested control over its advertised forwarding and data processing capabilities. The logical representation may encompass all or a subset of the physical substrate resources. An SDN Datapath comprises a CDPI agent and a set of one or more traffic forwarding engines and zero or more traffic processing functions. These engines and functions may include simple forwarding between the datapath's external interfaces or internal traffic processing or termination functions. One or more SDN Datapaths may be contained in a single (physical) network element—an integrated physical combination of communications resources, managed as a unit. An SDN Datapath may also be defined across multiple physical network elements. This logical definition neither prescribes nor precludes implementation details such as the logical to physical mapping, management of shared physical resources, virtualization or slicing of the SDN Datapath, interoperability with non-SDN networking, nor the data processing functionality, which can include OSI layer 4-7 functions.
SDN Control to Data-Plane Interface (CDPI)
The SDN CDPI is the interface defined between an SDN Controller and an SDN Datapath, which provides at least (i) programmatic control of all forwarding operations, (ii) capabilities advertisement, (iii) statistics reporting, and (iv) event notification. One value of SDN lies in the expectation that the CDPI is implemented in an open, vendor-neutral and interoperable way.
SDN Northbound Interfaces (NBI)
SDN NBIs are interfaces between SDN Applications and SDN Controllers and typically provide abstract network views and enable direct expression of network behavior and requirements. This may occur at any level of abstraction (latitude) and across different sets of functionality (longitude). One value of SDN lies in the expectation that these interfaces are implemented in an open, vendor-neutral and interoperable way.
SDN Control Plane
Centralized - Hierarchical - Distributed
The implementation of the SDN control plane can follow a centralized, hierarchical, or decentralized design. Initial SDN control plane proposals focused on a centralized solution, where a single control entity has a global view of the network. While this simplifies the implementation of the control logic, it has scalability limitations as the size and dynamics of the network increase. To overcome these limitations, several approaches have been proposed in the literature that fall into two categories, hierarchical and fully distributed approaches. In hierarchical solutions, distributed controllers operate on a partitioned network view, while decisions that require network-wide knowledge are taken by a logically centralized root controller. In distributed approaches, controllers operate on their local view or they may exchange synchronization messages to enhance their knowledge. Distributed solutions are more suitable for supporting adaptive SDN applications.
Controller Placement
A key issue when designing a distributed SDN control plane is to decide on the number and placement of control entities. An important parameter to consider while doing so is the propagation delay between the controllers and the network devices, especially in the context of large networks. Other objectives that have been considered involve control path reliability, fault tolerance, and application requirements.
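As a toy illustration of this placement trade-off, the sketch below brute-forces the choice of k controller locations that minimizes the average controller-to-switch propagation delay over a small, made-up delay matrix; real formulations also weigh reliability, fault tolerance, and application requirements.

```python
# Sketch: brute-force controller placement minimizing average
# controller-to-switch propagation delay. The delay matrix and k are
# invented for illustration.
from itertools import combinations

# delay[i][j]: propagation delay between nodes i and j (ms), symmetric.
delay = [
    [0, 2, 5, 9, 7],
    [2, 0, 4, 8, 6],
    [5, 4, 0, 3, 5],
    [9, 8, 3, 0, 4],
    [7, 6, 5, 4, 0],
]
k = 2  # number of controllers to place

def avg_delay(placement):
    # Each switch attaches to its nearest controller.
    n = len(delay)
    return sum(min(delay[s][c] for c in placement) for s in range(n)) / n

best = min(combinations(range(len(delay)), k), key=avg_delay)
print(best, avg_delay(best))
```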
SDN Data Plane
In SDN, the data plane is responsible for processing data-carrying packets using a set of rules specified by the control plane. The data plane may be implemented in physical hardware switches or in software implementations, such as Open vSwitch. The memory capacity of hardware switches may limit the number of rules that can be stored, whereas software implementations may have higher capacity.
The location of the SDN data plane and agent can be used to classify SDN implementations:
Hardware Switch-based SDNs: This approach implements the data plane processing inside a physical device. OpenFlow switches may use TCAM tables to route packet sequences (flows). These switches may use an ASIC for their implementation.
Software Switch-Based SDNs: Some physical switches may implement SDN support using software on the device, such as Open vSwitch, to populate flow tables and to act as the SDN agent when communicating with the controller. Hypervisors may likewise use software implementations to support SDN protocols in the virtual switches used to support their virtual machines.
Host-Based SDNs: Rather than deploying the data plane and SDN agent in network infrastructure, host-based SDNs deploy the SDN agent inside the operating system of the communicating endpoints. Such implementations can provide additional context about the application, user, and activity associated with network flows. To achieve the same traffic engineering capabilities of switch-based SDNs, host-based SDNs may require the use of carefully designed VLAN and spanning tree assignments.
Flow table entries may be populated in a proactive, reactive, or hybrid fashion. In the proactive mode, the controller populates flow table entries for all possible traffic matches for the switch in advance. This mode can be compared with typical routing table entries today, where all static entries are installed ahead of time; no request is sent to the controller, since all incoming flows will find a matching entry. A major advantage of the proactive mode is that all packets are forwarded at line rate (assuming all flow table entries are in TCAM) and no delay is added. In the reactive mode, entries are populated on demand: if a packet arrives without a corresponding match rule in the flow table, the SDN agent sends a request to the controller for further instructions. The controller examines such requests and provides instructions, installing a rule in the flow table for the corresponding packet if necessary. The hybrid mode uses the low-latency proactive forwarding mode for a portion of traffic while relying on the flexibility of reactive mode processing for the remaining traffic.
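The reactive mode can be illustrated with a toy model; the classes, routes, and addresses below are schematic stand-ins, not any real controller API. On a table miss, the switch consults the controller, which installs a rule so that later packets for the same destination match locally.

```python
# Sketch: reactive flow-table population. On a table miss the "switch"
# asks the "controller" for a rule and caches it; subsequent packets
# for the same destination match in the local flow table.
class Controller:
    def __init__(self, routes):
        self.routes = routes            # dst -> output port (global view)

    def packet_in(self, dst):
        return self.routes[dst]         # decide and return a rule

class Switch:
    def __init__(self, controller):
        self.flow_table = {}            # dst -> output port
        self.controller = controller

    def forward(self, dst):
        if dst not in self.flow_table:              # table miss
            port = self.controller.packet_in(dst)   # reactive request
            self.flow_table[dst] = port             # install rule
        return self.flow_table[dst]

ctrl = Controller({"10.0.0.2": 2, "10.0.0.3": 3})
sw = Switch(ctrl)
print(sw.forward("10.0.0.2"))  # miss -> controller consulted
print(sw.forward("10.0.0.2"))  # hit  -> forwarded without the controller
```

In the proactive mode, the controller would pre-install its entire routing view into the flow table before any traffic arrives.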
Applications
SDMN
Software-defined mobile networking (SDMN) is an approach to the design of mobile networks in which all protocol-specific features are implemented in software, maximizing the use of generic and commodity hardware and software in both the core network and radio access network. It is proposed as an extension of the SDN paradigm to incorporate mobile network specific functionalities. Since 3GPP Release 14, control and user plane separation has been introduced in mobile core network architectures with the PFCP protocol.
SD-WAN
An SD-WAN is a WAN managed using the principles of software-defined networking. The main driver of SD-WAN is to lower WAN costs using more affordable and commercially available leased lines, as an alternative or partial replacement of more expensive MPLS lines. Control and management is administered separately from the hardware with central controllers allowing for easier configuration and administration.
SD-LAN
An SD-LAN is a local area network (LAN) built around the principles of software-defined networking, though there are key differences in topology, network security, application visibility and control, management, and quality of service. SD-LAN decouples the control, management, and data planes to enable a policy-driven architecture for wired and wireless LANs. SD-LANs are characterized by their use of a cloud management system and wireless connectivity without the presence of a physical controller.
Security using the SDN paradigm
SDN architecture may enable, facilitate, or enhance network-related security applications due to the controller's central view of the network and its capacity to reprogram the data plane at any time. While the security of the SDN architecture itself remains an open question that has already been studied several times in the research community, the following paragraphs focus only on security applications made possible or revisited using SDN.
Several research works on SDN have already investigated security applications built upon the SDN controller, with different aims in mind. Distributed denial of service (DDoS) detection and mitigation, as well as botnet and worm propagation, are some concrete use cases of such applications: the idea consists of periodically collecting network statistics from the forwarding plane of the network in a standardized manner (e.g., using OpenFlow), and then applying classification algorithms to those statistics in order to detect network anomalies. If an anomaly is detected, the application instructs the controller how to reprogram the data plane in order to mitigate it.
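A deliberately simplified sketch of such detection, assuming per-flow packet counts have already been collected from the forwarding plane; a production system would use proper classification algorithms rather than this crude threshold rule, and the counts here are invented.

```python
# Sketch: flagging a possible DDoS-like anomaly from periodically
# collected per-flow packet counts, using a simple deviation test.
from statistics import mean, stdev

history = [1200, 1100, 1300, 1250, 1150]   # packets/interval, per flow
current = 9800                             # latest collected count

mu, sigma = mean(history), stdev(history)
if sigma and (current - mu) / sigma > 3:   # crude 3-sigma rule
    print("anomaly detected: instruct controller to install a drop rule")
```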
Another kind of security application leverages the SDN controller by implementing moving target defense (MTD) algorithms. MTD algorithms are typically used to make any attack on a given system or network more difficult than usual by periodically hiding or changing key properties of that system or network. In traditional networks, implementing MTD algorithms is not a trivial task, since it is difficult to build a central authority capable of determining, for each part of the system to be protected, which key properties should be hidden or changed. In an SDN network, such tasks become more straightforward thanks to the centrality of the controller. One application can, for example, periodically assign virtual IPs to hosts within the network, with the mapping between virtual and real IPs then performed by the controller. Another application can simulate fake open/closed/filtered ports on random hosts in the network in order to add significant noise during the reconnaissance phase (e.g., scanning) performed by an attacker.
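A schematic sketch of the virtual-IP shuffling idea; the hosts, address pool, and reshuffle trigger are invented for illustration, and the actual address translation in the data plane is left out.

```python
# Sketch: a moving-target-defense style virtual-IP shuffle, as an SDN
# application might perform it periodically. The controller keeps the
# virtual->real mapping and would translate addresses in the data plane.
import random

real_hosts = ["10.0.0.10", "10.0.0.11", "10.0.0.12"]    # made-up hosts
virtual_pool = [f"172.16.0.{i}" for i in range(1, 255)]  # made-up pool

def reshuffle(rng: random.Random) -> dict:
    """Assign each real host a fresh random virtual IP."""
    vips = rng.sample(virtual_pool, len(real_hosts))
    return dict(zip(vips, real_hosts))   # virtual IP -> real IP

rng = random.Random(42)
mapping = reshuffle(rng)   # called periodically by the controller
print(mapping)
```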
Additional value regarding security in SDN-enabled networks can also be gained using FlowVisor and FlowChecker, respectively. The former tries to use a single hardware forwarding plane to share multiple separated logical networks. Following this approach, the same hardware resources can be used for production and development purposes, as well as for separating monitoring, configuration, and internet traffic; each scenario can have its own logical topology, called a slice. In conjunction with this approach, FlowChecker validates new OpenFlow rules that are deployed by users within their own slices.
SDN controller applications are mostly deployed in large-scale scenarios, which requires comprehensive checks for possible programming errors. A system to do this, called NICE, was described in 2012. Introducing an overarching security architecture requires a comprehensive and protracted approach to SDN. Since it was introduced, designers have been looking at possible ways to secure SDN that do not compromise scalability; one such proposal is the SN-SECA (SDN+NFV) security architecture.
Group Data Delivery Using SDN
Distributed applications that run across datacenters usually replicate data for the purpose of synchronization, fault resiliency, load balancing and getting data closer to users (which reduces latency to users and increases their perceived throughput). Also, many applications, such as Hadoop, replicate data within a datacenter across multiple racks to increase fault tolerance and make data recovery easier. All of these operations require data delivery from one machine or datacenter to multiple machines or datacenters. The process of reliably delivering data from one machine to multiple machines is referred to as Reliable Group Data Delivery (RGDD).
SDN switches can be used for RGDD via the installation of rules that allow forwarding to multiple outgoing ports. For example, OpenFlow has provided support for group tables since version 1.1, which makes this possible. Using SDN, a central controller can carefully and intelligently set up forwarding trees for RGDD. Such trees can be built while paying attention to network congestion and load status to improve performance. For example, MCTCP is a scheme for delivery to many nodes inside datacenters that relies on the regular and structured topologies of datacenter networks, while DCCast and QuickCast are approaches for fast and efficient data and content replication across datacenters over private WANs.
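As a toy illustration, the sketch below has a controller with a global topology view compute a BFS (shortest-path) forwarding tree from a source switch to a set of receivers. The topology is made up, and the step of installing the corresponding group rules on the switches is left out.

```python
# Sketch: building a shortest-path (BFS) forwarding tree from a source
# to a set of receivers, the kind of tree an RGDD scheme could realize
# with group rules. The topology is invented for illustration.
from collections import deque

topology = {            # adjacency list of switches
    "s1": ["s2", "s3"],
    "s2": ["s1", "s4"],
    "s3": ["s1", "s4"],
    "s4": ["s2", "s3", "s5"],
    "s5": ["s4"],
}

def bfs_tree(src):
    """Parent pointers of a BFS tree rooted at src."""
    parent, seen, q = {}, {src}, deque([src])
    while q:
        u = q.popleft()
        for v in topology[u]:
            if v not in seen:
                seen.add(v)
                parent[v] = u
                q.append(v)
    return parent

parent = bfs_tree("s1")
for r in ["s4", "s5"]:   # print the path from each receiver to the source
    path, node = [r], r
    while node != "s1":
        node = parent[node]
        path.append(node)
    print(" <- ".join(path))
```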
Relationship to NFV
Network Function Virtualization, or NFV for short, is a concept that complements SDN; NFV is not, however, dependent on SDN or SDN concepts. NFV separates software from hardware to enable flexible network deployment and dynamic operation. NFV deployments typically use commodity servers to run software versions of network services that previously were hardware-based. These software-based services that run in an NFV environment are called virtual network functions (VNFs). Hybrid SDN-NFV programs have been proposed to provide highly efficient, elastic, and scalable capabilities, aimed at accelerating service innovation and provisioning using standard IT virtualization technologies. SDN provides the agility of controlling generic forwarding devices, such as routers and switches, by using SDN controllers; NFV, in turn, provides agility for network applications by using virtualized servers. It is entirely possible to implement a virtualized network function (VNF) as a standalone entity using existing networking and orchestration paradigms. However, there are inherent benefits in leveraging SDN concepts to implement and manage an NFV infrastructure, particularly when looking at the management and orchestration of VNFs, and that is why multivendor platforms are being defined that incorporate SDN and NFV in concerted ecosystems.
Relationship to DPI
Deep packet inspection (DPI) provides networks with application awareness, while SDN provides applications with network awareness. Although SDN will radically change generic network architectures, it should cope with working alongside traditional network architectures to offer high interoperability. A new SDN-based network architecture should consider all the capabilities that are currently provided in separate devices or software other than the main forwarding devices (routers and switches), such as DPI and security appliances.
Quality of Experience (QoE) estimation using SDN
When using an SDN-based model for transmitting multimedia traffic, an important aspect to take into account is QoE estimation. To estimate QoE, the traffic must first be classified; it is then recommended that the system be able to resolve critical problems on its own by analyzing the traffic.
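A deliberately crude sketch of the classification step, using conventional default ports as the only signal; real classifiers use far richer features, and the port list here is illustrative.

```python
# Sketch: a first-pass flow classifier that tags likely multimedia
# traffic (candidates for QoE estimation) by well-known default ports.
MULTIMEDIA_PORTS = {554: "rtsp", 1935: "rtmp", 5004: "rtp"}

def classify(dst_port: int) -> str:
    return MULTIMEDIA_PORTS.get(dst_port, "best-effort")

print(classify(554))    # -> rtsp (candidate for QoE estimation)
print(classify(8080))   # -> best-effort
```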
See also
Active networking
Frenetic (programming language)
IEEE 802.1aq
Intel Data Plane Development Kit (DPDK)
List of SDN controller software
Network functions virtualization
Network virtualization
ONOS
OpenDaylight Project
SD-WAN
Software-defined data center
Software-defined mobile network
Software-defined protection
Virtual Distributed Ethernet
References
Configuration management
Network architecture | Software-defined networking | [
"Engineering"
] | 4,720 | [
"Network architecture",
"Systems engineering",
"Configuration management",
"Computer networks engineering"
] |
33,521,256 | https://en.wikipedia.org/wiki/Yogo%20sapphire | Yogo sapphires are blue sapphires, a colored variety of corundum, found in Montana, primarily in Yogo Gulch (part of the Little Belt Mountains) in Judith Basin County, Montana. Yogo sapphires are typically cornflower blue, a result of trace amounts of iron and titanium. They have high uniform clarity and maintain their brilliance under artificial light. Because Yogo sapphires occur within a vertically dipping resistive igneous dike, mining efforts have been sporadic and rarely profitable. It is estimated that at least 28 million carats of Yogo sapphires are still in the ground. Jewelry containing Yogo sapphires was given to First Ladies Florence Harding and Bess Truman; in addition, many gems were sold in Europe, though promoters' claims that Yogo sapphires are in the crown jewels of England or the engagement ring of Princess Diana are dubious. Today, several Yogo sapphires are part of the Smithsonian Institution's gem collection.
Yogo sapphires were not initially recognized or valued. Gold was discovered at Yogo Creek in 1866, and though "blue pebbles" were noticed alongside gold in the stream alluvium by 1878, it was not until 1894 that the "blue pebbles" were recognized as sapphires. Sapphire mining began in 1895 after a local rancher named Jake Hoover sent a cigar box of gems he had collected to an assay office, which in turn sent them to Tiffany's in New York, where an appraiser pronounced them "the finest precious gemstones ever found in the United States". Hoover then purchased the original mother lode from a sheepherder, later selling it to other investors. This became the highly profitable "English Mine", which flourished from 1899 until the 1920s. A second operation, the "American Mine", was owned by a series of investors in the western section of the Yogo dike, but was less profitable and bought out by the syndicate that owned the English Mine. In 1984, a third set of claims, known as the Vortex mine, opened.
"Yogo sapphire" is the preferred term for gems found in the Yogo Gulch, whereas "Montana sapphire" generally refers to gems found in other Montana locations. More gem-quality sapphires are produced in Montana than anywhere else in North America. Sapphires were first discovered in Montana in 1865, in alluvium along the Missouri River. Finds in other locations in the western half of the state occurred in 1889, 1892, and 1894. The Rock Creek location, near Phillipsburg, is the most productive site in Montana, and its gems inspired the name of the nearby Sapphire Mountains. In 1969, the sapphire was co-designated along with the agate as Montana's state gemstones.
In the early 1980s, Intergem Limited, which controlled most of the Yogo sapphire mining at the time, rocked the gem world by marketing Yogo sapphires as the world's only guaranteed "untreated" sapphire, exposing a practice of the time wherein 95 percent of all the world's sapphires were heat-treated to enhance their natural color. Although Intergem went out of business, the gems it mined appeared on the market through the 1990s because the company had paid its salesmen in sapphires during its financial demise. Citibank had obtained a large stock of Yogo sapphires as a result of Intergem's collapse, and after keeping them in a vault for nearly a decade, sold its collection in 1994 to a Montana jeweler. Mining activity today is largely confined to hobby miners in the area; the major mines are currently inactive.
Location
Yogo sapphires are mined in Montana at Yogo Gulch, which is in Judith Basin County, Montana, southwest of Utica, west-southwest of Lewistown, and east of Great Falls. The site was in Fergus County when Yogo sapphires were discovered, but in 1920, because of the re-designation of county boundaries, Judith Basin County was carved out from parts of western Fergus County and eastern Cascade County.
Yogo Gulch and the corresponding natural features of Yogo Peak, Yogo Creek, and the Yogo dike, where the gems are mined, are all in the Little Belt Mountains within Judith Basin County. The Gulch is located along the lower reaches of Yogo Creek and west of the Judith River. The west end of the Yogo dike outcrops just southwest of Yogo Creek, about north of Yogo Creek's confluence with the Middle Fork of the Judith River; from there it runs east-northeast and ends about from the Judith River. Yogo Creek starts just south of Yogo Peak, which is about west of the Judith River. From there the creek flows southeast into the Middle Fork of the Judith River. The Judith River then flows northeast from the Little Belts toward Utica. East of the Judith River is Pig-Eye Basin, where Jake Hoover, credited as the person who discovered Yogo sapphires, owned a ranch.
Etymology
Because Yogo Gulch lies in a region historically inhabited by the Piegan Blackfeet people, promoters of Yogo sapphires claim that yogo may mean "romance" or "blue sky" in the Blackfoot language, although there is little evidence to support this claim. Other meanings for yogo have been suggested, including "Going over the hill". The meaning of the word "Yogo" had been lost by 1878, when placer gold was found in Yogo Creek. Thus, its true meaning is uncertain.
Mineralogy and geology
Sapphires are a color variety of corundum, a crystalline form of aluminium oxide (). Corundum is one of the hardest minerals, rating 9 on the Mohs scale. Corundum gems of most colors are called sapphires, except for red ones, which are called rubies. The term "Yogo sapphire" refers only to sapphires from the Yogo Gulch. The cornflower blue color of the Yogo results from trace amounts of iron and titanium. Yogo sapphires are unique in that they are free of cavities and inclusions, have high uniform clarity, lack color zoning, and do not need heat treating because their cornflower blue coloring is uniform and deep. Unlike Asian sapphires, they maintain their brilliance in artificial light. Yogo sapphires present an advantage to gemcutters: since they are found as primary constituent minerals within an igneous bedrock rather than in sedimentary alluvial deposits where most other sapphires are located, they retain a perfect or near perfect crystalline shape, making cutting much easier, as does their lack of inclusions, color zoning, or cloudiness. Yogo sapphires also exhibit a triangular pattern on the basal plane of the flattened crystals, with thin rhombohedral crystal faces, a feature absent in sapphires from other parts of Montana.
Yogo sapphires tend to be beautiful, small, and very expensive. The United States Geological Survey and many gem experts have stated that Yogo sapphires are "among the world's finest sapphires." The roughs tend to be small and flat, so cut Yogo gems heavier than are rare. Only about 10 percent of cut pieces are over . The largest recorded Yogo rough, found in 1910, weighed and was cut into an gem. The largest cut Yogo is . Because of the rarity of large rough Yogo sapphires, Yogo gem prices begin rising sharply when they are over , and skyrocket when they are over .
Montana sapphires in general come in a variety of colors, but Yogo sapphires are almost always blue. About two percent of Yogo sapphires are purple, due to trace amounts of chromium. A very small number of rubies have been found at Yogo Gulch.
Yogo sapphires were first discovered in alluvial streambed sediments during gold mining operations in Yogo Gulch downstream from the Yogo dike, but were later traced to their source within igneous bedrock. Worldwide, other than the Yogo Gulch deposit and one small site in the Kashmir region, most other corundum is mined from the sand and gravel created by the weathering of metamorphic rock. Alluvial sapphires are found in the Far East, Australia, and in three other Montana locations—the upper Missouri River, Rock Creek, and Dry Cottonwood Creek. The location of most Yogo sapphires within igneous rock rather than in alluvial placer deposits requires difficult hard rock mining. Coupled with American labor costs, this makes their extraction fairly expensive. At least 28 million carats are estimated to still be in the ground. The Yogo dike is "the only known igneous rock from which sapphire is mined".
The sapphire bearing Yogo dike is a dark gray to green intrusive rock known as a lamprophyre. The lamprophyre is an unusual igneous rock that contains a low content of silica. The rock has a porphyritic texture with large crystals of orthopyroxene and phlogopite set in a fine grained matrix. The phlogopite crystals have been used to determine the age of the dike and its crystallization temperature (900 °C (1,650 °F)). The dike also contains fragments of other rock types. These xenoliths include pieces of limestone, clastic sedimentary rocks, and gneiss. In some locations, due to the abundance of xenoliths, the dike has the appearance of a limestone breccia in an igneous matrix. One gneiss fragment found as a xenolith contains corundum. The Yogo sapphires themselves are rimmed with a reaction layer of spinel and are etched, indicating that the sapphires were not in chemical equilibrium with their host, the lamprophyre magma. This suggests the sapphire crystals may have originated in an earlier rock, such as a corundum-bearing gneiss, later assimilated by the lamprophyre magma at depth. Earlier investigators had assumed that the sapphire had crystallized from the magma with the necessary high aluminium content provided by assimilation of clay rich shales of the Proterozoic Belt Supergroup sediments which are known to be present at depth in the region.
The Yogo dike is a narrow subvertical sheet-like igneous body. It varies from thick and extends for , striking at an azimuth of 255°. The dike is broken into three offset en echelon segments, and dates to 48.6 mya using Ar dating on phlogopite. The dike intrudes Mississippian age (360 to 325 mya) limestone and other sedimentary rocks of the Madison and Big Snowy Groups.
There has been considerable debate over the years as to the depth of the Yogo dike and how many ounces of rough sapphires per ton it contains. In the late 1970s and early 1980s, Delmer L. Brown, a geological engineer and gemologist, conducted the most thorough scientific exploration up to that time, concluding that the dike was at least deep and that the concentration of rough sapphires was not constant throughout the deposit. Brown found that the dike had intruded into a pre-existing fault that had been a conduit for groundwater circulation. The overlying shale, the Kibbey Formation, was deposited on an unconformity, an ancient Mississippian-age karst erosion surface, and was not intruded by the dike. This groundwater action produced collapsed zones which were intruded by the dike to form breccia zones. Recent erosion in the area removed the overlying shales and again exposed the limestone to groundwater action which produced collapse breccias which include fragments of the dike rock. He determined that the erosion of the dike in the current erosion cycle was minimal.
Brown also showed that the unique characteristics of the Yogo sapphires are related to their geological history. Most sapphires are formed under low pressure and temperature over geologically short periods of time, and this is why most non-Yogo sapphires have imperfections and inconsistent coloring. Yogo sapphires show crystalline formation under very high temperatures and pressures corresponding to a great depth, over geologically long periods of time. Brown also showed that distribution of gem rough through the dike was not consistent, so using an average "ounces per ton" was misleading. For example, the section which, despite several ownership and name changes over the years, is generally known as the "American Mine," was developed in an area dominated by post-dike breccia with significantly lower ounces per ton than the English Mine.
Montana sapphires
"Yogo sapphire" is the preferred term for gems found in the Yogo Gulch, whereas "Montana sapphire" generally refers to gems found in other Montana locations. More gem-quality sapphires are produced in Montana than anywhere else in North America. Montana sapphires come in a variety of colors, though rubies are rare.
The first sapphires found in the United States were discovered on May 5, 1865, along the Missouri River, about east of Helena, in Lewis and Clark County, by Ed "Sapphire" Collins. Collins sent the sapphires to Tiffany's in New York City, and to Amsterdam for evaluation; however, those sapphires were of poor coloring and low overall quality, garnering little notice and giving Montana sapphires a poor reputation. Corundum was also found at Dry Cottonwood Creek near Butte in 1889, Rock Creek near Philipsburg in 1892, and Quartz Gulch near Bozeman in 1894. By 1890, the English-owned Sapphire and Ruby Mining Company had bought several thousand acres of land where Montana sapphires were found, but the venture failed after a few years because of fraudulent practices by the owners.
Sapphires from these three sites are routinely heat-treated to enhance color. While millions of carats of sapphires have been mined from the Missouri River deposits, there has been little commercial activity there since the 1990s because of the high cost of recovery and environmental concerns. Production at Dry Cottonwood Creek has been sporadic and low-yielding. The Rock Creek area, also known as Gem Mountain, continues to be the most productive site in Montana, even more so than Yogo Gulch, producing over of sapphires since its inception in 1906. Other than Yogo, Montana sapphire mines have been less successful because they have few blue sapphires and non-blue sapphires have low profit margins.
These gems inspired the names of features: the mountains near Rock Creek are known as the Sapphire Mountains. Garnets are also found at some Montana sapphire sites, inspiring the name of the Garnet Range, which lies to the north of the Sapphire Mountains. In 1969, the sapphire and agate were jointly declared Montana's two official state gemstones.
History
Mining of Yogo sapphires was exceptionally difficult and remains sporadic today. Even so, Yogo sapphire mining turned out to be more valuable than several gold strikes. The Yogo area also produced small amounts of silver, copper, and iron.
Yogo Gulch lies in a region originally inhabited by the Piegan Blackfeet people. Gold was first discovered at Yogo Creek in 1866, but the small numbers of early prospectors were driven off by local Native Americans. During a Gold Rush in 1878, about a thousand miners came to Yogo Creek, which was one of the gold-bearing streams in Montana not yet actively mined. "Blue pebbles" were noted along with small quantities of gold. The mining camp at Yogo City only flourished for roughly three years, and eventually the population dwindled to only a few people.
Yogo City was briefly known as Hoover City, after Jake Hoover. Hoover was part of a partnership that had been placer mining for gold and is credited as the discoverer of Yogo Sapphires. For several years, he also owned a ranch in nearby Pig-Eye Basin. He later prospected for gold in Alaska and was a deep-sea fishing guide in Seattle before eventually returning to the Judith Basin. Western painter C.M. Russell arrived in the area in 1880 as a young cowhand and was hired by Hoover. Russell stated that he learned most of his frontier skills from Hoover, and the two men remained lifelong friends. Millie Ringold, a former slave born in 1845, settled in Fort Benton, Montana after having worked as a nurse and servant for an army general. When gold was discovered at Yogo Creek, Ringold sold her boarding house in Fort Benton and left for the Yogo gold fields, setting up a hotel, restaurant, and saloon in Yogo City where she sang and played music. Ringold later cooked for the English mine, but also worked her own gold claims, even after gold mining was on the decline. She was known as a superb cook and ultimately died in Yogo City in 1906, the last resident of the community. The nearby town of Utica was featured in Russell's 1907 painting A Quiet Day In Utica, which was originally known as Tinning a Dog. Hoover, Ringold, store owner Charles Lehman, and Russell himself are all depicted in the painting, placed between the hitching post and door of the general store.
Discovery
In 1894, the "blue pebbles" were recognized as sapphires. One story credits a local school teacher for recognizing the blue pebbles as sapphires. A variation is that the teacher lived in Maine, but was a friend of a local miner, who had mailed her a small box with some gold and a few "blue pebbles" in it. Another story credits a miner named S.S. Hobson for surmising that the blue stones might be sapphires, and his guess was confirmed by a jeweler in Helena. Ultimately, in 1895, Jake Hoover sent a cigar box containing those he had collected while mining gold to an assay office, which in turn sent them via regular, uninsured mail to Tiffany's in New York City for appraisal by Dr. George Frederick Kunz, the leading American gemologist of the time. Impressed by their quality and color, Kunz pronounced them "the finest precious gemstones ever found in the United States". Tiffany's sent Hoover a check for $3,750, along with a letter that described the blue pebbles as "sapphires of unusual quality".
Early mining
Yogo sapphires were ultimately traced from the alluvium to their source. In February 1896, a sheepherder named Jim Ettien found the sapphire mother lode: the Yogo dike. Ettien was prospecting for gold, and found sapphires after washing gravel he found in a fissure within a limestone outcrop. Ettien staked two claims. The vein turned out to be long and several other miners promptly staked claims along it. Ettien sold his claims to Hoover; Hoover in turn sold his interest in eight original mining stakes, known as the "New Mine Sapphire Syndicate", to his two partners for $5,000. This site was from Yogo City. In 1899, Johnson, Walker and Tolhurst, Ltd. of London purchased the New Mine Sapphire Syndicate for $100,000. At that point, the operation became unofficially known as the "English Mine".
On July 4, 1896, two other Americans, John Burke and Pat Sweeney, staked six mining claims on the western portion of the Yogo dike—areas Hoover had deemed unfit for mining. These claims were collectively known as the "Fourth of July Claim", and became known as the "American Mine". In 1904, the mine was bought by the American Gem Syndicate, and it sold in 1907 to the American Sapphire Company.
One of the Englishmen who came to the area was Charles Gadsden of Berkhamsted, Hertfordshire. By 1902, Gadsden was promoted to resident supervisor of the English Mine, and he quickly turned its focus from gold to sapphires. Gadsden's security measures were very tight, as weight-for-weight, rough sapphires were and continue to be worth much more than gold. The English Mine flourished until the 1920s, but floods on July 26, 1923, so severely damaged the mines that they never fully recovered. Between the aftermath of flooding and hard economic times, the English Mine finally failed in 1929. It had recovered more than of rough sapphires that produced of finished gems valued at $25 million in 1929 dollars. A series of other firms mined sapphires there, but with marginal success. For much of the 1930s and 1940s Gadsden worked the mine alone and used his own money to pay its property taxes. He remained caretaker of the mines until shortly before his death on March 11, 1954.
The American Mine operations were less profitable than those of the English Mine. While the English Mine used superior mining and management techniques on a richer lode, the American Mine suffered from insufficient space and lack of water for ore weathering. Roughs from the English Mine were shipped to London and sold in Europe, often with claims they were sapphires from the Far East, while the American Mine had difficulty marketing its gems within the United States. The American Sapphire Company, which used local gemcutters from Great Falls, went bankrupt in 1909; a new firm, the Yogo American Sapphire Company, bought the American Mine, but was bankrupt by 1913. Gadsden and his wife had convinced the New Mine Sapphire Syndicate to buy out the Yogo American Sapphire Company in 1914, and in doing so, the English syndicate gained control of all known Yogo deposits. They quickly recouped the purchase price by washing the tailings left behind by previous operators of the American Mine.
1940s–1970s
Montana sapphires were heavily mined during World War II for industrial abrasive and cutting purposes. As the Yogo mines were still owned by the English, the United States government could not control those operations, so the mines were little affected by the war, even though industrial sapphires were critical to the war effort. The Yogo Sapphire Mining Corporation of Billings, Montana, was the next company to try to run the English Mine. They made an initial offer in 1946, and reached a deal by 1949, but the purchase was not complete until 1956 because of legal issues. The sale was finally completed for $65,000 cash and some stock considerations because the company's capital was exhausted, similar to previous Yogo ventures. The Yogo Sapphire Mining Corporation then changed its name to be the same as the former English firm's name: New Mine Sapphire Syndicate. It became informally known as the "American Syndicate" to distinguish it from the previous "English Syndicate". Production was poor and mining ceased in September 1959. From 1959 to 1963, the mine itself was left unattended and unsecured, resulting in hobbyists, picnickers, and rockhounds coming from all over the US and Canada to gather loose rough sapphires. The American Syndicate took action to stop this in 1963, with fences and threats of prosecution. The American Syndicate then tried leasing the mine to several operators. One of these was Siskon, Inc. of Nevada, which lost a significant amount of money. They sued, and in May 1965 the Montana Supreme Court ruled in Siskon's favor. Siskon bought the mine at a sheriff's sale and in turn leased it to a group headed by Arnold Baron, who had a background in gemcutting and jewelry. Baron organized German and Thai gemcutters and had success in marketing Yogo sapphires in America—the first such success in 50 years. However, owing to the difficulty in mining the hard rock site, he did not exercise his option to buy the mine, and Siskon sold it in August 1968 to Herman Yaras of Oxnard, California, for $585,000.
In 1969, Yaras' Sapphire Village, Inc. created the Sapphire Village, a nearby homesite development offering buyers limited mining rights to gather their own sapphires with hand tools. Having done no significant mining or marketing, Sapphire Village, Inc. sold in 1973 to one of its investors, Chikara Kunisaki, a celery farmer from Oxnard, California. Kunisaki renamed the business Sapphire International Corporation and attempted to create a commercial mining operation. He built a modern tunnel at the site of the old American Mine, named the "Kunisaki Tunnel". But operation costs were so high that Sapphire International Corporation shut down in late 1976. This was the last actual attempt to mine the American Mine section of the Yogo dike, and today, only the locked portal to the tunnel still exists.
In January 1977, Victor di Suvero and his firm Sapphire-Yogo Mines became the next owner to tackle the Yogo dike. Di Suvero was a native-born Italian who grew up in Tianjin, China, and had been successful with a jade mine in California. Di Suvero's expertise was in marketing: he formed a company called Sapphire Trading to cut and market the Yogo sapphires. He had novel marketing ideas but was not knowledgeable about the mining side of the business. Unable to make payments, his venture folded in late 1979.
By 1980, only four American owners had been successful at Yogo Gulch, all early in its mining history. The English syndicate had been the most profitable of any venture, and even that venture was short-lived. At least thirteen American-owned Yogo mining efforts had failed. Besides inherent difficulties with financing and the challenges of hard rock mining, the American owners generally did not understand how to effectively market the gems.
1980s and beyond
Kunisaki put his mine up for sale, asking $6 million to recoup his expenses. Even though mine profits had been poor over the decades, prices of precious gems were very high at the time due to the worldwide oil crises of the 1970s and early 1980s. Four individuals or groups seriously considered Kunisaki's offer. Relying heavily upon Delmer Brown's expertise, Harry C. Bullock and J. R. Edington formed the limited partnership American Yogo Sapphire Limited, becoming the 14th American company to work the Yogo dike. Bullock and Brown had Yogo mine experience, as they had worked with di Suvero. Bullock's plan included mining, cutting, making jewelry, and marketing—the whole spectrum of the business. They paid the $6 million asked by Kunisaki and then raised another $7.2 million in funding by October 1981. Brown located quality gemcutters in Thailand, and set up the American Yogo Sapphire Company there. Brown also set up a thorough, computerized security system that tracked gems from the mine to the gemcutters. Bigger roughs were sent to American cutters, specialty cuts were done in Germany, a few cuts were done in Hong Kong, and the vast majority were done in Thailand. American Yogo Sapphire Limited secured a $5 million line of credit with Citibank. Desiring a more modern name, American Yogo Sapphire Limited changed its name to Intergem Limited in early 1982. Intergem marketed the Yogo as the "Royal American Sapphire." Their first line of jewelry appeared in mid-1982, first marketed regionally in the American west and later at the national level. Intergem also developed a system of authorized dealers, and found success in its first four years, with sales over $3 million in 1984 alone.
Intergem rocked the gem trade by marketing the Yogo as the world's only guaranteed untreated sapphire. By 1982, the practice of routinely heat treating gems had become a major issue in the industry. At the time, 95 percent of all the world's sapphires were being heated to enhance their natural color. Thai traders had even purchased large quantities of naturally colorless Sri Lankan sapphires, known as geuda, and heated them to turn them into a marketable range of blue colors. Intergem's marketing of guaranteed untreated Yogo sapphires set them against many in the gem industry. In 1985 there was a movement in Pennsylvania to require disclosure that a gem had been treated. Intergem's strategy resulted in large numbers of gem professionals visiting Yogo Gulch.
Intergem began planning to dig even deeper into the Yogo dike, which held more known reserves than all the world's other known sapphire deposits combined, albeit deep underground rather than near the surface in the manner of the other known deposits. They also set up a washing plant and maintenance sheds at the site of the former American mine. Intergem had made a $1.5 million down payment and agreed to make semi-annual payments to Kunisaki's Sapphire International Corporation, which had been renamed to Roncor. Intergem also had to make loan and interest payments to Citibank on the $7.2 million loan. While the company's sales were steadily increasing, their profits were still too low, and in May 1985 they missed a $250,000 payment to Roncor. Simultaneously, the value of their gem collateral held by Citibank was declining; as a result, Citibank called in its loan. Intergem had over $1 million in sales lined up for the 1985 Christmas season, but could only fill a tiny portion because they did not have enough operating capital to manufacture the Yogo jewelry. In mid-1986, Roncor regained full ownership even though Intergem had sold loose gems and jewelry worth millions of dollars.
Various companies attempted to lease the mine from Roncor, but in the meantime, two local couples, Lanny and Joy Perry and Chuck and Marie Ridgeway, discovered a new site at Yogo Gulch in January 1984 by following a trail to an unused section of the dike that had previously been deemed unsuitable. They began mining the site, naming it the "Vortex Mine" and forming a company named Vortex Mining. The deep mine shaft contained two Yogo ore-bearing veins; the portion of the dike they had mined was an extension of the main dike. The Vortex Mine, renamed Yogo Creek Mining, was successful for years but eventually declined and closed in 2004.
In 1992, Roncor found a sapphire rough. AMAX Exploration, operating as the Yogo Sapphire Project, signed a 22-month lease with Roncor in March 1993 and had some success in the middle and eastern portions of the dike; it decided not to continue after the end of its lease due to the cost of underground mining, the depletion of easily accessible Yogo sapphires, and the relatively small size of the Yogo sapphires then easily accessible. During this time, additional dikes were found in the area using geophysical magnetometer surveys. Low-grade sapphire rough was found in the Eastern Flats Dike, a parallel dike some 500 feet northeast of the main dike. Pacific Cascade Sapphires, a Canadian company, had a mining lease with Roncor in 2000 and 2001 but ran out of funds and its option expired. By this time, most of the easily accessible Yogo sapphires had been mined and miners had to dig deeper, further increasing costs.
In 1995, Intergem's stock of gems began to reappear on the market because the company had paid its salesmen in sapphires during its financial demise. After Intergem collapsed, many of its salesmen continued to sell Yogo sapphires, especially after AMAX ceased operations. Citibank had also obtained a large stock of Yogo sapphires, reputedly worth $3.5 million, as a result of Intergem's collapse: rough, cut gems, and 2,000 pieces of jewelry, all of which sat in the bank's vaults until 1991, when Sofus Michelsen, director of the Center for Gemstone Evaluation and creator of the Michelsen Gemstone Index, became interested. In 1992, he and Jim Adair, a Missoula, Montana, jeweler who is the world's largest retailer of Yogo sapphires, got together, and by October 1994 Adair had purchased Citibank's four sealed bags of Yogo material. However, only one of the bags was truly valuable. Adair and Michelsen designed custom cutting techniques for Yogo sapphires.
A new owner, Michael Duane Roberts, bought the Vortex Mine in 2008. Its operations were designed to be environmentally friendly, using methods such as recycling all water and avoiding the use of chemicals. Roberts died in a mining accident in 2012. There was also mining activity by individual hobby miners on small parcels at Sapphire Village, but the Roncor mines remained inactive. In 2017, Vortex Mines was sold to Don Baide, who plans to continue operations.
Notable specimens
Several Yogo sapphires are kept at the Smithsonian Institution. The earliest donations were noted in the museum's annual report on June 30, 1899, when the institution reported that Dr. L. T. Chamberlain gave them two cut Yogo sapphires and 21 other sapphires for their Dr. Isaac Lea gem and mineral collection. The record-setting cut Yogo is also held by the Smithsonian. In 2006, gemologist Robert Kane of Fine Gems International in Helena donated 333 Montana sapphires to the Smithsonian's Gem and Mineral Collection, along with 98.48 grams of 18K yellow gold for the creation of a piece of jewelry. A representative of the Smithsonian asked Paula Crevoshay, a jewelry designer from Albuquerque, New Mexico, to create a piece of finished jewelry from these gems. Crevoshay felt that a butterfly motif would best represent America's natural beauty, honor her mother's love of butterflies, and display the wide range of colors found in Montana sapphires. Crevoshay named the brooch "Conchita" in honor of her mother; it is also referred to as the "Sapphire Butterfly Brooch", "Conchita Sapphire Butterfly", and the "Montana Butterfly Brooch". Two of the sapphires used are cabochon cut and the rest are brilliant cut. The majority are from the Rock Creek deposit. The largest one, however, is a blue Yogo used for the butterfly's head. Other sapphires used included yellow, purple, pink, and orange gems. Crevoshay completed the brooch in 2007; she and Kane presented the finished brooch to Smithsonian curator Jeffrey Post on May 7, 2007, in Washington, DC.
In the earliest years of Yogo sapphire mining, before Yogo sapphires achieved their own reputation, Oriental sapphires were sold in Montana with claims they were Yogo sapphires, while in Europe, Yogo sapphires were sold as Oriental sapphires. However, Yogo sapphires became notable in their own right. Paulding Farnham (1859–1927) used Yogo sapphires in several jewelry pieces he designed for the 1900 Exposition Universelle in Paris, where Yogo sapphires received a silver medal among all gems for color and clarity. An entry of uncut loose Yogo sapphires also won a bronze medal at the 1904 Louisiana Purchase Exposition in St. Louis, Missouri. Farnham was the creator of the most elaborate piece of jewelry ever made with Yogo sapphires, the life-size Tiffany Iris Brooch, which contains 120 Yogo sapphires set in platinum and sold on March 17, 1900, for $6,906.84. In 1923, First Lady Florence Harding was given an "all Montana" ring made from a Yogo sapphire and Montana gold. In 1952, Gadsden gave cut Yogo sapphires to President Harry Truman, his wife Bess, and their daughter Margaret. Many Yogo sapphires were also sold in Europe, as some Yogo mining was conducted by British interests. Yogo sapphires may have been in the personal collections of some members of the British royal family in the 1910s, but promotional claims that Yogo sapphires are in any of the crown jewels of England cannot be conclusively proven or disproven. Claims that the sapphire in the engagement ring worn first by Lady Diana Spencer and later by Kate Middleton is a Yogo are dubious; the gem is thought to be of Sri Lankan origin. The story that the gem is a Yogo can be traced to a 1984 Los Angeles Times article that described the ring as a sapphire, and quoted Intergem president Dennis Brown's claim that the gem may have come from a British-owned Yogo mine.
See also
Bismarck Sapphire Necklace
Hall Sapphire and Diamond Necklace
Logan sapphire
Notes
Footnotes
References
External links
Birth of a Yogo sapphire (photo sequence showing the cutting of a 7.73 carat rough to a 2.62 carat finished gem)
Development of Montana Sapphire Industry
New Mine Sapphire Syndicate Records, 1889-1967 (University of Montana Archives)
Aluminium minerals
Dielectrics
Corundum gemstones
Geology of Montana
Optical materials
Oxide minerals
Superhard materials
Transparent materials
Trigonal minerals
Minerals in space group 167 | Yogo sapphire | [
"Physics"
] | 7,586 | [
"Physical phenomena",
"Optical phenomena",
"Materials",
"Superhard materials",
"Optical materials",
"Transparent materials",
"Dielectrics",
"Matter"
] |
33,523,958 | https://en.wikipedia.org/wiki/Palandomus | The Palandomus, invented in 1919 by architect Mario Palanti, consists of a cement block of 18×18×36 cm made with the vibration system, serving as the cellular element of construction. It is designed with a particular "hermaphrodite" shape, which allows placement in any orientation, with no positioning constraint other than being laid horizontally. The thin ledge and rib protrusions allow the walls to be left without plaster but at the same time ensure the maximum bonding of the elements. The Palandomus can withstand, up to its safety limit, constructions of 70 meters in elevation, allowing, without special precautions, the installation of jack arches over door and window openings and of dry archivolts.
Bibliography
Mario Palanti, Architettura per tutti, editore E. Bestetti, 1946
Eleonora Trivellin, Storia della tecnica edilizia in Italia: dall'unità ad oggi, Alinea editore, 2006
Ramón Gutiérrez, Architettura e società: l'América Latina nel XX secolo, Jaca Book, 1996
Virginia Bonicatto, “Reason, Economy and Technique”. The Palandomus Constructive System and its Ephemeral Application in Housing (December, 2018)
See also
Mario Palanti
External links
U. S. Patent for a Palanti block in 1923
Mario Palanti architectural records, Montevideo, Uruguay, 1919-1946. Archival materials at the Getty Library.
CA000000237760A
CH000000106720A
FR000000854704A
FR000000833295A
FR000000566823A
GB000000515842A
GB000000205031A
US000002271030A
US000001552077A
Building materials
1919 introductions
Masonry | Palandomus | [
"Physics",
"Engineering"
] | 390 | [
"Masonry",
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Matter",
"Architecture"
] |
33,528,422 | https://en.wikipedia.org/wiki/Europe%20Card%20Bus | The Europe Card Bus (ECB or ECB-bus) is a computer bus developed in 1977 by the company Kontron, mainly for the 8-bit Zilog Z80, Intel 8080 and Intel 8085 microprocessor families.
Physical format
Mechanically, the ECB is usually implemented as a backplane circuit board installed in a 19-inch rack chassis.
ECB cards have 3U Eurocard format (100 mm × 160 mm).
Connector
ECB uses two- or three-row versions of DIN 41612 connectors with 0.1" pitch. The original Kontron ECB supported 64 pins, using the "a" and "c" rows, with the "b" row tied to the "c" row.
ECB boards are NOT compatible with the STEbus or the VMEbus P2 connector (STEbus does not use the "b" row, while VME defines specific signals on the "b" row).
Pinout
Active low signals indicated by slash.
GND: Ground reference voltage
+5 V: Powers most logic.
+12 V; −12 V (or +15 V; −15 V): Legacy power inputs, primarily useful for RS232 buffer power or ADU. The +12 V is used for programming voltage generators. Both can be used in analogue circuitry, but note that these are primarily power rails for digital circuitry, so decoupling or local regulation is recommended for analogue circuitry.
+5 V Bat: Standby voltage. Optional.
This line is reserved for carrying a battery backup voltage to boards that supply or consume it. NiCad batteries are a common source. The ECB-bus spec is not rigid about where this should be sourced from. In practice, this means that most boards requiring backup power tend to play safe and have a battery on board, often with a link to allow it to supply or accept power from +5 V Bat. You can end up with more batteries in your system than you need, so care must be taken that no more than one battery is driving the +5 V Bat line.
D0...7: Data bus.
This is only 8 bits wide, but most I/O or memory-mapped peripherals are byte-oriented.
A0...19: Address bus.
This allows up to 1 MB of memory to be addressed. Current technology is such that a processor requiring large amounts of memory has this on the processor board, so this is not a great limitation. I/O space is limited to 4K, to simplify I/O address decoding to a practical level. A 74LS688 can decode A11...4 to locate I/O slave boards at 16-byte boundaries (a small sketch of this decode follows the pinout list below).
BUSRQ/ and BUSAK/: Bus Requests and Bus Acknowledge. Optional, used by multi-master systems.
The number of Attention Requests reflects that the ECB-bus aims to be simple.
Single-master systems are the norm, but these signals allow systems to have secondary bus masters if needed.
HALT/: CPU Stopped.
BAI 1; BAO 1: Bus Priority In; Bus Priority Out.
IEI; IEO: Interrupt Enable In; Interrupt Enable Out.
IORQ/: In / Out Request
MREQ/: Memory Request
PHI; nPHI: System Clock; nx Clock.
RESET/: System Reset.
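As referenced in the address-bus entry above, here is a minimal Python sketch of the 74LS688-style I/O decode. The base address 0x2A0 and the helper names are illustrative assumptions, not values defined by the ECB specification.

```python
# Sketch of 74LS688-style I/O address decoding (illustrative values only).
# The comparator asserts "board selected" when address bits A11..A4 match
# a preset base, leaving A3..A0 to pick one of 16 on-board registers.

IO_BASE = 0x2A0  # hypothetical slave-board base address, 16-byte aligned

def board_selected(addr, base=IO_BASE):
    """Compare A11..A4 of a 12-bit I/O address against the board's base."""
    return (addr >> 4) & 0xFF == (base >> 4) & 0xFF

def register_index(addr):
    """A3..A0 select one of the 16 registers on the selected board."""
    return addr & 0x0F

for addr in (0x2A0, 0x2A7, 0x2B0):
    print(hex(addr), board_selected(addr), register_index(addr))
```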
Technical notes
Signal inputs must be Schmitt trigger.
Signal outputs must have a fan-out of 20.
Backplane can have up to ?? sockets
Active bus-termination recommended
Notable uses
Various manufacturers, such as Kontron, J&K, ELZET80, and Conitec, have assigned different pin assignments to the DIN 41612 connector.
N8VEM homebrew computing project uses ECB and provides a large number of various ECB cards and a couple of ECB backplanes along with Z80 processor socket shim adapters to allow a great number of retro-computers access to the ECB bus without the need for major system modification.
The Retrobrew Computer Group has expanded the definition of ECB Pinouts as well as I/O Port usage guidelines.
References
External links
The Hardware Book - ECB-Bus
Collection of ECB related photos
The N8VEM home brew computer project
Retrobrew Computers - current home for ECB
Computer buses
Digital electronics
Motherboard | Europe Card Bus | [
"Engineering"
] | 885 | [
"Electronic engineering",
"Digital electronics"
] |
33,531,341 | https://en.wikipedia.org/wiki/Oral%20mucosa%20tissue%20engineering | Tissue engineering of oral mucosa combines cells, materials and engineering to produce a three-dimensional reconstruction of oral mucosa. It is meant to simulate the real anatomical structure and function of oral mucosa. Tissue engineered oral mucosa shows promise for clinical use, such as the replacement of soft tissue defects in the oral cavity. These defects can be divided into two major categories: the gingival recessions (receding gums) which are tooth-related defects, and the non tooth-related defects. Non tooth-related defects can be the result of trauma, chronic infection or defects caused by tumor resection or ablation (in the case of oral cancer). Common approaches for replacing damaged oral mucosa are the use of autologous grafts and cultured epithelial sheets.
Autologous grafts
Autologous grafts are used to transfer tissue from one site to another on the same body. The use of autologous grafts prevents transplantation rejection reactions.
Grafts used for oral reconstruction are preferably taken from the oral cavity itself (such as gingival and palatal grafts). However, their limited availability and small size lead to the use of either skin transplants or intestinal mucosa to cover bigger defects.
Other than tissue shortage, donor site morbidity is a common problem that may occur when using autologous grafts. When tissue is obtained from somewhere other than the oral cavity (such as the intestine or skin), there is a risk that the graft will retain its original donor tissue characteristics. For example, skin grafts are often taken from the radial forearm or lateral upper arm when covering more extensive defects. A positive aspect of using skin grafts is the large availability of skin. However, skin grafts differ from oral mucosa in consistency, color and keratinization pattern. The transplanted skin graft often continues to grow hair in the oral cavity.
Normal oral mucosa
To better understand the challenges for building full-thickness engineered oral mucosa it is important to first understand the structure of normal oral mucosa. Normal oral mucosa consists of two layers, the top stratified squamous epithelial layer and the bottom lamina propria. The epithelial layer consists of four layers:
Stratum basale (basal layer)
Stratum spinosum (spinous layer)
Stratum granulosum (granular layer)
Stratum corneum (keratinized/superficial layer)
Depending on the region of the mouth the epithelium may be keratinized or non-keratinized. Non-keratinized squamous epithelium covers the soft palate, lips, cheeks and the floor of the mouth. Keratinized squamous epithelium is present in the gingiva and hard palate. Keratinization is the differentiation of keratinocytes in the granular layer into dead surface cells to form the stratum corneum. The cells terminally differentiate as they migrate to the surface (from the basal layer where the progenitor cells are located to the dead superficial surface).
The lamina propria is a fibrous connective tissue layer that consists of a network of type I and III collagen and elastin fibers. The main cells of the lamina propria are the fibroblasts, which are responsible for the production of the extracellular matrix. The basement membrane forms the border between the epithelial layer and the lamina propria.
Tissue engineered oral mucosa
Partial-thickness engineered oral mucosa
Cell culture techniques make it possible to produce epithelial sheets for the replacement of damaged oral mucosa. Partial-thickness tissue engineering uses one type of cell layer, which can be in monolayers or multilayers. Monolayer epithelial sheets suffice for the study of the basic biology of oral mucosa, for example its responses to stimuli such as mechanical stress, growth factor addition and radiation damage. Oral mucosa, however, is a complex multilayer structure with proliferating and differentiating cells, and monolayer epithelial sheets have been shown to be fragile, difficult to handle and likely to contract without a supporting extracellular matrix. Monolayer epithelial sheets can be used to manufacture multilayer cultures. These multilayer epithelial sheets show signs of differentiation such as the formation of a basement membrane and keratinization. Fibroblasts are the most common cells in the extracellular matrix and are important for epithelial morphogenesis. If fibroblasts are absent from the matrix, the epithelium stops proliferating but continues to differentiate. The structures obtained by partial-thickness oral mucosa engineering form the basis for full-thickness oral mucosa engineering.
Full-thickness tissue engineered oral mucosa
With the advancement of tissue engineering an alternative approach was developed: the full-thickness engineered oral mucosa. Full-thickness engineered oral mucosa is a better simulation of the in vivo situation because they take the anatomical structure of native oral mucosa into account. Problems, such as tissue shortage and donor site morbidity, do not occur when using full-thickness engineered oral mucosa.
The main goal when producing full-thickness engineered oral mucosa is to make it resemble normal oral mucosa as much as possible. This is achieved by using a combination of different cell types and scaffolds.
Lamina propria: is mimicked by seeding oral fibroblasts, producing extracellular matrix, into a biocompatible (porous) scaffold and culturing them in a fibroblast differentiation medium.
Basement membrane: containing type IV collagen, laminin, fibronectin and integrins. Ideally, the basement membrane must contain a lamina lucida and a lamina densa.
Stratified squamous epithelium: is simulated by oral keratinocytes cultured in a medium containing keratinocyte growth factors such as the epidermal growth factor (EGF).
To obtain the best results, the type and origin of the fibroblasts and keratinocytes used in oral mucosa tissue engineering are important factors to take into account. Fibroblasts are usually taken from the dermis of the skin or oral mucosa. Keratinocytes can be isolated from different areas of the oral cavity (such as the palate or gingiva). It is important that the fibroblasts and keratinocytes are used in the earliest stage possible, as the function of these cells decreases with time. The transplanted keratinocytes and fibroblasts should adapt to their new environment and adopt their function. There is a risk of losing the transplanted tissue if the cells do not adapt properly. This adaptation goes more smoothly when the donor tissue cells resemble the cells of the native tissue.
Scaffolds
A scaffold or matrix serves as a temporary supporting structure (extracellular matrix), the initial architecture, on which the cells can grow three-dimensionally into the desired tissue. A scaffold must provide the environment needed for cellular growth and differentiation; it must provide the strength to withstand mechanical stress and guide cell growth. Moreover, scaffolds should be biodegradable and degrade at the same rate as the tissue regenerates to be optimally replaced by the host tissue. There are numerous scaffolds to choose from, and when choosing a scaffold, biocompatibility, porosity and stability should also be taken into account. Available scaffolds for oral mucosa tissue engineering are:
Naturally derived scaffolds
Acellular Dermis. An acellular dermis is made by removing the cells (epidermis and dermal fibroblasts) from split-thickness skin. It has two sides: one side has a basal lamina suitable for the epithelial cells, and the other is suitable for fibroblast infiltration because it has intact vessel channels. It is durable, able to keep its structure and does not trigger immune reactions (non-immunogenic).
Amniotic Membrane. The amniotic membrane, the inner part of the placenta, has a thick basement membrane of collagen type IV and laminin and avascular connective tissue.
Fibroblast-populated skin substitutes
Fibroblast-populated Skin Substitutes are scaffolds which contain fibroblasts that are able to proliferate and produce extracellular matrix and growth factors within 2 to 3 weeks. This creates a matrix similar to that of a dermis.
Commercially available types are for example:
Dermagraft
Apligraf
Orcel
Polyactive
Hyalograf 3D
Gelatin-based scaffolds
Gelatin is the denatured form of collagen. Gelatin possesses several advantages for tissue-engineering applications: it attracts fibroblasts, is non-immunogenic, is easy to manipulate and boosts the formation of epithelium. Gelatin-based scaffolds include:
Gelatin-oxidized dextran matrix
Gelatin-chitosan-oxidized dextran matrix
Gelatin-glucan matrix
Gelatin-hyaluronate matrix
Gelatin-chitosan hyaluronic acid matrix.
Glucan is a polysaccharide with antibacterial, antiviral and anticoagulant properties. Hyaluronic acid is added to improve the biological and mechanical properties of the matrix.
Collagen-based scaffolds
Pure collagen scaffolds
Collagen is the primary component of the extracellular matrix. Collagen scaffolds efficiently support fibroblast growth, which in turn allows keratinocytes to grow nicely into multilayers. Collagen (mainly collagen type I) is often used as a scaffold because it is biocompatible, non-immunogenic and widely available. However, collagen biodegrades relatively rapidly and is not good at withstanding mechanical forces. Improved characteristics can be created by cross-linking collagen-based matrices: this is an effective method to correct the instability and improve the mechanical properties.
Compound collagen scaffolds
Compound collagen-based scaffolds have been developed in an attempt to improve the function of these scaffolds for tissue engineering. An example of a compound collagen scaffold is the collagen-chitosan matrix. Chitosan is a polysaccharide that is chemically similar to cellulose. Unlike collagen, chitosan biodegrades relatively slowly. However, chitosan is not very biocompatible with fibroblasts. The stability of scaffolds containing gelatin or collagen and the biocompatibility of chitosan can both be improved by crosslinking the two; they compensate for each other's shortcomings.
Collagen-elastine membrane, collagen-glycosaminoglycane (C-GAG) matrix, cross-linked collagen matrix Integra and Terudermis are other examples of compound collagen scaffolds.
Allogeneic cultured keratinocytes and fibroblasts in bovine collagen (Gintuit) is the first cell-based product made from allogeneic human cells and bovine collagen approved by the US Food and Drug Administration (FDA). It is an allogeneic cellularized scaffold product and was approved for medical use in the United States in March 2012.
Fibrin-based scaffolds
Fibrin-based scaffolds contain fibrin which gives the keratinocytes stability. Moreover, they are simple to reproduce and handle.
Hybrid scaffolds
A hybrid scaffold is a skin substitute based on a combination of synthetic and natural materials. Examples of hybrid scaffolds are HYAFF and Laserskin. These hybrid scaffolds have been shown to have good in-vitro and in-vivo biocompatibilities and their biodegradability is controllable.
Synthetic scaffolds
The use of natural materials in scaffolds has its disadvantages. Usually, they are expensive, not available in large quantities and they have the risk of disease transmission. This has led to the development of synthetic scaffolds.
When producing synthetic scaffolds there is full control over their properties. For example, they can be made to have good mechanical properties and the right biodegradability. When it comes to synthetic scaffolds thickness, porosity and pore size are important factors for controlling connective tissue formation.
Examples of synthetic scaffolds are:
Polyethylene terephthalate membranes (PET membranes)
Polycarbonate-permeable membranes (PC membranes)
Porous polylactic glycolic acid (PLGA)
Historical use of electrospinning to produce synthetic scaffolds dates back to at least the late 1980s when Simon showed that technology could be used to produce nano- and submicron-scale fibrous scaffolds from polymer solutions specifically intended for use as in vitro cell and tissue substrates. This early use of electrospun lattices for cell culture and tissue engineering showed that various cell types would adhere to and proliferate upon polycarbonate fibers. It was noted that as opposed to the flattened morphology typically seen in 2D culture, cells grown on the electrospun fibers exhibited a more rounded 3-dimensional morphology generally observed of tissues in vivo.
Clinical applications: full-thickness engineered oral mucosa
Although it has not yet been commercialized for clinical use, clinical studies have been done on intra- and extra-oral treatments with full-thickness engineered oral mucosa.
Full-thickness engineered oral mucosa is mainly used in maxillofacial reconstructive surgery and periodontal peri-implant reconstruction. Good clinical and histological results have been obtained. For example, there is vascular ingrowth and the transplanted keratinocytes integrate well into the native epithelium. Full-thickness engineered oral mucosa has also shown good results for extra-oral applications such as urethral reconstruction, ocular surface reconstruction and eyelid reconstruction.
References
Tissue engineering | Oral mucosa tissue engineering | [
"Chemistry",
"Engineering",
"Biology"
] | 3,014 | [
"Biological engineering",
"Cloning",
"Chemical engineering",
"Tissue engineering",
"Medical technology"
] |
23,421,676 | https://en.wikipedia.org/wiki/Cyclo%286%29carbon | Cyclo[6]carbon is an allotrope of carbon with molecular formula C6. The molecule is a ring of six carbon atoms, connected by alternating single and triple bonds. It is, therefore, a member of the cyclo[n]carbon family.
There have been a few attempts to synthesize cyclo[6]carbon, e.g. by pyrolysis of mellitic anhydride, but without success until 2023, when it was successfully synthesized by atom manipulation of hexachlorobenzene.
Calculations suggest that the alternative cyclic cumulene structure, called cyclohexahexaene, is the potential energy minimum of the cyclo[6]carbon framework.
References
Cyclocarbons
Polyynes
Six-membered rings
Cycloalkynes | Cyclo(6)carbon | [
"Chemistry"
] | 172 | [
"Inorganic compounds",
"Inorganic compound stubs",
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
23,422,002 | https://en.wikipedia.org/wiki/Fast%20Sulphon%20Black%20F | Fast Sulphon Black F is a complexometric indicator used with EDTA, almost exclusively in copper complexation determination.
Application
Fast Sulphon Black is purple when complexed with copper and turns green when titrated against EDTA, as the EDTA, being the stronger complexing agent due to the chelate effect, displaces it.
References
Handbook of copper compounds and applications by H. Wayne Richardson (1997)
Complexometric indicators
Naphthalenesulfonic acids
1-Naphthols
2-Naphthols | Fast Sulphon Black F | [
"Chemistry",
"Materials_science"
] | 112 | [
"Chromism",
"Organic compounds",
"Complexometric indicators",
"Organic compound stubs",
"Organic chemistry stubs"
] |
23,422,531 | https://en.wikipedia.org/wiki/Latimer%20diagram | A Latimer diagram of a chemical element is a summary of the standard electrode potential data of that element. This type of diagram is named after Wendell Mitchell Latimer (1893–1955), an American chemist.
Construction
In a Latimer diagram, because by convention redox reactions are shown in the direction of reduction (gain of electrons), the most highly oxidized form of the element is on the left side, with successively lower oxidation states to the right side. The species are connected by arrows, and the numerical value of the standard potential (in volts) for the reduction is written at each arrow. For example, for oxygen, the species appear in the order O2 (oxidation state 0), H2O2 (−1), H2O (−2).
The arrow between O2 and H2O2 has the value +0.68 V written over it, indicating that the standard electrode potential for the reaction:
O2(g) + 2H+ + 2e− ⇄ H2O2(aq)
is 0.68 volts.
Application
Latimer diagrams can be used in the construction of Frost diagrams, as a concise summary of the standard electrode potentials relative to the element. Since ΔG° = −nFE°, the electrode potential is a representation of the Gibbs energy change for the given reduction. The sum of the Gibbs energy changes for subsequent reductions (e.g. from O2 to H2O2, then from H2O2 to H2O) is the same as the Gibbs energy change for the overall reduction (i.e. from O2 to H2O), in accordance with Hess's law. This can be used to find the electrode potential for non-adjacent species, which gives all the information necessary for the Frost diagram.
It must be stressed that standard reduction potentials are not additive values. They cannot be directly summed with, or subtracted from, the values in volts indicated in a Latimer diagram. If needed, their calculation must be performed via the difference in Gibbs free energies. The easiest way to proceed is simply to use energies (nE) directly expressed in electron-volts (eV), because the Faraday constant F and the minus sign cancel on both sides of the equation. The values of E in volts must simply be multiplied by the number (n) of electrons transferred in the considered half-reaction. Since the Faraday constant cancels from the equation, there is no need to calculate ΔG° expressed in joules.
A simple examination of a Latimer diagram can also indicate whether a species will disproportionate in solution under the conditions for which the electrode potentials are given: if the potential to the right of the species is higher than the potential on the left, it will disproportionate. Therefore, hydrogen peroxide is unstable and will disproportionate into water and oxygen.
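A short Python sketch of this bookkeeping, combining the two oxygen couples into the overall O2/H2O potential and applying the disproportionation test. The +1.77 V value for the H2O2/H2O couple is taken from standard tables and is an assumption, not a figure given above.

```python
def combined_potential(steps):
    """Combine adjacent Latimer-diagram steps given as (n electrons, E in volts).

    Potentials themselves are not additive, but the energies n*E (in eV) are,
    so E_overall = sum(n_i * E_i) / sum(n_i).
    """
    return sum(n * e for n, e in steps) / sum(n for n, _ in steps)

# O2 -> H2O2 (+0.68 V, 2 e-) then H2O2 -> H2O (+1.77 V, 2 e-; standard-table value)
print(combined_potential([(2, 0.68), (2, 1.77)]))  # 1.225 V, the O2/H2O couple

# Disproportionation test: the potential to the right of H2O2 exceeds the one
# to its left, so H2O2 is unstable with respect to disproportionation.
print(1.77 > 0.68)  # True
```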
See also
Frost diagram
Pourbaix diagram
Ellingham diagram
References
Electrochemistry
Potentials | Latimer diagram | [
"Chemistry"
] | 617 | [
"Electrochemistry"
] |
23,423,919 | https://en.wikipedia.org/wiki/Marine%20chemistry | Marine chemistry, also known as ocean chemistry or chemical oceanography, is the study of the chemical composition and processes of the world’s oceans, including the interactions between seawater, the atmosphere, the seafloor, and marine organisms. This field encompasses a wide range of topics, such as the cycling of elements like carbon, nitrogen, and phosphorus, the behavior of trace metals, and the study of gases and nutrients in marine environments. Marine chemistry plays a crucial role in understanding global biogeochemical cycles, ocean circulation, and the effects of human activities, such as pollution and climate change, on oceanic systems. It is influenced by plate tectonics and seafloor spreading, turbidity, currents, sediments, pH levels, atmospheric constituents, metamorphic activity, and ecology.
The impact of human activity on the chemistry of the Earth's oceans has increased over time, with pollution from industry and various land-use practices significantly affecting the oceans. Moreover, increasing levels of carbon dioxide in the Earth's atmosphere have led to ocean acidification, which has negative effects on marine ecosystems. The international community has agreed that restoring the chemistry of the oceans is a priority, and efforts toward this goal are tracked as part of Sustainable Development Goal 14.
Due to the interrelatedness of the ocean, chemical oceanographers frequently work on problems relevant to physical oceanography, geology and geochemistry, biology and biochemistry, and atmospheric science. Many of them are investigating biogeochemical cycles, and the marine carbon cycle in particular attracts significant interest due to its role in carbon sequestration and ocean acidification. Other major topics of interest include analytical chemistry of the oceans, marine pollution, and anthropogenic climate change.
Organic compounds in the oceans
Dissolved Organic Matter (DOM)
DOM is a critical component of the ocean's carbon pool and includes many molecules such as amino acids, sugars, and lipids. It represents about 90% of the total organic carbon in marine environments. Colored dissolved organic matter (CDOM) is estimated to range from 20-70% of the carbon content of the oceans, being higher near river outlets and lower in the open ocean. DOM can be recycled and put back into the food web through a process called the microbial loop, which is essential for nutrient cycling and supporting primary productivity. It also plays a vital role in the global regulation of oceanic carbon storage, as some forms resist microbial degradation and may exist within the ocean for centuries. Marine life is broadly similar in biochemistry to terrestrial organisms, and is the most prolific source of halogenated organic compounds.
Particulate Organic Matter (POM)
POM consists of large organic particles, such as organisms, fecal pellets, and detritus, which settle through the water column. It is a major component of the biological pump, a process by which carbon is transferred from the surface ocean to the deep sea. As POM sinks, it is decomposed by bacterial activity, releasing nutrients and carbon dioxide. The refractory POM fraction can settle on the ocean floor and make relevant contributions to carbon sequestration over a very long period of time.
Chemical ecology of extremophiles
The ocean is home to a variety of marine organisms known as extremophiles – organisms that thrive in extreme conditions of temperature, pressure, and light availability. Extremophiles inhabit many unique habitats in the ocean, such as hydrothermal vents, black smokers, cold seeps, hypersaline regions, and sea ice brine pockets. Some scientists have speculated that life may have evolved from hydrothermal vents in the ocean. In hydrothermal vents and similar environments, many extremophiles acquire energy through chemoautotrophy, using chemical compounds as energy sources, rather than light as in photoautotrophy. Hydrothermal vents enrich the nearby environment in chemicals such as elemental sulfur, H2, H2S, Fe2+, and methane. Chemoautotrophic organisms, primarily prokaryotes, derive energy from these chemicals through redox reactions. These organisms then serve as food sources for higher trophic levels, forming the basis of unique ecosystems.
Several different metabolisms are present in hydrothermal vent ecosystems. Many marine microorganisms, including Thiomicrospira, Halothiobacillus, and Beggiatoa, are capable of oxidizing sulfur compounds, including elemental sulfur and the often toxic compound H2S. H2S is abundant in hydrothermal vents, formed through interactions between seawater and rock at the high temperatures found within vents. This compound is a major energy source, forming the basis of the sulfur cycle in hydrothermal vent ecosystems. In the colder waters surrounding vents, sulfur-oxidation can occur using oxygen as an electron acceptor; closer to the vents, organisms must use alternate metabolic pathways or utilize another electron acceptor, such as nitrate. Some species of Thiomicrospira can utilize thiosulfate as an electron donor, producing elemental sulfur. Additionally, many marine microorganisms are capable of iron-oxidation, such as Mariprofundus ferrooxydans. Iron-oxidation can be oxic, occurring in oxygen-rich parts of the ocean, or anoxic, requiring either an electron acceptor such as nitrate or light energy. In iron-oxidation, Fe(II) is used as an electron donor; conversely, iron-reducers utilize Fe(III) as an electron acceptor. These two metabolisms form the basis of the iron-redox cycle and may have contributed to banded iron formations.
At another extreme, some marine extremophiles inhabit sea ice brine pockets where temperature is very low and salinity is very high. Organisms trapped within freezing sea ice must adapt to a rapid change in salinity up to 3 times higher than that of regular seawater, as well as the rapid change to regular seawater salinity when ice melts. Most brine-pocket dwelling organisms are photosynthetic, therefore, these microenvironments can become hyperoxic, which can be toxic to its inhabitants. Thus, these extremophiles often produce high levels of antioxidants.
Plate tectonics
Seafloor spreading on mid-ocean ridges is a global scale ion-exchange system. Hydrothermal vents at spreading centers introduce various amounts of iron, sulfur, manganese, silicon and other elements into the ocean, some of which are recycled into the ocean crust. Helium-3, an isotope that accompanies volcanism from the mantle, is emitted by hydrothermal vents and can be detected in plumes within the ocean.
Spreading rates on mid-ocean ridges vary between 10 and 200 mm/yr. Rapid spreading rates cause increased basalt reactions with seawater. The magnesium/calcium ratio will be lower because more magnesium ions are being removed from seawater and consumed by the rock, and more calcium ions are being removed from the rock and released to seawater. Hydrothermal activity at ridge crest is efficient in removing magnesium. A lower Mg/Ca ratio favors the precipitation of low-Mg calcite polymorphs of calcium carbonate (calcite seas).
Slow spreading at mid-ocean ridges has the opposite effect and will result in a higher Mg/Ca ratio favoring the precipitation of aragonite and high-Mg calcite polymorphs of calcium carbonate (aragonite seas).
Experiments show that most modern high-Mg calcite organisms would have been low-Mg calcite in past calcite seas, meaning that the Mg/Ca ratio in an organism's skeleton varies with the Mg/Ca ratio of the seawater in which it was grown.
The mineralogy of reef-building and sediment-producing organisms is thus regulated by chemical reactions occurring along the mid-ocean ridge, the rate of which is controlled by the rate of sea-floor spreading.
Human impacts
Marine pollution
Climate change
Increased carbon dioxide levels, mostly from burning fossil fuels, are changing ocean chemistry. Global warming and changes in salinity have significant implications for the ecology of marine environments.
Acidification
Deoxygenation
History
Early inquiries about marine chemistry usually concerned the origin of salinity in the ocean, including work by Robert Boyle. Modern chemical oceanography began as a field with the 1872–1876 Challenger expedition, led by the British Royal Navy, which made the first systematic measurements of ocean chemistry. The chemical analysis of these samples, providing the first systematic study of the composition of seawater, was conducted by John Murray and George Forchhammer, leading to a better understanding of elements like chloride, sodium, and sulfate in ocean waters.
The early 20th century saw significant advancements in marine chemistry, particularly with more accurate analytical techniques. Scientists like Martin Knudsen created the Knudsen Bottle, an instrument used to collect water samples from different ocean depths. Over the past three decades (1970s, 1980s, and 1990s), a comprehensive evaluation of advancements in chemical oceanography was compiled through a National Science Foundation initiative known as Futures of Ocean Chemistry in the United States (FOCUS). This project brought together numerous prominent chemical oceanographers, marine chemists, and geochemists to contribute to the FOCUS report.
After World War II, advancements in geochemical techniques propelled marine chemistry into a new era. Researchers began using isotopic analysis to study ocean circulation and the carbon cycle. Roger Revelle and Hans Suess pioneered using radiocarbon dating to investigate oceanic carbon reservoirs and their exchange with the atmosphere.
Since the 1970s, the development of highly sophisticated instruments and computational models has revolutionized marine chemistry. Scientists can now measure trace metals, organic compounds, and isotopic ratios with unprecedented precision. Studies of marine biogeochemical cycles, including the carbon, nitrogen, and sulfur cycles, have become an area of interest to understand global climate change. The use of remote sensing technology and global ocean observation programs, such as the International Geosphere-Biosphere Programme (IGBP), has provided large-scale data on ocean chemistry, allowing scientists to monitor ocean acidification, deoxygenation, and other critical issues affecting the marine environment.
Tools used for analysis
Chemical oceanographers collect and measure chemicals in seawater, using the standard toolset of analytical chemistry as well as instruments like pH meters, electrical conductivity meters, fluorometers, and dissolved CO₂ meters. Most data are collected through shipboard measurements and from autonomous floats or buoys, but remote sensing is used as well. On an oceanographic research vessel, a CTD is used to measure electrical conductivity, temperature, and pressure, and is often mounted on a rosette of Nansen bottles to collect seawater for analysis. Sediments are commonly studied with a box corer or a sediment trap, and older sediments may be recovered by scientific drilling.
Advanced analytical equipment such as mass spectrometers and chromatographs are applied to detect trace elements, isotopes, and organic compounds. This allows for precisely measuring nutrients, gases, and pollutants in marine environments. In recent years, autonomous underwater vehicles (AUVs) and remote sensing technology have enabled continuous, large-scale ocean chemistry monitoring, particularly for tracking changes in ocean acidification and nutrient cycles.
Marine chemistry on other planets and their moons
The chemistry of the subsurface ocean of Europa may be Earthlike. The subsurface ocean of Enceladus vents hydrogen and carbon dioxide to space.
See also
Global Ocean Data Analysis Project
Oceanography
Physical oceanography
World Ocean Atlas
Seawater
RISE project
References
Chemical oceanography
Oceanographical terminology
Geochemistry | Marine chemistry | [
"Chemistry"
] | 2,365 | [
"Chemical oceanography",
"nan"
] |
23,428,795 | https://en.wikipedia.org/wiki/C20H30O2 | The molecular formula C20H30O2 (molar mass: 302.45 g/mol, exact mass: 302.22458) may refer to:
Abietic acid, a resin acid
BNN-20, a steroid
Bosseopentaenoic acid, a conjugated polyunsaturated fatty acid
Dimethandrolone, an anabolic steroid
Eicosapentaenoic acid, an omega-3 fatty acid
Hexahydrocannabutol
Isopimaric acid, a resin acid
Levopimaric acid, a resin acid
Metenolone, an anabolic steroid
Methyl-1-testosterone, an anabolic steroid
Methyltestosterone, an anabolic steroid
18-Methyltestosterone, an anabolic steroid
Metogest, a steroidal antiandrogen
Mibolerone, an anabolic steroid
Norethandrolone, an anabolic steroid
Oxendolone, a steroidal antiandrogen
Oxogestone
Palustric acid
Pimaric acid, a resin acid
Stenbolone, an anabolic steroid | C20H30O2 | [
"Chemistry"
] | 250 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
46,402,713 | https://en.wikipedia.org/wiki/Avant-garde%20architecture | Avant-garde architecture is architecture which is innovative and radical. There have been a variety of architects and movements whose work has been characterised in this way, especially Modernism. Other examples include Constructivism, Neoplasticism (De Stijl), Neo-futurism, Deconstructivism, Parametricism and Expressionism.
Concept
Avant-garde architecture has been described as progressive in terms of aesthetics. However, it is noted for covering a broad aesthetic and political spectrum. It is associated with the liberal left but has also been characterized as apolitical, right-wing, or conservative in its politics and aesthetics. It is also considered a stream within modernism that is anti-elitist and open to the contamination of mass culture.
The concept draws from the idea of the integration of life and art. In the De Stijl Manifesto V, it was stated that art and life are not separate domains; hence the argument that art is not an illusion or disconnected from reality. This view pushed for the construction of an environment in accordance with creative laws derived from a fixed principle.
A conceptualization by Le Corbusier described avant-garde architecture as constructed for the pleasure of the eye and as coming with "inner cleanness, for the course adopted leads to a refusal to allow anything at all which is not correct, authorised, intended, desired, thought-out."
Criticism
Critics note that avant-garde architecture contradicts the very definition of architecture because its position is contrary to its most specific characteristics. There are critics who state that it stands in opposition to the architecture of the classical antiquity. Its importance is said to be exaggerated since it is always marginal to any decisive change. It has been described as part of modern architecture that is the most rarefied and the least social in terms of orientation. It is also noted that many avant-garde architectural projects do not fare well once evaluated according to suitability principles. According to Eileen Gray, it is obsessed with the external at the expense of the interior.
Another argument states that avant-garde architecture is an experiment or that a project is a vehicle for research so that it leads to a built manifesto. For this reason, the avant-garde architect exploits the resources of his clients to achieve his purposes, which go beyond his client's narrow and private interests.
Architects
Cedric Price
Daniel Libeskind
Frank Gehry
Frei Otto
Greg Lynn
Oscar Niemeyer
Peter Eisenman
Rem Koolhaas
Wolf D. Prix
Zaha Hadid
Walter Gropius
Schools and movements
Archigram
Bauhaus
Brutalist architecture
Constructivist architecture
Metabolism (architecture)
Neofuturism
Neoplasticism
Rationalism (architecture)
Russian avant-garde architects and their work
Situationist International
See also
References
Architectural design | Avant-garde architecture | [
"Engineering"
] | 560 | [
"Design",
"Architectural design",
"Architecture"
] |
46,405,505 | https://en.wikipedia.org/wiki/Beatmapping | Beatmapping is the detection of a beat or tempo in music using software. Beatmapping visually lays out the tempo (speed) of music throughout all or part of a song or music piece. This "mapping" is done with software specifically designed for beatmapping.
Beatmapping software is often a component of music editing and mixing software like Apple's Logic Pro 9 or Mix Meister. A benefit of beatmapping when mixing music is that it keeps the project in time with the metronome tempo, which is the steady underlying base beat of the music. Beatmapping software is also often used to help develop a beat to play under a live music performance; "objects" that set a change in tempo, matching changes in the music during the live performance, are added to the map of beats.
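As a rough illustration of the underlying idea, and not of any particular product's algorithm, tempo can be estimated by autocorrelating an onset-strength envelope and converting the strongest lag to beats per minute. The frame rate and the synthetic envelope below are assumptions made for this sketch.

```python
# Toy tempo estimation: autocorrelate an onset-strength envelope and
# convert the best-matching lag to beats per minute (BPM).

FRAME_RATE = 100.0  # envelope frames per second (assumed)

def estimate_bpm(envelope, lo_bpm=60, hi_bpm=180):
    n = len(envelope)
    mean = sum(envelope) / n
    x = [v - mean for v in envelope]
    best_lag, best_score = None, float("-inf")
    # Search only lags corresponding to the plausible BPM range.
    for lag in range(int(FRAME_RATE * 60 / hi_bpm), int(FRAME_RATE * 60 / lo_bpm) + 1):
        score = sum(x[i] * x[i - lag] for i in range(lag, n))
        if score > best_score:
            best_lag, best_score = lag, score
    return 60.0 * FRAME_RATE / best_lag

# A synthetic envelope with an onset every 0.5 s (i.e. 120 BPM).
env = [1.0 if i % 50 == 0 else 0.0 for i in range(1000)]
print(round(estimate_bpm(env)))  # 120
```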
References
Acoustics
Acoustics software
Interference
Oscillation
Time–frequency analysis | Beatmapping | [
"Physics"
] | 181 | [
"Spectrum (physical sciences)",
"Time–frequency analysis",
"Frequency-domain analysis",
"Classical mechanics",
"Acoustics",
"Mechanics",
"Oscillation"
] |
47,906,524 | https://en.wikipedia.org/wiki/GADV-protein%20world%20hypothesis | GADV-protein world is a hypothetical stage of abiogenesis. GADV stands for the one-letter codes of four amino acids, namely glycine (G), alanine (A), aspartic acid (D) and valine (V), the main components of GADV-proteins. In the GADV-protein world hypothesis, it is argued that the prebiotic chemistry before the emergence of genes involved a stage where GADV-proteins were able to pseudo-replicate. This hypothesis stands in contrast to the RNA world hypothesis.
Description
The GADV-protein world hypothesis was first proposed by Kenji Ikehara at Nara Women's University. It is supported by the GNC-SNS primitive genetic code hypothesis (GNC hypothesis), also formulated by him. In the GNC hypothesis, the origin of the present standard genetic code is considered to be the GNC genetic code, which includes the codons GGC, GCC, GAC, and GUC, respectively coding glycine, alanine, aspartic acid, and valine; the standard code is then considered to have passed through the SNS primitive genetic code, which codes ten amino acids, where N denotes any of the four RNA bases and S denotes guanine (G) or cytosine (C).
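A minimal lookup-table sketch of the GNC code in Python; the example RNA string is invented for illustration.

```python
# The four codons of the hypothesized GNC code and their amino acids
# (one-letter codes: G = glycine, A = alanine, D = aspartic acid, V = valine).
GNC_CODE = {"GGC": "G", "GCC": "A", "GAC": "D", "GUC": "V"}

def translate_gnc(rna):
    """Translate an RNA string made of GNC codons into a GADV peptide."""
    return "".join(GNC_CODE[rna[i:i + 3]] for i in range(0, len(rna), 3))

print(translate_gnc("GGCGCCGACGUC"))  # -> "GADV"
```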
The GADV hypothesis proposes these mechanisms:
Analysis of present-day proteins and simulation using the chemical factors of amino acids show that GADV-proteins containing almost the same amounts of the four amino acids can form the four basic structures of proteins, namely hydrophobic and hydrophilic structures, α-helices and β-sheets.
Therefore, GADV-proteins polymerized from amino acids chosen randomly among the four probably became globular and water-soluble, like some present proteins.
Proteins generated like this have different primary structures. However, their simple composition leads to the formation of similar spherical and water-soluble proteins that have bulky and hydrophobic valines inside and hydrophilic aspartic acids outside.
GADV-peptides can polymerize by simple cycles of evaporation and hydration. This gives a rationale for the production of GADV-peptides in tide pools on the early Earth. Moreover, GADV-peptides randomly polymerized as above have the catalytic activity to hydrolyze peptide bonds in bovine serum albumin. Therefore, they can catalyze the formation of peptide bonds as the reverse reaction.
GADV-proteins can multiply by pseudo-replication in the absence of genes, considering the features above.
See also
Alternative abiogenesis scenarios
References
Related literature
External links
Kenji Ikehara's GADV-protein world laboratory (only in Japanese)
Origin of life
Amino acids
Prebiotic chemistry | GADV-protein world hypothesis | [
"Chemistry",
"Biology"
] | 555 | [
"Biomolecules by chemical classification",
"Origin of life",
"Prebiotic chemistry",
"Amino acids",
"Biological hypotheses"
] |
43,275,893 | https://en.wikipedia.org/wiki/Lamellar%20phase | Lamellar phase refers generally to packing of polar-headed, long chain, nonpolar-tailed molecules (amphiphiles) in an environment of bulk polar liquid, as sheets of bilayers separated by bulk liquid. In biophysics, polar lipids (mostly, phospholipids, and rarely, glycolipids) pack as a liquid crystalline bilayer, with hydrophobic fatty acyl long chains directed inwardly and polar headgroups of lipids aligned on the outside in contact with water, as a 2-dimensional flat sheet surface. Under transmission electron microscopy (TEM), after staining with polar headgroup reactive chemical osmium tetroxide, lamellar lipid phase appears as two thin parallel dark staining lines/sheets, constituted by aligned polar headgroups of lipids. 'Sandwiched' between these two parallel lines, there exists one thicker line/sheet of non-staining closely packed layer of long lipid fatty acyl chains. This TEM-appearance became famous as Robertson's unit membrane - the basis of all biological membranes, and structure of lipid bilayer in unilamellar liposomes. In multilamellar liposomes, many such lipid bilayer sheets are layered concentrically with water layers in between.
In lamellar lipid bilayers, polar headgroups of lipids align together at the interface of water and hydrophobic fatty-acid acyl chains align parallel to one another 'hiding away' from water. The lipid head groups are somewhat more 'tightly' packed than relatively 'fluid' hydrocarbon fatty acyl long chains. The lamellar lipid bilayer organization, thus reveals a 'flexibility gradient' of increasing freedom of motions from near the head-groups towards the terminal fatty-acyl chain methyl groups. Existence of such a dynamic organization of lamellar phase in liposomes as well as biological membranes can be confirmed by spin label electron paramagnetic resonance and high resolution nuclear magnetic resonance spectroscopy studies of biological membranes and liposomes.
In 'soft matter science', where physics and chemistry meet biological science, a bilayer lamellar phase has been recently created from fluorinated silica, and it has been projected for use as a shear-thinning lubricant.
See also
History of cell membrane theory
Lipid polymorphism
Lipid bilayer
Micelle
Unilamellar liposome
References
External links
Bilayer formation through molecular self-assembly - YouTube
Cell Membrane - The Lipid Bilayer - YouTube
Liquid crystals
Membrane biology
Surfactants
Colloidal chemistry
Biophysics | Lamellar phase | [
"Physics",
"Chemistry",
"Biology"
] | 532 | [
"Colloidal chemistry",
"Applied and interdisciplinary physics",
"Membrane biology",
"Colloids",
"Surface science",
"Biophysics",
"Molecular biology"
] |
43,278,241 | https://en.wikipedia.org/wiki/Knuth%27s%20Simpath%20algorithm | Simpath is an algorithm introduced by Donald Knuth that constructs a zero-suppressed decision diagram (ZDD) representing all simple paths between two vertices in a given graph.
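For orientation, here is a minimal Python sketch of the family of objects Simpath represents, the simple paths between two vertices, enumerated by plain depth-first search. The algorithm itself instead encodes this family compactly as a ZDD over edge subsets rather than listing paths explicitly; the example graph below is invented.

```python
# Enumerate simple paths between two vertices by depth-first search.
# Simpath represents this same family of paths implicitly as a ZDD,
# which can be exponentially smaller than the explicit list produced here.

def simple_paths(graph, s, t, path=None):
    path = path or [s]
    if s == t:
        yield list(path)
        return
    for v in graph.get(s, ()):
        if v not in path:  # "simple" means no repeated vertices
            path.append(v)
            yield from simple_paths(graph, v, t, path)
            path.pop()

# A 4-cycle: vertices 1..4, with 1 adjacent to 2 and 3, and 4 adjacent to 2 and 3.
g = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3]}
print(list(simple_paths(g, 1, 4)))  # [[1, 2, 4], [1, 3, 4]]
```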
References
External links
Graphillion library which implements the algorithm for manipulating large sets of paths and other structures.
A CWEB implementation by Donald Knuth.
Computer arithmetic algorithms
Donald Knuth
Graph algorithms
Mathematical logic
Theoretical computer science | Knuth's Simpath algorithm | [
"Mathematics"
] | 80 | [
"Theoretical computer science",
"Applied mathematics",
"Mathematical logic"
] |
43,278,998 | https://en.wikipedia.org/wiki/Water%20level | Water level, also known as gauge height or stage, is the elevation of the free surface of a sea, stream, lake or reservoir relative to a specified vertical datum.
See also
Water level (device), device utilizing the surface of liquid water to establish a local horizontal plane of reference
Flood stage
Hydraulic head
Stream gauge
Water level gauges
Tide gauge
Level sensor
Liquid level
Reference water level
Stage (hydrology)
Sea level
References
Hydrology
Vertical position | Water level | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 91 | [
"Vertical position",
"Hydrology",
"Physical quantities",
"Distance",
"Hydrology stubs",
"Environmental engineering"
] |
43,280,816 | https://en.wikipedia.org/wiki/Orthodox%20semigroup | In mathematics, an orthodox semigroup is a regular semigroup whose set of idempotents forms a subsemigroup. In more recent terminology, an orthodox semigroup is a regular E-semigroup. The term orthodox semigroup was coined by T. E. Hall and presented in a paper published in 1969. Certain special classes of orthodox semigroups had been studied earlier. For example, semigroups that are also unions of groups, in which the sets of idempotents form subsemigroups were studied by P. H. H. Fantham in 1960.
Examples
Consider the binary operation on the set S = { a, b, c, x } defined by the following Cayley table:
Then S is an orthodox semigroup under this operation, the subsemigroup of idempotents being { a, b, c }.
Inverse semigroups and bands are examples of orthodox semigroups.
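As a small illustration of the definition, here is a brute-force check in Python. Since every band is orthodox, the sketch is demonstrated on a two-element left-zero band (where x·y = x, making every element idempotent); this input is an illustrative choice, and associativity of the operation is assumed rather than checked.

```python
from itertools import product

def is_orthodox(elements, op):
    """Check that a (semi)group table is regular with idempotents closed under op."""
    # Regularity: every a has some x with a * x * a == a.
    regular = all(any(op(op(a, x), a) == a for x in elements) for a in elements)
    idempotents = [e for e in elements if op(e, e) == e]
    closed = all(op(e, f) in idempotents for e, f in product(idempotents, repeat=2))
    return regular and closed

# Left-zero band on {a, b}: every product equals its left factor.
print(is_orthodox(["a", "b"], lambda x, y: x))  # True: bands are orthodox
```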
Some elementary properties
The set of idempotents in an orthodox semigroup has several interesting properties. Let S be a regular semigroup and for any a in S let V(a) denote the set of inverses of a. Then the following are equivalent:
S is orthodox.
If a and b are in S and if x is in V(a) and y is in V(b) then yx is in V(ab).
If e is an idempotent in S then every inverse of e is also an idempotent.
For every a, b in S, if V(a) ∩ V(b) ≠ ∅ then V(a) = V(b).
Structure
The structure of orthodox semigroups has been determined in terms of bands and inverse semigroups. The Hall–Yamada pullback theorem describes this construction. The construction requires the concepts of pullbacks (in the category of semigroups) and the Nambooripad representation of a fundamental regular semigroup.
See also
Catholic semigroup
Special classes of semigroups
References
Semigroup theory | Orthodox semigroup | [
"Mathematics"
] | 417 | [
"Semigroup theory",
"Fields of abstract algebra",
"Mathematical structures",
"Algebraic structures"
] |
43,281,585 | https://en.wikipedia.org/wiki/Zebra%20striping | Zebra striping is the coloring of every other row of a table to improve readability. Although zebra striping has been used for a long time to improve readability, there is relatively little data on how much it helps.
Implementation
In HTML documents, zebra striping can be implemented using the Cascading Style Sheets (CSS) :nth-child(even) pseudo-class.
The Bootstrap CSS framework features zebra striping through the .table-striped class.
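As a concrete illustration, the sketch below (a hypothetical, minimal example, not taken from any framework) generates an HTML table whose even rows are striped by the :nth-child(even) rule mentioned above; the class name and color are arbitrary:

```python
# A minimal sketch of zebra striping: the CSS rule below uses the
# :nth-child(even) pseudo-class described above. Class name and color
# are hypothetical choices for illustration.
ZEBRA_CSS = "table.striped tr:nth-child(even) { background-color: #f2f2f2; }"

def striped_table(cells):
    """Render cells as one-column HTML table rows that the CSS stripes."""
    rows = "\n".join(f"  <tr><td>{c}</td></tr>" for c in cells)
    return (f"<style>{ZEBRA_CSS}</style>\n"
            f'<table class="striped">\n{rows}\n</table>')

print(striped_table(["alpha", "beta", "gamma", "delta"]))
```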
See also
Green bar paper, continuous sheets pre-printed with green rows, once-common stationery used when physically printing tabular data
References
Tables (information) | Zebra striping | [
"Technology"
] | 128 | [
"Computing stubs"
] |
43,282,531 | https://en.wikipedia.org/wiki/Industrial%20and%20production%20engineering | Industrial and production engineering (IPE) is an interdisciplinary engineering discipline that includes manufacturing technology, engineering sciences, management science, and the optimization of complex processes, systems, or organizations. It is concerned with the understanding and application of engineering procedures in manufacturing processes and production methods. Industrial engineering dates back to the Industrial Revolution of the 1700s and was shaped by pioneers such as Adam Smith, Eli Whitney, Henry Ford, Frank and Lillian Gilbreth, Henry Gantt, and F. W. Taylor. After the 1970s, industrial and production engineering developed worldwide and began to make wide use of automation and robotics. Industrial and production engineering spans three areas: mechanical engineering (from which production engineering derives), industrial engineering, and management science.
The objective is to improve efficiency and the effectiveness of manufacturing and quality control, and to reduce cost while making products more attractive and marketable. Industrial engineering is concerned with the development, improvement, and implementation of integrated systems of people, money, knowledge, information, equipment, energy, and materials, together with the analysis and synthesis of such systems. The principles of IPE draw on the mathematical, physical, and social sciences and on methods of engineering design to specify, predict, and evaluate the results to be obtained from systems or processes currently in place or being developed. The target of production engineering is to complete the production process in the smoothest, most judicious, and most economic way. Production engineering also overlaps substantially with manufacturing engineering and industrial engineering; the concept of production engineering is largely interchangeable with manufacturing engineering.
As for education, undergraduates normally start by taking courses such as physics, mathematics (calculus, linear analysis, differential equations), computer science, and chemistry. In the later years of their undergraduate careers, they take more major-specific courses such as production and inventory scheduling, process management, CAD/CAM manufacturing, and ergonomics. In some parts of the world, universities offer a Bachelor's in Industrial and Production Engineering, though most universities in the U.S. offer the two degrees separately. Career paths for industrial and production engineers include plant engineering, manufacturing engineering, quality engineering, process engineering, industrial management, project management, manufacturing, and production and distribution. Across these career paths, most industrial and production engineers average a starting salary of at least $50,000.
History
Industrial Revolution
The roots of the industrial engineering profession date back to the Industrial Revolution. The technologies that helped mechanize traditional manual operations in the textile industry, including the flying shuttle, the spinning jenny, and, perhaps most importantly, the steam engine, generated economies of scale that made mass production in centralized locations attractive for the first time. The concept of the production system had its genesis in the factories created by these innovations.
Specialization of labor
Adam Smith's concepts of division of labour and the "invisible hand" of capitalism introduced in his treatise "The Wealth of Nations" motivated many of the technological innovators of the Industrial Revolution to establish and implement factory systems. The efforts of James Watt and Matthew Boulton led to the first integrated machine manufacturing facility in the world, including the implementation of concepts such as cost control systems to reduce waste and increase productivity and the institution of skills training for craftsmen.
Charles Babbage became associated with industrial engineering because of the concepts he introduced in his book "On the Economy of Machinery and Manufactures", which he wrote as a result of his visits to factories in England and the United States in the early 1800s. The book includes subjects such as the time required to perform a specific task, the effects of subdividing tasks into smaller and less detailed elements, and the advantages to be gained from repetitive tasks.
Interchangeable parts
Eli Whitney and Simeon North proved the feasibility of the notion of interchangeable parts in the manufacture of muskets and pistols for the US Government. Under this system, individual parts were mass-produced to tolerances to enable their use in any finished product. The result was a significant reduction in the need for skill from specialized workers, which eventually led to the industrial environment to be studied later.
Modern development
Industrial engineering
From 1960 to 1975, with the development of decision support systems in supply such as material requirements planning (MRP), engineers could address the timing issues (inventory, production, compounding, transportation, etc.) of industrial organization. Israeli scientist Dr. Jacob Rubinovitz installed the CMMS program, developed at IAI and Control-Data (Israel), in 1976 in South Africa and worldwide.
In the seventies, Japanese management theories such as Kaizen and Kanban spread widely, helping Japan realize very high levels of quality and productivity. These theories improved quality, delivery time, and flexibility. Companies in the West recognized the great impact of Kaizen and started implementing their own continuous improvement programs.
In the nineties, following the globalization of industry, the emphasis was on supply chain management and customer-oriented business process design. The theory of constraints, developed by Israeli scientist Eliyahu M. Goldratt (1985), is also a significant milestone in the field.
Manufacturing (production) engineering
Modern manufacturing engineering studies include all intermediate processes required for the production and integration of a product's components. Some industries, such as semiconductor and steel manufacturers, use the term "fabrication" for these processes.
Automation is used in different processes of manufacturing such as machining and welding. Automated manufacturing refers to the application of automation to produce goods in a factory. The main advantages of automated manufacturing for the manufacturing process are realized with effective implementation of automation and include: higher consistency and quality, reduction of lead times, simplification of production, reduced handling, improved work flow, and improved worker morale.
Robotics is the application of mechatronics and automation to create robots, which are often used in manufacturing to perform tasks that are dangerous, unpleasant, or repetitive. These robots may be of any shape and size, but all are preprogrammed and interact physically with the world. To create a robot, an engineer typically employs kinematics (to determine the robot's range of motion) and mechanics (to determine the stresses within the robot). Robots are used extensively in manufacturing engineering.
Robots allow businesses to save money on labor, perform tasks that are either too dangerous or too precise for humans to perform economically, and to ensure better quality. Many companies employ assembly lines of robots, and some factories are so robotized that they can run by themselves. Outside the factory, robots have been employed in bomb disposal, space exploration, and many other fields. Robots are also sold for various residential applications.
Overview
Industrial engineering
Industrial engineering is the branch of engineering that involves figuring out how to make or do things better. Industrial engineers are concerned with reducing production costs, increasing efficiency, improving the quality of products and services, ensuring worker health and safety, protecting the environment and complying with government regulations.
The various fields and topics that industrial engineers are involved with include:
Manufacturing engineering
Engineering management
Process engineering: design, operation, control, and optimization of chemical, physical, and biological processes.
Systems engineering: an interdisciplinary field of engineering that focuses on how to design and manage complex engineering systems over their life cycles.
Software engineering: an interdisciplinary field of engineering that focuses on the design, development, maintenance, testing, and evaluation of the software that makes computers or other devices containing software work.
Safety engineering: an engineering discipline which assures that engineered systems provide acceptable levels of safety.
Data science: the science of exploring, manipulating, analyzing, and visualizing data to derive useful insights and conclusions
Machine learning: the automation of learning from data using models and algorithms
Analytics and data mining: the discovery, interpretation, and extraction of patterns and insights from large quantities of data
Cost engineering: practice devoted to the management of project cost, involving such activities as cost estimating, cost control, cost forecasting, investment appraisal, and risk analysis.
Value engineering: a systematic method to improve the "value" of goods or products and services by using an examination of function.
Predetermined motion time system: a technique to quantify time required for repetitive tasks.
Quality engineering: a way of preventing mistakes or defects in manufactured products and avoiding problems when delivering solutions or services to customers.
Project management: the process and activity of planning, organizing, motivating, and controlling resources, procedures, and protocols to achieve specific goals in scientific or everyday problems.
Supply chain management: the management of the flow of goods. It includes the movement and storage of raw materials, work-in-process inventory, and finished goods from point of origin to point of consumption.
Ergonomics: the practice of designing products, systems or processes to take proper account of the interaction between them and the people that use them.
Operations research, also known as management science: discipline that deals with the application of advanced analytical methods to help make better decisions
Operations management: an area of management concerned with overseeing, designing, and controlling the process of production and redesigning business operations in the production of goods or services.
Job design: the specification of contents, methods and relationship of jobs in order to satisfy technological and organizational requirements as well as the social and personal requirements of the job holder.
Financial engineering: the application of technical methods, especially from mathematical finance and computational finance, in the practice of finance
Industrial plant configuration: sizing of necessary infrastructure used in support and maintenance of a given facility.
Facility management: an interdisciplinary field devoted to the coordination of space, infrastructure, people and organization
Engineering design process: formulation of a plan to help an engineer build a product with a specified performance goal.
Logistics: the management of the flow of goods between the point of origin and the point of consumption in order to meet the requirements of customers or corporations.
Accounting: the measurement, processing and communication of financial information about economic entities
Capital projects: the management of activities in capital projects involves the flow of resources, or inputs, as they are transformed into outputs. Many of the tools and principles of industrial engineering can be applied to the configuration of work activities within a project. The application of industrial engineering and operations management concepts and techniques to the execution of projects has been thus referred to as Project Production Management. Traditionally, a major aspect of industrial engineering was planning the layouts of factories and designing assembly lines and other manufacturing paradigms. And now, in lean manufacturing systems, industrial engineers work to eliminate wastes of time, money, materials, energy, and other resources.
Examples of where industrial engineering might be used include flow process charting, process mapping, designing an assembly workstation, strategizing for various operational logistics, consulting as an efficiency expert, developing a new financial algorithm or loan system for a bank, streamlining operation and emergency room location or usage in a hospital, planning complex distribution schemes for materials or products (referred to as supply-chain management), and shortening lines (or queues) at a bank, hospital, or a theme park.
Modern industrial engineers typically use predetermined motion time systems and computer simulation (especially discrete event simulation), along with extensive mathematical tools for modeling, such as mathematical optimization and queueing theory, and computational methods for system analysis, evaluation, and optimization. Industrial engineers also use the tools of data science and machine learning in their work, owing to the strong relatedness of these disciplines to the field and the similar technical background required of industrial engineers (including a strong foundation in probability theory, linear algebra, and statistics, as well as coding skills).
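To make the queueing-theory toolkit concrete, the sketch below computes the standard steady-state metrics of an M/M/1 queue (a single server with Poisson arrivals and exponential service). The arrival and service rates are hypothetical example values, and the formulas are the textbook ones rather than anything specific to this article:

```python
# A minimal sketch of M/M/1 steady-state metrics; lam (arrival rate) and
# mu (service rate) are hypothetical values in jobs per hour.
def mm1_metrics(lam, mu):
    if lam >= mu:
        raise ValueError("queue is unstable when arrivals outpace service")
    rho = lam / mu               # server utilization
    L = rho / (1 - rho)          # mean number of jobs in the system
    W = 1 / (mu - lam)           # mean time a job spends in the system
    Wq = rho / (mu - lam)        # mean time a job waits in the queue
    return {"utilization": rho, "in_system": L,
            "time_in_system": W, "wait_in_queue": Wq}

# e.g. 8 arrivals/hour against 10 services/hour: 80% utilization,
# 4 jobs in system on average, 0.5 h in system, 0.4 h waiting.
print(mm1_metrics(lam=8.0, mu=10.0))
```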
Manufacturing (production) engineering
Manufacturing Engineering is based on core industrial engineering and mechanical engineering skills, adding important elements from mechatronics, commerce, economics and business management. This field also deals with the integration of different facilities and systems for producing quality products (with optimal expenditure) by applying the principles of physics and the results of manufacturing systems studies, such as the following:
Craft or Guild
Putting-out system
British factory system
American system of manufacturing
Soviet collectivism in manufacturing
Mass production
Computer integrated manufacturing
Computer-aided technologies in manufacturing
Just in time manufacturing
Lean manufacturing
Flexible manufacturing
Mass customization
Agile manufacturing
Rapid manufacturing
Prefabrication
Fabrication
Manufacturing engineers develop and create physical artifacts, production processes, and technology. It is a very broad area which includes the design and development of products. Manufacturing engineering is considered to be a sub-discipline of industrial engineering/systems engineering and has very strong overlaps with mechanical engineering. Manufacturing engineers' success or failure directly impacts the advancement of technology and the spread of innovation. This field of manufacturing engineering emerged from tool and die discipline in the early 20th century. It expanded greatly from the 1960s when industrialized countries introduced factories with:
1. Numerical control machine tools and automated systems of production.
2. Advanced statistical methods of quality control: These factories were pioneered by the American electrical engineer William Edwards Deming, who was initially ignored by his home country. The same methods of quality control later turned Japanese factories into world leaders in cost-effectiveness and production quality.
3. Industrial robots on the factory floor, introduced in the late 1970s: These computer-controlled welding arms and grippers could perform simple tasks such as attaching a car door quickly and flawlessly 24 hours a day. This cut costs and improved production speed.
Education
Industrial engineering
Undergraduate curriculum
In the United States the undergraduate degree earned is the Bachelor of Science (B.S.) or Bachelor of Science and Engineering (B.S.E.) in Industrial Engineering (IE). Variations of the title include Industrial & Operations Engineering (IOE), and Industrial & Systems Engineering (ISE). The typical curriculum includes a broad math and science foundation spanning chemistry, physics, mechanics (i.e., statics, kinematics, and dynamics), materials science, computer science, electronics/circuits, engineering design, and the standard range of engineering mathematics (i.e. calculus, linear algebra, differential equations, statistics). For any engineering undergraduate program to be accredited, regardless of concentration, it must cover a largely similar span of such foundational work – which also overlaps heavily with the content tested on one or more engineering licensure exams in most jurisdictions.
The coursework specific to IE entails specialized courses in areas such as optimization, applied probability, stochastic modeling, design of experiments, statistical process control, simulation, manufacturing engineering, ergonomics/safety engineering, and engineering economics. Industrial engineering elective courses typically cover more specialized topics in areas such as manufacturing, supply chains and logistics, analytics and machine learning, production systems, human factors and industrial design, and service systems.
Certain business schools may offer programs with some overlapping relevance to IE, but the engineering programs are distinguished by a much more intensely quantitative focus, required engineering science electives, and the core math and science courses required of all engineering programs.
Graduate curriculum
The usual graduate degree earned is the Master of Science (MS) or Master of Science and Engineering (MSE) in Industrial Engineering or various alternative related concentration titles. Typical MS curricula may cover:
Operations research and optimization techniques
Engineering economics
Supply chain management and logistics
Systems simulation and stochastic processes
Analytics and machine learning
Manufacturing systems/manufacturing engineering
Human factors engineering and ergonomics (safety engineering)
Production planning and control
System analysis and techniques
Management sciences
Computer-aided manufacturing
Lean Six Sigma
Financial engineering
Facilities design and work-space design
Quality engineering
Reliability engineering and life testing
Statistical process control or quality control
Time and motion study
Predetermined motion time system and computer use for IE
Operations management
Project management
Productivity improvement
Materials management
Robotics
Product development
System dynamics and policy planning
Manufacturing (production) engineering
Degree certification programs
Manufacturing engineers possess an associate's or bachelor's degree in engineering with a major in manufacturing engineering. The length of study for such a degree is usually two to five years followed by five more years of professional practice to qualify as a professional engineer. Working as a manufacturing engineering technologist involves a more applications-oriented qualification path.
Academic degrees for manufacturing engineers are usually the Associate or Bachelor of Engineering, [BE] or [BEng], and the Associate or Bachelor of Science, [BS] or [BSc]. For manufacturing technologists the required degrees are Associate or Bachelor of Technology [B.TECH] or Associate or Bachelor of Applied Science [BASc] in Manufacturing, depending upon the university. Master's degrees in engineering manufacturing include Master of Engineering [ME] or [MEng] in Manufacturing, Master of Science [M.Sc] in Manufacturing Management, Master of Science [M.Sc] in Industrial and Production Management, and Master of Science [M.Sc] as well as Master of Engineering [ME] in Design, which is a subdiscipline of manufacturing. Doctoral [PhD] or [DEng] level courses in manufacturing are also available depending on the university.
The undergraduate degree curriculum generally includes courses in physics, mathematics, computer science, project management, and specific topics in mechanical and manufacturing engineering. Initially such topics cover most, if not all, of the subdisciplines of manufacturing engineering. Students then choose to specialize in one or more sub disciplines towards the end of their degree work.
Specific to industrial engineers, coursework covers ergonomics, scheduling, inventory management, forecasting, product development, and, in general, courses that focus on optimization. Most colleges break down the large sections of industrial engineering into healthcare, ergonomics, product development, or consulting sectors. This gives students a good grasp of each sub-sector, so they know which area they are most interested in pursuing as a career.
Undergraduate curriculum
The foundational curriculum for a bachelor's degree in manufacturing engineering or production engineering includes the syllabus outlined below. This syllabus is closely related to those of industrial engineering and mechanical engineering, but differs by placing more emphasis on manufacturing science or production science. It includes the following:
Mathematics (Calculus, Differential Equations, Statistics and Linear Algebra)
Mechanics (Statics & Dynamics)
Solid Mechanics
Fluid Mechanics
Materials Science
Strength of Materials
Fluid Dynamics
Hydraulics
Pneumatics
HVAC (Heating, Ventilation & Air Conditioning)
Heat Transfer
Applied Thermodynamics
Energy conversion
Instrumentation and Measurement
Engineering Drawing (Drafting) & Engineering Design
Engineering Graphics
Mechanism Design including Kinematics and Dynamics
Manufacturing Processes
Mechatronics
Circuit analysis
Lean manufacturing
Automation
Reverse Engineering
Quality Control
CAD (Computer aided Design which includes Solid Modelling) and CAM (Computer aided Manufacturing)
A degree in manufacturing engineering versus mechanical engineering will typically differ only by a few specialized classes. A mechanical engineering degree focuses more on the product design process and on complex products, which requires more mathematics expertise.
Manufacturing engineering certification
Professional engineering license
A Professional Engineer (PE) is a licensed engineer who is permitted to offer professional services to the public. Professional Engineers may prepare, sign, seal, and submit engineering plans to the public. Before a candidate can become a professional engineer, they will need to receive a bachelor's degree from an ABET-recognized university in the US, take and pass the Fundamentals of Engineering exam to become an "engineer-in-training", and work four years under the supervision of a professional engineer. After those tasks are complete, the candidate will be able to take the PE exam. Upon receiving a passing score, the candidate will receive their PE license.
Society of Manufacturing Engineers (SME) certifications (USA)
The SME administers qualifications specifically for the manufacturing industry. These are not degree-level qualifications and are not recognized at the professional engineering level. The SME offers two certifications for manufacturing engineers: the Certified Manufacturing Technologist Certificate (CMfgT) and the Certified Manufacturing Engineer (CMfgE).
Certified manufacturing technologist
Qualified candidates for the Certified Manufacturing Technologist Certificate (CMfgT) must pass a three-hour, 130-question multiple-choice exam. The exam covers math, manufacturing processes, manufacturing management, automation, and related subjects. A score of 60% or higher must be achieved to pass the exam. Additionally, a candidate must have at least four years of combined education and manufacturing-related work experience. The CMfgT certification must be renewed every three years in order to stay certified.
Certified manufacturing engineer
Certified Manufacturing Engineer (CMfgE) is an engineering qualification administered by the Society of Manufacturing Engineers, Dearborn, Michigan, USA. Candidates qualifying for a Certified Manufacturing Engineer credential must pass a four-hour, 180-question multiple-choice exam which covers more in-depth topics than does the CMfgT exam. A score of 60% or higher must be achieved to pass the exam. CMfgE candidates must also have eight years of combined education and manufacturing-related work experience, with a minimum of four years of work experience. The CMfgE certification must be renewed every three years in order to stay certified.
Research
Industrial engineering
Human factors
The human factors area specializes in exploring how systems fit the people who must operate them, determining the roles of people with the systems, and selecting those people who can best fit particular roles within these systems. Students who focus on Human Factors will be able to work with a multidisciplinary team of faculty with strengths in understanding cognitive behavior as it relates to automation, air and ground transportation, medical studies, and space exploration.
Production systems
The production systems area develops new solutions in areas such as engineering design, supply chain management (e.g. supply chain system design, error recovery, large scale systems), manufacturing (e.g. system design, planning and scheduling), and medicine (e.g. disease diagnosis, discovery of medical knowledge). Students who focus on production systems will be able to work on topics related to computational intelligence theories for applications in industry, healthcare, and service organizations.
Reliability systems
The objective of the reliability systems area is to provide students with advanced data analysis and decision making techniques that will improve quality and reliability of complex systems. Students who focus on system reliability and uncertainty will be able to work on areas related to contemporary reliability systems including integration of quality and reliability, simultaneous life cycle design for manufacturing systems, decision theory in quality and reliability engineering, condition-based maintenance and degradation modeling, discrete event simulation and decision analysis.
Wind power management
The Wind Power Management program aims to meet the emerging need for graduating professionals involved in the design, operations, and management of wind farms deployed in massive numbers across the country. Graduates will be able to fully understand the system and management issues of wind farms and their interactions with alternative and conventional power generation systems.
Production (manufacturing) engineering
Flexible manufacturing systems
A flexible manufacturing system (FMS) is a manufacturing system in which there is some amount of flexibility that allows the system to react to changes, whether predicted or unpredicted. This flexibility is generally considered to fall into two categories, both of which have numerous subcategories. The first category, machine flexibility, covers the system's ability to be changed to produce new product types and the ability to change the order of operations executed on a part. The second category, called routing flexibility, consists of the ability to use multiple machines to perform the same operation on a part, as well as the system's ability to absorb large-scale changes, such as in volume, capacity, or capability.
Most FMS systems comprise three main subsystems. The work machines, which are often automated CNC machines, are connected by a material handling system to optimize parts flow and to a central control computer, which controls material movements and machine flow. The main advantage of an FMS is its high flexibility in managing manufacturing resources, such as time and effort, in order to manufacture a new product. The best application of an FMS is found in the production of small sets of products like those from mass production.
Computer integrated manufacturing
Computer-integrated manufacturing (CIM) in engineering is a method of manufacturing in which the entire production process is controlled by computer. Traditionally separated process methods are joined through a computer by CIM. This integration allows the processes to exchange information and to initiate actions. Through this integration, manufacturing can be faster and less error-prone, although the main advantage is the ability to create automated manufacturing processes. Typically CIM relies on closed-loop control processes based on real-time input from sensors. It is also known as flexible design and manufacturing.
Friction stir welding
Friction stir welding was discovered in 1991 by The Welding Institute (TWI). This innovative solid-state (non-fusion) welding technique joins previously unweldable materials, including several aluminum alloys. It may play an important role in the future construction of airplanes, potentially replacing rivets. Current uses of this technology include: welding the seams of the aluminum main space shuttle external tank, the Orion Crew Vehicle test article, the Boeing Delta II and Delta IV expendable launch vehicles, and the SpaceX Falcon 1 rocket; armor plating for amphibious assault ships; and welding the wings and fuselage panels of the Eclipse 500 aircraft from Eclipse Aviation, among a growing range of uses.
Employment
Industrial engineering
The total number of engineers employed in the US in 2015 was roughly 1.6 million. Of these, 272,470 were industrial engineers (16.92%), the third most popular engineering specialty. The median salaries by experience level are $62,000 with 0–5 years experience, $75,000 with 5–10 years experience, and $81,000 with 10–20 years experience. The average starting salaries were $55,067 with a bachelor's degree, $77,364 with a master's degree, and $100,759 with a doctorate degree. This places industrial engineering at 7th of 15 among engineering bachelor's degrees, 3rd of 10 among master's degrees, and 2nd of 7 among doctorate degrees in average annual salary. The median annual income of industrial engineers in the U.S. workforce is $83,470.
Production (manufacturing) engineering
Manufacturing engineering is just one facet of the engineering industry. Manufacturing engineers enjoy improving the production process from start to finish. They have the ability to keep the whole production process in mind as they focus on a particular portion of the process. Successful students in manufacturing engineering degree programs are inspired by the notion of starting with a natural resource, such as a block of wood, and ending with a usable, valuable product, such as a desk, produced efficiently and economically.
Manufacturing engineers are closely connected with engineering and industrial design efforts. Examples of major companies that employ manufacturing engineers in the United States include General Motors Corporation, Ford Motor Company, Chrysler, Boeing, Gates Corporation and Pfizer. Examples in Europe include Airbus, Daimler, BMW, Fiat, Navistar International, and Michelin Tyre.
Related industries
Industries where industrial and production engineers are generally employed include:
Aerospace industry
Automotive industry
Chemical industry
Computer industry
Electronics industry
Food processing industry
Garment industry
Pharmaceutical industry
Plastic packaging
Pulp and paper industry
Toy industry
Modern tools
Many manufacturing companies, especially those in industrialized nations, have begun to incorporate computer-aided engineering (CAE) programs, such as SolidWorks and AutoCAD, into their existing design and analysis processes, including 2D and 3D solid modeling computer-aided design (CAD). This method has many benefits, including easier and more exhaustive visualization of products, the ability to create virtual assemblies of parts, and ease of use in designing mating interfaces and tolerances.
SolidWorks
SolidWorks is an example of a CAD modeling computer program developed by Dassault Systèmes. SolidWorks is an industry standard for drafting designs and specifications for physical objects and has been used by more than 165,000 companies as of 2013.
AutoCAD
AutoCAD is an example of a CAD modeling computer program developed by Autodesk. AutoCAD is also widely used for CAD modeling and CAE.
Other CAE programs commonly used by product manufacturers include product life cycle management (PLM) tools and analysis tools used to perform complex simulations. Analysis tools may be used to predict product response to expected loads, including fatigue life and manufacturability. These tools include finite element analysis (FEA), computational fluid dynamics (CFD), and computer-aided manufacturing (CAM). Using CAE programs, a mechanical design team can quickly and cheaply iterate the design process to develop a product that better meets cost, performance, and other constraints. There is no need to create a physical prototype until the design nears completion, allowing hundreds or thousands of designs to be evaluated, instead of relatively few. In addition, CAE analysis programs can model complicated physical phenomena which cannot be solved by hand, such as viscoelasticity, complex contact between mating parts, or non-Newtonian flows.
Just as manufacturing engineering is linked with other disciplines, such as mechatronics, multidisciplinary design optimization (MDO) is also being used with other CAE programs to automate and improve the iterative design process. MDO tools wrap around existing CAE processes by automating the trial-and-error method used by classical engineers. MDO uses a computer-based algorithm that iteratively seeks better alternatives from an initial guess within given constraints. MDO uses this procedure to determine the best design outcome and lists various options as well.
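The iterative "better alternatives from an initial guess" loop can be illustrated with a minimal sketch; the toy cost function, bounds, and random-step strategy below are assumptions for illustration, not how any particular MDO package works:

```python
# A minimal sketch of an iterative design-improvement loop: random local
# search on a hypothetical one-variable cost function under a box constraint.
import random

def cost(x):
    """Hypothetical design objective; minimized at x = 3."""
    return (x - 3.0) ** 2 + 1.0

def iterative_search(x0, lo=0.0, hi=10.0, steps=1000, scale=0.5):
    best_x, best_c = x0, cost(x0)
    for _ in range(steps):
        # Propose a nearby alternative, clamped to the feasible range.
        cand = min(hi, max(lo, best_x + random.uniform(-scale, scale)))
        c = cost(cand)
        if c < best_c:             # keep the better alternative
            best_x, best_c = cand, c
    return best_x, best_c

print(iterative_search(x0=9.0))    # converges near x = 3, cost = 1
```

Real MDO tools replace this naive search with gradient-based or surrogate-model optimizers and handle many coupled disciplines at once, but the overall structure — evaluate, compare, keep the better design — is the same.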
Sub-disciplines
Mechanics
Classical mechanics attempts to use Newton's basic laws of motion to describe how a body will react when that body undergoes a force. Modern mechanics, however, also includes the comparatively recent quantum theory. Subdisciplines of mechanics include:
Classical Mechanics:
Statics, the study of non-moving bodies at equilibrium.
Kinematics, the study of the motion of bodies (objects) and systems (groups of objects) while ignoring the forces that cause the motion.
Dynamics (or kinetics), the study of how forces affect moving bodies.
Mechanics of materials, the study of how different materials deform under various types of stress.
Fluid mechanics, the study of how the principles of classical mechanics are observed with liquids and gases.
Continuum mechanics, a method of applying mechanics that assumes that objects are continuous (rather than discrete).
Quantum:
Quantum mechanics, the study of atoms, molecules, electrons, protons, and neutrons on a subatomic scale. This type of mechanics attempts to explain their motion and physical properties within an atom.
If the engineering project were to design a vehicle, statics might be employed to design the frame of the vehicle in order to evaluate where the stresses will be most intense. Dynamics might be used when designing the car's engine to evaluate the forces in the pistons and cams as the engine cycles. Mechanics of materials might be used to choose appropriate materials for the manufacture of the frame and engine. Fluid mechanics might be used to design a ventilation system for the vehicle or to design the intake system for the engine.
Drafting
Drafting or technical drawing is the means by which manufacturers create instructions for manufacturing parts. A technical drawing can be a computer model or hand-drawn schematic showing all the dimensions necessary to manufacture a part, as well as assembly notes, a list of required materials, and other pertinent information. A skilled worker who creates technical drawings may be referred to as a drafter or draftsman. Drafting has historically been a two-dimensional process, but computer-aided design (CAD) programs now allow the designer to create in three dimensions. Instructions for manufacturing a part must be fed to the necessary machinery, either manually, through programmed instructions, or through the use of a computer-aided manufacturing (CAM) or combined CAD/CAM program. Programs such as SolidWorks and AutoCAD are examples of programs used to draft new parts and products under development.
Optionally, an engineer may also manually manufacture a part using the technical drawings, but this is becoming an increasing rarity with the advent of computer numerically controlled (CNC) manufacturing. Engineers primarily manufacture parts manually in the areas of applied spray coatings, finishes, and other processes that cannot economically or practically be done by a machine.
Drafting is used in nearly every sub discipline of mechanical and manufacturing engineering, and by many other branches of engineering and architecture. Three-dimensional models created using CAD software are also commonly used in finite element analysis (FEA) and computational fluid dynamics (CFD).
Metal fabrication and machine tools
Metal fabrication is the building of metal structures by cutting, bending, and assembling processes. Technologies such as electron beam melting, laser engineered net shaping, and direct metal laser sintering have made the production of metal structures much less difficult compared to conventional metal fabrication methods. These help to alleviate various issues that arise when the idealized CAD structures do not align with the actual fabricated structure.
Machine tools employ many types of tools that do the cutting or shaping of materials. Machine tools usually include many components consisting of motors, levers, arms, pulleys, and other basic simple systems to create a complex system that can build various things. All of these components must work correctly in order to stay on schedule and remain on task. Machine tools aim to efficiently and effectively produce good parts at a quick pace with a small amount of error.
Computer integrated manufacturing
Computer-integrated manufacturing (CIM) is the manufacturing approach of using computers to control the entire production process. Computer-integrated manufacturing is used in the automotive, aviation, space, and shipbuilding industries. It allows data to be observed during manufacturing through various sensing mechanisms. This type of manufacturing has computers controlling and observing every part of the process, which gives CIM a unique advantage over other manufacturing processes.
Mechatronics
Mechatronics is an engineering discipline that deals with the convergence of electrical, mechanical and manufacturing systems. Examples include automated manufacturing systems, heating, ventilation and air-conditioning systems, and various aircraft and automobile subsystems. A mechatronic system typically includes a mechanical skeleton, motors, controllers, sensors, actuators, and digital hardware. Mechatronics is greatly used in various applications of industrial processes and in automation.
The term mechatronics is typically used to refer to macroscopic systems, but futurists have predicted the emergence of very small electromechanical devices. Already, such small devices, known as microelectromechanical systems (MEMS), are used in automobiles to initiate the deployment of airbags, in digital projectors to create sharper images, and in inkjet printers to create nozzles for high-definition printing. In the future, it is hoped that such devices will be used in tiny implantable medical devices and to improve optical communication.
Textile engineering
Textile engineering courses deal with the application of scientific and engineering principles to the design and control of all aspects of fiber, textile, and apparel processes, products, and machinery. These include natural and man-made materials, interaction of materials with machines, safety and health, energy conservation, and waste and pollution control. Additionally, students are given experience in plant design and layout, machine and wet process design and improvement, and designing and creating textile products. Throughout the textile engineering curriculum, students take classes from other engineering disciplines, including mechanical, chemical, materials, and industrial engineering.
Advanced composite materials
Advanced composite materials (ACMs) are also known as advanced polymer matrix composites. They are generally characterized by unusually high-strength fibres with unusually high stiffness (modulus of elasticity) compared to other materials, bound together by weaker matrices. Advanced composite materials have broad, proven applications in the aircraft, aerospace, and sports equipment sectors. More specifically, ACMs are very attractive for aircraft and aerospace structural parts. Manufacturing ACMs is a multibillion-dollar industry worldwide. Composite products range from skateboards to components of the space shuttle. The industry can be generally divided into two basic segments: industrial composites and advanced composites.
See also
Washington Accord
Automotive engineering
Computer-aided design
Computer numerically controlled
Engineering
Industrial Revolution
Kinematics
Manufacturing
Manufacturing engineering education
Mechatronics
Mechanical engineering
Mechanics
Occupational health and safety
Package engineering
Robotics
Second Industrial Revolution
Surface-mount technology
Technical drawing
Associations
American Society for Engineering Education
American Society for Quality
European Students of Industrial Engineering and Management (ESTIEM)
Indian Institution of Industrial Engineering
Institute for Operations Research and the Management Sciences (INFORMS)
Institute of Industrial Engineers
Institution of Electrical Engineers
Society of Manufacturing Engineers
References
Engineering disciplines
Manufacturing
Management science | Industrial and production engineering | [
"Engineering",
"Biology"
] | 7,336 | [
"Behavior",
"Manufacturing",
"Industrial engineering",
"Behavioural sciences",
"Management science",
"nan",
"Mechanical engineering"
] |
39,047,203 | https://en.wikipedia.org/wiki/Lactate%20shuttle%20hypothesis | The lactate shuttle hypothesis describes the movement of lactate intracellularly (within a cell) and intercellularly (between cells). The hypothesis is based on the observation that lactate is formed and utilized continuously in diverse cells under both anaerobic and aerobic conditions. Further, lactate produced at sites with high rates of glycolysis and glycogenolysis can be shuttled to adjacent or remote sites including heart or skeletal muscles where the lactate can be used as a gluconeogenic precursor or substrate for oxidation. The hypothesis was proposed in 1985 by George Brooks of the University of California at Berkeley.
In addition to its role as a fuel source predominantly in the muscles, heart, brain, and liver, the lactate shuttle hypothesis also relates the role of lactate in redox signalling, gene expression, and lipolytic control. These additional roles of lactate have given rise to the term "lactormone", pertaining to the role of lactate as a signalling hormone.
Lactate and the Cori cycle
Prior to the formation of the lactate shuttle hypothesis, lactate had long been considered a byproduct resulting from glucose breakdown through glycolysis in times of anaerobic metabolism. As a means of regenerating oxidized NAD+, lactate dehydrogenase catalyzes the conversion of pyruvate to lactate in the cytosol, oxidizing NADH to NAD+, regenerating the necessary substrate needed to continue glycolysis. Lactate is then transported from the peripheral tissues to the liver by means of the Cori Cycle where it is reformed into pyruvate through the reverse reaction using lactate dehydrogenase. By this logic, lactate was traditionally considered a toxic metabolic byproduct that could give rise to fatigue and muscle pain during times of anaerobic respiration. Lactate was essentially payment for ‘oxygen debt’ defined by Hill and Lupton as the ‘total amount of oxygen used, after cessation of exercise in recovery therefrom’.
Cell-cell role of the lactate shuttle
In addition to Cori Cycle, the lactate shuttle hypothesis proposes complementary functions of lactate in multiple tissues. Contrary to the long-held belief that lactate is formed as a result of oxygen-limited metabolism, substantial evidence exists that suggests lactate is formed under both aerobic and anaerobic conditions, as a result of substrate supply and equilibrium dynamics.
Tissue use (brain, heart, muscle)
During physical exertion or moderate-intensity exercise, lactate released from working muscle and other tissue beds is the primary fuel source for the heart, exiting the muscles through monocarboxylate transport proteins (MCTs). This is supported by the finding that the amount of MCT shuttle proteins in the heart and muscle increases in direct proportion to exertion, as measured through muscular contraction.
Furthermore, both neurons and astrocytes have been shown to express MCT proteins, suggesting that the lactate shuttle may be involved in brain metabolism. Astrocytes express MCT4, a low affinity transporter for lactate (Km = 35mM), suggesting its function is to export lactate produced by glycolysis. Conversely, neurons express MCT2, a high affinity transporter for lactate (Km = 0.7mM). Thus, it is hypothesized that the astrocytes produce lactate which is then taken up by the adjacent neurons and oxidized for fuel.
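A quick Michaelis–Menten-style calculation shows what these Km values imply. The sketch below assumes simple saturable transport and a resting lactate concentration of about 1 mM (an assumed, typical value); it is illustrative arithmetic rather than a model from the literature:

```python
# A minimal sketch: fractional saturation of a Michaelis-Menten style
# transporter, v/Vmax = [S] / (Km + [S]). Km values are from the text;
# the 1 mM lactate concentration is an assumed resting level.
def fraction_of_vmax(substrate_mM, km_mM):
    return substrate_mM / (km_mM + substrate_mM)

lactate = 1.0  # mM, assumed resting concentration
print(f"MCT2 (Km = 0.7 mM): {fraction_of_vmax(lactate, 0.7):.0%} of Vmax")  # ~59%
print(f"MCT4 (Km = 35 mM):  {fraction_of_vmax(lactate, 35.0):.0%} of Vmax") # ~3%
```

At resting lactate levels, the high-affinity MCT2 runs near 60% of its maximal rate while the low-affinity MCT4 barely operates, consistent with neurons importing lactate and astrocytes exporting it only when glycolysis raises local concentrations.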
Intracellular role of the lactate shuttle
The lactate shuttle hypothesis also explains the balance of lactate production in the cytosol, via glycolysis or glycogenolysis, and lactate oxidation in the mitochondria (described below).
Peroxisomes
MCT2 transporters within the peroxisome function to transport pyruvate into the peroxisome, where it is reduced by peroxisomal LDH (pLDH) to lactate. In turn, NADH is converted to NAD+, regenerating this necessary component for subsequent β-oxidation. Lactate is then shuttled out of the peroxisome via MCT2, where it is oxidized by cytoplasmic LDH (cLDH) to pyruvate, generating NADH for energy use and completing the cycle.
Mitochondria
While the cytosolic fermentation pathway of lactate is well established, a novel feature of the lactate shuttle hypothesis is the oxidation of lactate in the mitochondria. Baba and Sharma (1971) were the first to identify the enzyme lactate dehydrogenase (LDH) in the mitochondrial inner membrane and matrix of rat skeletal and cardiac muscle. Subsequently, LDH was found in rat liver, kidney, and heart mitochondria. It was also found that lactate could be oxidized as quickly as pyruvate in rat liver mitochondria. Because lactate can either be oxidized in the mitochondria (back to pyruvate for entry into the Krebs cycle, generating NADH in the process) or serve as a gluconeogenic precursor, the intracellular lactate shuttle has been proposed to account for the majority of lactate turnover in the human body (as evidenced by the slight increases in arterial lactate concentration). Brooks et al. confirmed this in 1999, when they found that lactate oxidation exceeded that of pyruvate by 10–40% in rat liver, skeletal, and cardiac muscle.
In 1990, Roth and Brooks found evidence for the facilitated transporter of lactate, monocarboxylate transport protein (MCT), in the sarcolemmal vesicles of rat skeletal muscle. Later, MCT1 was the first of the MCT superfamily to be identified. The first four MCT isoforms are responsible for pyruvate/lactate transport. MCT1 was found to be the predominant isoform in many tissues, including skeletal muscle, neurons, erythrocytes, and sperm. In skeletal muscle, MCT1 is found in the membranes of the sarcolemma, peroxisome, and mitochondria. Because of the mitochondrial localization of MCT (to transport lactate into the mitochondria), LDH (to oxidize the lactate back to pyruvate), and COX (cytochrome c oxidase, the terminal element of the electron transport chain), Brooks et al. proposed the possibility of a mitochondrial lactate oxidation complex in 2006. This is supported by the observation that the ability of muscle cells to oxidize lactate is related to their density of mitochondria. Furthermore, it was shown that training increases MCT1 protein levels in skeletal muscle mitochondria, corresponding with an increase in the ability of muscle to clear lactate from the body during exercise. The affinity of MCT for pyruvate is greater than for lactate; however, two factors ensure that lactate will be present in concentrations that are orders of magnitude greater than pyruvate: first, the equilibrium constant of LDH (3.6 × 10⁴) greatly favors the formation of lactate; secondly, the immediate removal of pyruvate from the mitochondria (either via the Krebs cycle or gluconeogenesis) ensures that pyruvate is not present in great concentrations within the cell.
LDH isoenzyme expression is tissue-dependent. It was found that in rats, LDH-1 was the predominant form in the mitochondria of myocardium, but LDH-5 was predominant in the liver mitochondria. It is suspected that this difference in isoenzyme is due to the predominant pathway the lactate will take – in liver it is more likely to be gluconeogenesis, whereas in the myocardium it is more likely to be oxidation. Despite these differences, it is thought that the redox state of the mitochondria dictates the ability of the tissues to oxidize lactate, not the particular LDH isoform.
Lactate as a signaling molecule: "lactormone"
Redox signaling
As illustrated by the peroxisomal intracellular lactate shuttle described above, the interconversion of lactate and pyruvate between cellular compartments plays a key role in the oxidative state of the cell. Specifically, the interconversion of NAD+ and NADH between compartments has been hypothesized to occur in the mitochondria. The evidence for this is lacking, however, as both lactate and pyruvate are quickly metabolized inside the mitochondria. Nevertheless, the existence of the peroxisomal lactate shuttle suggests that this redox shuttle could exist for other organelles.
Gene expression
Increased intracellular levels of lactate can act as a signalling hormone, inducing changes in gene expression that will upregulate genes involved in lactate removal. These genes include MCT1, cytochrome c oxidase (COX), and other enzymes involved in the lactate oxidation complex. Additionally, lactate will increase levels of peroxisome proliferator activated receptor gamma coactivator 1-alpha (PGC1-α), suggesting that lactate stimulates mitochondrial biogenesis.
Control of lipolysis
In addition to the role of the lactate shuttle in supplying NAD+ substrate for β-oxidation in the peroxisomes, the shuttle also regulates FFA mobilization by controlling plasma lactate levels. Research has demonstrated that lactate inhibits lipolysis in fat cells through activation of an orphan G-protein-coupled receptor (GPR81) that acts as a lactate sensor.
Role of lactate during exercise
As found by Brooks, et al., while lactate is disposed of mainly through oxidation and only a minor fraction supports gluconeogenesis, lactate is the main gluconeogenic precursor during sustained exercise.
Brooks demonstrated in his earlier studies that little difference in lactate production rates was seen between trained and untrained subjects at equivalent power outputs. What was seen, however, was more efficient clearance of lactate in the trained subjects, suggesting an upregulation of MCT protein.
Local lactate use depends on exercise exertion. During rest, approximately 50% of lactate disposal takes place through lactate oxidation, whereas during strenuous exercise (50–75% of VO2 max) approximately 75–80% of lactate is used by the active cell, indicating lactate's role as a major contributor to energy conversion during increased exercise exertion.
Clinical significance
Highly malignant tumors rely heavily on anaerobic glycolysis (metabolism of glucose to lactic acid even under ample tissue oxygen; Warburg effect) and thus need to efflux lactic acid via MCTs to the tumor micro-environment to maintain a robust glycolytic flux and to prevent the tumor from being "pickled to death". The MCTs have been successfully targeted in pre-clinical studies using RNAi and a small-molecule inhibitor alpha-cyano-4-hydroxycinnamic acid (ACCA; CHC) to show that inhibiting lactic acid efflux is a very effective therapeutic strategy against highly glycolytic malignant tumors.
In some tumor types, growth and metabolism rely on the exchange of lactate between glycolytic and rapidly respiring cells. This is of particular importance during tumor cell development, when cells often undergo anaerobic metabolism, as described by the Warburg effect. Other cells in the same tumor may have access to or recruit sources of oxygen (via angiogenesis), allowing them to undergo aerobic oxidation. The lactate shuttle could occur as the hypoxic cells anaerobically metabolize glucose and shuttle the lactate via MCT to the adjacent cells capable of using the lactate as a substrate for oxidation. Investigation into how MCT-mediated lactate exchange in targeted tumor cells can be inhibited, thereby depriving cells of key energy sources, could lead to promising new chemotherapeutics.
Additionally, lactate has been shown to be a key factor in tumor angiogenesis. Lactate promotes angiogenesis by upregulating HIF-1 in endothelial cells. Thus a promising target of cancer therapy is the inhibition of lactate export, through MCT-1 blockers, depriving developing tumors of an oxygen source.
References
Biochemical reactions
Cancer research
Cellular respiration
Chemical pathology
Gene expression
Hormones
Metabolic pathways
Respiratory physiology | Lactate shuttle hypothesis | [
"Chemistry",
"Biology"
] | 2,629 | [
"Cellular respiration",
"Biochemistry",
"Gene expression",
"Biochemical reactions",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Metabolic pathways",
"Chemical pathology",
"Metabolism"
] |
39,050,345 | https://en.wikipedia.org/wiki/Electrospark%20deposition | Electrospark deposition is a micro-welding manufacturing process typically used to repair damage to precision or valuable mechanical components such as injection moulding tools. This process may also be referred to as "spark hardening", "electrospark toughening" or "electrospark alloying".
References
Welding
Coatings | Electrospark deposition | [
"Chemistry",
"Engineering"
] | 66 | [
"Coatings",
"Welding",
"Mechanical engineering"
] |
39,050,866 | https://en.wikipedia.org/wiki/Protoflight | Protoflight is a portmanteau of "prototype" and "flight hardware". As defined by NASA Technical Standard NASA-STD-7002A, it refers to a strategy where no test-dedicated qualification article exists and all production (flight) hardware is intended for flight. An example of a program using protoflight methods is the Mars Orbiter Laser Altimeter project.
A protoflight approach carries higher technical risk than a full qualification test program, since it offers no demonstrated life capability over the anticipated life cycle of the hardware. It is, however, a technology development process that uses the higher risk tolerances, agile management practices, and quick responsiveness needed for certain prototype flight projects or missions.
Examples
Atmospheric Reentry Demonstrator
References
Aerospace | Protoflight | [
"Physics"
] | 155 | [
"Spacetime",
"Space",
"Aerospace"
] |
39,051,035 | https://en.wikipedia.org/wiki/Neutron%20Star%20Interior%20Composition%20Explorer | The Neutron Star Interior Composition ExploreR (NICER) is a NASA telescope on the International Space Station, designed and dedicated to the study of the extraordinary gravitational, electromagnetic, and nuclear physics environments embodied by neutron stars, exploring the exotic states of matter where density and pressure are higher than in atomic nuclei. As part of NASA's Explorer program, NICER enabled rotation-resolved spectroscopy of the thermal and non-thermal emissions of neutron stars in the soft X-ray (0.2–12 keV) band with unprecedented sensitivity, probing interior structure, the origins of dynamic phenomena, and the mechanisms that underlie the most powerful cosmic particle accelerators known. NICER achieved these goals by deploying, following the launch, and activation of X-ray timing and spectroscopy instruments. NICER was selected by NASA to proceed to formulation phase in April 2013.
NICER-SEXTANT uses the same instrument to test X-ray timing for positioning and navigation, and MXS is a test of X-ray timing communication. In January 2018, X-ray navigation was demonstrated using NICER on ISS.
In May 2023, NICER's thermal shields developed a leak that allowed stray light to enter the telescope. A repair kit containing specialized patches was delivered to the station by the Cygnus NG-21 resupply mission in August 2024; the patches will be applied to the shields by astronauts on a future spacewalk.
Launch
By May 2015, NICER was on track for a 2016 launch, having passed its critical design review (CDR) and resolved an issue with the power being supplied by the ISS. Following the loss of SpaceX CRS-7 in June 2015, which delayed future missions by several months, NICER was finally launched on 3 June 2017, with the SpaceX CRS-11 ISS resupply mission aboard a Falcon 9 v1.2 launch vehicle.
Science instrument
NICER's primary science instrument, called the X-ray Timing Instrument (XTI), is an array of 56 X-ray photon detectors. These detectors record the energies of the collected photons as well as their times of arrival. A Global Positioning System (GPS) receiver enables accurate timing and positioning measurements. X-ray photons can be time-tagged with a precision of less than 300 ns. In August 2022 a fast X-ray follow-up observation program named "OHMAN (On-orbit Hookup of MAXI and NICER)" was started with the MAXI instrument to detect sudden bursts in X-ray phenomena.
During each ISS orbit, NICER will observe two to four targets. Gimbaling and a star tracker allow NICER to track specific targets while collecting science data. In order to achieve its science objectives, NICER will take over 15 million seconds of exposures over an 18-month period.
X-ray navigation and communication experiments
An enhancement to the NICER mission, the Station Explorer for X-ray Timing and Navigation Technology (SEXTANT), will act as a technology demonstrator for X-ray pulsar-based navigation (XNAV) techniques that may one day be used for deep-space navigation.
XCOM
As part of NICER testing, a rapid-modulation X-ray device called the Modulated X-ray Source (MXS) was developed and used to create an X-ray communication system (XCOM) demonstration. The plan was for XCOM to transmit data encoded into X-ray bursts to the NICER platform, which could lead to the development of technologies allowing gigabit-bandwidth communication throughout the Solar System. The XCOM test was scheduled for spring 2019, and XCOM (including MXS) was delivered to the ISS in May 2019. After the test was complete, XCOM and the STP-H6 payload malfunctioned in September 2021; the hardware was removed in November 2021 and disposed of on Cygnus NG-16.
Selected results
In May 2018, NICER discovered an X-ray pulsar in the fastest stellar orbit yet discovered. The pulsar and its companion star were found to orbit each other every 38 minutes.
On 21 August 2019 (UTC; 20 August in the U.S.), NICER spotted the brightest X-ray burst so far observed. It came from the neutron star SAX J1808.4−3658 about 11,000 light-years from Earth in the constellation Sagittarius.
Astronomers using NICER found evidence that a neutron star from a low-mass X-ray binary in NGC 6624 is spinning at 716 Hz (716 rotations per second), or 42,960 revolutions per minute, the same rotation rate as the fastest known spinning neutron star, PSR J1748−2446ad, and the only one in such a binary system.
Gallery
See also
Explorer program
Chandra X-ray Observatory, NASA's flagship space observatory for X-rays, in orbit since 1999
List of X-ray space telescopes
Scientific research on the International Space Station
NuSTAR, NASA Explorer-class hard X-ray space observatory, in orbit since 2012
Rossi X-ray Timing Explorer, an X-ray timing space observatory, active 1995–2012
X-ray telescope
XMM-Newton, ESA X-ray space observatory, in orbit since 1999
References
External links
NICER website by NASA's Goddard Space Flight Center
NICER website at nasa.gov
NICER installation animations and videos
Components of the International Space Station
Explorers Program
International Space Station experiments
Neutron stars
Piggyback mission
Space telescopes
SpaceX payloads contracted by NASA
X-ray telescopes
2017 in spaceflight | Neutron Star Interior Composition Explorer | [
"Astronomy"
] | 1,141 | [
"Space telescopes"
] |
39,052,461 | https://en.wikipedia.org/wiki/Virtual%20home%20design%20software | Virtual home design software is a type of computer-aided design software intended to help architects, designers, and homeowners preview their design implementations on-the-fly. These products differ from traditional homeowner design software and other online design tools in that they use HTML5 to ensure that changes to the design occur rapidly. This category of software as a service puts an emphasis on usability, speed, and customization.
Background
Homeowners, contractors, and architects use virtual home exterior design software to help visualize changes to designs. Since virtual home design suites that use HTML5 are able to rapidly propagate changes to the home design, users can A/B test designs much more efficiently than with previous iterations of online design software.
Virtual home design software has found widespread usage among homeowners who have suffered property damage, as server-side, HTML5-based design software is ideal for homeowners who wish to see what certain products will look like on damaged areas of their houses.
Examples
Several manufacturers use virtual home design software to display their products online. These companies that utilize virtual home design software include GAF Materials Corporation, James Hardie, Exterior Portfolio, and CertainTeed. Some companies, such as Design My Exterior, have built virtual home design software that is not limited to products or brands in order to allow for greater flexibility by the end-user. Design My Exterior also uses ImageMapster in order to generate a greater range of options with less processing time.
Live Home 3D is a virtual home design software for Microsoft Windows and macOS.
Future applications
Several companies are experimenting with virtual reality for architecture. They design virtual homes and allow customers to walk around in them with the help of a VR headset (such as the Oculus Rift). This way, customers get a realistic, true-to-scale impression of the result.
References
Computer-aided design software
Architectural design
Interior design | Virtual home design software | [
"Engineering"
] | 387 | [
"Design",
"Architectural design",
"Architecture"
] |
39,053,073 | https://en.wikipedia.org/wiki/Crossover%20junction%20endodeoxyribonuclease | Crossover junction endodeoxyribonuclease, also known as Holliday junction resolvase, Holliday junction endonuclease, Holliday junction-cleaving endonuclease, Holliday junction-resolving endoribonuclease, crossover junction endoribonuclease, and cruciform-cutting endonuclease, is an enzyme involved in DNA repair and homologous recombination. Specifically, it performs endonucleolytic cleavage that results in single-stranded crossover between two homologous DNA molecules at the Holliday junction to produce recombinant DNA products for chromosomal segregation. This process is known as Holliday junction resolution.
Biological Function
The Holliday junction is a structure that forms during genetic recombination and links two double-stranded DNA molecules with a single-stranded crossover; such junctions form during mitotic and meiotic recombination. Crossover junction endodeoxyribonucleases catalyze Holliday junction resolution, which is the formation of separate recombinant DNA molecules and chromosomal separation after the crossover event at the Holliday junction. Crossover junction endodeoxyribonucleases with Holliday junction resolution function have been identified in all three domains of life: bacteria, archaea, and eukarya. RuvC in bacteria, CCE1 in Saccharomyces cerevisiae, and GEN1 in humans are all crossover junction endodeoxyribonucleases that perform Holliday junction resolution.
Crossover junction endodeoxyribonucleases also play key roles in DNA repair. During cell growth and meiosis, DNA double-strand breaks (DSBs) often occur, and are usually repaired by homologous recombination. Because crossover junction endodeoxyribonucleases perform Holliday junction resolution, a crucial step of homologous recombination, they are involved in the repair of DSBs.
Structure
E. coli RuvC, a crossover junction endodeoxyribonuclease, is a small protein of about 20 kD, and its active form is a dimer that requires and binds a magnesium ion. RuvC is a 3-layer alpha-beta sandwich with a beta-sheet between 5 alpha-helices. The enzyme contains two binding channels that contact the backbones of the Holliday junction over seven nucleotides. A Holliday junction resolvase enzyme has also been identified in the archaeon Pyrococcus furiosus; it is encoded by a gene called hjc and is composed of 123 amino acids.
Mechanism
These enzymes are highly selective for branched DNA, although induced fit occurs in the enzyme-substrate (resolvase–Holliday junction) complex formation. Much remains unknown about the exact mechanism of action, but it is known that bacteria, bacteriophages and archaea catalyze Holliday junction resolution by introducing symmetric nicks across the Holliday junction. Analysis of crossover junction endodeoxyribonucleases from bacteriophages (T7 endonuclease I), bacteria (RuvC), fungi (GEN1) and humans (hMus81-Eme1) has revealed that the enzymes function as dimers, and that part of the resolution reaction takes place in a partially dissociated enzyme-substrate intermediate.
Human Relevance
After a 20-year search, a human crossover junction endodeoxyribonuclease, GEN1, was finally identified in 2008. GEN1 performs similar functions and operates by similar mechanisms as the previously studied crossover junction endodeoxyribonucleases in bacteria, archaea, and other eukarya.
The enzyme is thought to play a role in Bloom's syndrome. It has been proposed that Bloom's syndrome involves the induction of DSBs via an unidentified Holliday junction resolvase. It has also been shown that overexpression of Holliday junction resolvase function is correlated with RAD51-overexpressing cancers.
References
External links
EC 3.1.22
Protein superfamilies | Crossover junction endodeoxyribonuclease | [
"Biology"
] | 934 | [
"Protein superfamilies",
"Protein classification"
] |
31,961,126 | https://en.wikipedia.org/wiki/Ocean%20temperature | The ocean temperature plays a crucial role in the global climate system, ocean currents and for marine habitats. It varies depending on depth, geographical location and season. Not only does the temperature differ in seawater, so does the salinity. Warm surface water is generally saltier than the cooler deep or polar waters. In polar regions, the upper layers of ocean water are cold and fresh. Deep ocean water is cold, salty water found deep below the surface of Earth's oceans. This water has a uniform temperature of around 0-3°C. The ocean temperature also depends on the amount of solar radiation falling on its surface. In the tropics, with the Sun nearly overhead, the temperature of the surface layers can rise to over . Near the poles the temperature in equilibrium with the sea ice is about .
There is a continuous large-scale circulation of water in the oceans. One part of it is the thermohaline circulation (THC). It is driven by global density gradients created by surface heat and freshwater fluxes. Warm surface currents cool as they move away from the tropics. This happens as the water becomes denser and sinks. Changes in temperature and density move the cold water back towards the equator as a deep sea current. Then it eventually wells up again towards the surface.
Ocean temperature as a term applies to the temperature in the ocean at any depth. It can also apply specifically to ocean temperatures that are not near the surface, in which case it is synonymous with deep ocean temperature.
It is clear that the oceans are warming as a result of climate change and this rate of warming is increasing. The upper ocean (above 700 m) is warming fastest, but the warming trend extends throughout the ocean. In 2022, the global ocean was the hottest ever recorded by humans.
Definition and types
Sea surface temperature
Deep ocean temperature
Experts refer to the temperature further below the surface as ocean temperature or deep ocean temperature. Ocean temperatures more than 20 metres below the surface vary by region and time. They contribute to variations in ocean heat content and ocean stratification. The increase of both ocean surface temperature and deeper ocean temperature is an important effect of climate change on oceans.
Deep ocean water is the name for cold, salty water found deep below the surface of Earth's oceans. Deep ocean water makes up about 90% of the volume of the oceans. Deep ocean water has a very uniform temperature of around 0-3°C. Its salinity is about 3.5% or 35 ppt (parts per thousand).
Relevance
Ocean temperature and dissolved oxygen concentrations have a big influence on many aspects of the ocean. These two key parameters affect the ocean's primary productivity, the oceanic carbon cycle, nutrient cycles, and marine ecosystems. They work in conjunction with salinity and density to control a range of processes. These include mixing versus stratification, ocean currents and the thermohaline circulation.
Ocean heat content
Experts calculate ocean heat content by using ocean temperatures at different depths.
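As a rough illustration, heat content per unit area of a water column can be estimated by integrating temperature anomalies over depth, $OHC = \rho c_p \int \Delta T(z)\,dz$. The Python sketch below shows this calculation; the density, specific heat, and temperature-profile values are illustrative assumptions, not measured data.

```python
import numpy as np

# Illustrative constants for seawater (assumed round values)
RHO = 1025.0   # density, kg/m^3
CP = 3990.0    # specific heat at constant pressure, J/(kg K)

def ocean_heat_content(depths_m, temp_anomaly_K):
    """Heat content anomaly per unit area (J/m^2), integrated over
    depth with the trapezoidal rule."""
    return RHO * CP * np.trapz(temp_anomaly_K, depths_m)

# Hypothetical warming profile: +0.5 K at the surface decaying to 0 at 700 m
z = np.linspace(0.0, 700.0, 71)
dT = 0.5 * (1.0 - z / 700.0)
print(f"OHC anomaly: {ocean_heat_content(z, dT):.3e} J/m^2")
```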
Measurements
There are various ways to measure ocean temperature. Below the sea surface, it is important to refer to the specific depth of measurement as well as the general temperature, because temperature varies considerably with depth. This is especially the case during the day, when low wind speed and abundant sunshine may lead to the formation of a warm layer at the ocean surface and large changes in temperature with depth. Experts call these strong daytime vertical temperature gradients a diurnal thermocline.
The basic technique involves lowering a device to measure temperature and other parameters electronically. This device is called CTD which stands for conductivity, temperature, and depth. It continuously sends the data up to the ship via a conducting cable. This device is usually mounted on a frame that includes water sampling bottles. Since the 2010s autonomous vehicles such as gliders or mini-submersibles have been increasingly available. They carry the same CTD sensors, but operate independently of a research ship.
Scientists can deploy CTD systems from research ships, on moorings, on gliders, and even on seals. With research ships they receive data through the conducting cable; for the other methods they use telemetry.
There are other ways of measuring sea surface temperature. At this near-surface layer measurements are possible using thermometers or satellites with spectroscopy. Weather satellites have been available to determine this parameter since 1967. Scientists created the first global composites during 1970.
The Advanced Very High Resolution Radiometer (AVHRR) is widely used to measure sea surface temperature from space.
There are various devices to measure ocean temperatures at different depths. These include the Nansen bottle, bathythermograph, CTD, or ocean acoustic tomography. Moored and drifting buoys also measure sea surface temperatures. Examples are those deployed by the Global Drifter Program and the National Data Buoy Center. The World Ocean Database Project is the largest database for temperature profiles from all of the world’s oceans.
A small test fleet of deep Argo floats aims to extend the measurement capability down to about 6000 meters. It will accurately sample temperature for a majority of the ocean volume once it is in full use.
The most frequent measurement technique on ships and buoys is thermistors and mercury thermometers. Scientists often use mercury thermometers to measure the temperature of surface waters. They can put them in buckets dropped over the side of a ship. To measure deeper temperatures they put them on Nansen bottles.
Monitoring through Argo program
Ocean warming
Trends
Causes
The cause of recent observed changes is the warming of the Earth due to human-caused emissions of greenhouse gases such as carbon dioxide and methane. Growing concentrations of greenhouse gases increases Earth's energy imbalance, further warming surface temperatures. The ocean takes up most of the added heat in the climate system, raising ocean temperatures.
Main physical effects
Increased stratification and lower oxygen levels
Higher air temperatures warm the ocean surface, and this leads to greater ocean stratification. Reduced mixing of the ocean layers stabilises warm water near the surface. At the same time it reduces cold, deep water circulation. The reduced up and down mixing reduces the ability of the ocean to absorb heat. This directs a larger fraction of future warming toward the atmosphere and land. Energy available for tropical cyclones and other storms is likely to increase. Nutrients for fish in the upper ocean layers are set to decrease. This is also likely to reduce the capacity of the oceans to store carbon.
Warmer water cannot contain as much oxygen as cold water. Increased thermal stratification may reduce the supply of oxygen from the surface waters to deeper waters. This would further decrease the water's oxygen content. This process is called ocean deoxygenation. The ocean has already lost oxygen throughout the water column. Oxygen minimum zones are expanding worldwide.
Changing ocean currents
Varying temperatures associated with sunlight and air temperatures at different latitudes cause ocean currents. Prevailing winds and the different densities of saline and fresh water are another cause of currents. Air tends to be warmed and thus rise near the equator, then cool and thus sink slightly further poleward. Near the poles, cool air sinks, but is warmed and rises as it then travels along the surface equatorward. The sinking and upwelling that occur in lower latitudes, and the driving force of the winds on surface water, mean the ocean currents circulate water throughout the entire sea. Global warming on top of these processes causes changes to currents, especially in the regions where deep water is formed.
In the geologic past
Scientists believe the sea temperature was much hotter in the Precambrian period. Such temperature reconstructions derive from oxygen and silicon isotopes from rock samples. These reconstructions suggest the ocean had a temperature of 55–85 °C, before cooling to milder temperatures of between 10 and 40 °C. Reconstructed proteins from Precambrian organisms also provide evidence that the ancient world was much warmer than today.
The Cambrian Explosion, approximately 538.8 million years ago, was a key event in the evolution of life on Earth. This event took place at a time when scientists believe sea surface temperatures reached about 60 °C. Such high temperatures are above the upper thermal limit of 38 °C for modern marine invertebrates and would appear to preclude a major biological revolution.
During the later Cretaceous period, average global temperatures reached their highest level in the last 200 million years or so. This was probably the result of the configuration of the continents during this period, which allowed for improved circulation in the oceans and discouraged the formation of large-scale ice sheets.
Data from an oxygen isotope database indicate that there have been seven global warming events during the geologic past. These include the Late Cambrian, Early Triassic, Late Cretaceous, and Paleocene-Eocene transition. The surface of the sea was about 5–30° warmer than today during these warming periods.
See also
References
Oceans
Coastal and oceanic landforms | Ocean temperature | [
"Physics",
"Chemistry",
"Mathematics",
"Environmental_science"
] | 1,811 | [
"Scalar physical quantities",
"Temperature",
"Thermodynamic properties",
"Applied and interdisciplinary physics",
"Physical quantities",
"Functions and mappings",
"Hydrology",
"Oceanography",
"SI base quantities",
"Intensive quantities",
"Mathematical objects",
"Vertical distributions",
"Mat... |
31,962,002 | https://en.wikipedia.org/wiki/Evaporator%20%28marine%29 | An evaporator, distiller or distilling apparatus is a piece of ship's equipment used to produce fresh drinking water from sea water by distillation. As fresh water is bulky, may spoil in storage, and is an essential supply for any long voyage, the ability to produce more fresh water in mid-ocean is important for any ship.
Early evaporators on sailing vessels
Although distillers are often associated with steam ships, their use pre-dates this. Obtaining fresh water from seawater is theoretically simple but, in practice, presented many difficulties. While there are numerous effective methods today, early desalination efforts had low yields and often could not produce potable water.
At first, only larger warships and some exploratory ships were fitted with distilling apparatus: a warship's large crew naturally needed a large supply of water, more than they could stow on board in advance. Cargo ships, with their smaller crews, merely carried their supplies with them. A selection of documented systems is as follows:
1539. Blasco de Garay.
1560. "Jornada de Los Gelves".
1578. Martin Frobisher. According to some authors, obtained fresh water from frozen seawater.
1717. A doctor from Nantes, M. Gauthier, proposed a still (not working well on the sea, with the rocking of the ship).
1763. Poissonier. Implemented a countercurrent water condenser.
1771. Method of Dr. Charles Irving, adopted by the British Royal Navy.
1771. Cook's Pacific exploration ship carried a distiller and did tests to check: coal consumption vs. amount of fresh water produced.
1783. Louis Antoine de Bougainville.
1805. Nelson's was fitted with distilling apparatus in her galley.
1817. Louis Claude de Saulces de Freycinet.
1821. Publication of the details of an apparatus for distilling aiguardente (a distilled spirit) in a continuous process, by the Catalan Joan Jordana i Elias. This still had many advantages over the previous ones and was quickly adopted in Catalonia.
Boiler feedwater
With the development of the marine steam engine, their boilers also required a continual supply of feedwater.
Early boilers used seawater directly, but this gave problems with the build-up of brine and scale. For efficiency, as well as conserving feedwater, marine engines have usually been condensing engines. By 1865, the use of an improved surface condenser permitted the use of fresh water feed, as the additional feedwater now required was only the small amount required to make up for losses, rather than the total passed through the boiler. Despite this, fresh water makeup to the feedwater system of a large warship under full power could still require up to 100 tons per day. Attention was also paid to de-aerating feedwater, to further reduce boiler corrosion.
The distillation system for boiler feedwater at this time was usually termed an evaporator, partly to distinguish it from a separate system or distiller used for drinking water. Separate systems were often used, especially in early systems, owing to the problem of contamination from oily lubricants in the feedwater system and because of the greatly different capacities required in larger ships. In time, the two functions became combined and the two terms were applied to the separate components of the system.
Potable water distillers
The first water supply by distillation of boiler steam appeared on early paddle steamers and used a simple iron box in the paddle boxes, cooled by water splash. A steam supply direct from the boiler, avoiding the engine and its lubricants, was led to them. With the development of steam heating jackets around the cylinders of engines such as the trunk engine, the exhaust from this source, again unlubricated, could be condensed.
Evaporators
Combined supply
The first distilling plants that boiled a separate water supply from that of the main boiler, appeared around 1867. These were not directly heated by a flame, but had a primary steam circuit using main boiler steam through coils within a steam drum or evaporator. The distillate from this vessel then passed to an adjacent vessel, the distilling condenser. As these evaporators used a 'clean' seawater supply directly, rather than contaminated water from the boiler circuit, they could be used to supply both feedwater and drinking water. These double distillers appeared around 1884. For security against failure, ships except the smallest were fitted with two sets.
Vacuum evaporators
Evaporators consume a great deal of steam, and thus fuel, in relation to the quantity of fresh water produced. Their efficiency is improved by working them at a partial vacuum, supplied by the main engine condensers. On modern diesel-powered ships, this vacuum can instead be produced by an ejector, usually worked by the output from the brine pump. Working under vacuum also reduces the temperature required to boil seawater and thus permits evaporators to be used with lower-temperature waste heat from the diesel cooling system.
Scale
One of the greatest operational problems with an evaporator is the build-up of scale. Its design is tailored to reduce this, and to make its cleaning as effective as possible. The usual design, as developed by Weir and the Admiralty, is for a vertical cylindrical drum, heated by steam-carrying drowned coils in the lower portion. As they are entirely submerged, they avoid the most active region for the deposition of scale, around the waterline. Each coil consists of one or two spirals in a flat plane. Each coil is easily removed for cleaning, being fastened by individual pipe unions through the side of the evaporator. A large door is also provided, allowing the coils to be removed or replaced. Cleaning may be carried out mechanically, with a manual scaling hammer. This also has a risk of mechanical damage to the tubes, as the slightest pitting tends to act as a nucleus for scale or corrosion. It is also common practice to break light scaling free by thermal shock, by passing steam through the coils without cooling water present or by heating the coils, then introducing cold seawater. In 1957, the trials ship , an obsolete heavy cruiser, was used for the first tests of the 'flexing element' distiller, where non-rigid heating coils flexed continually in service and so broke the scale free as soon as it formed a stiff layer.
Despite the obvious salinity of seawater, salt is not a problem for deposition until it reaches the saturation concentration. As this is around seven times that of seawater and evaporators are only operated to a concentration of two and a half times, this is not a problem in service.
A greater problem for scaling is the deposition of calcium sulphate. The saturation point for this compound decreases with temperature above , so that beginning from around a hard and tenacious deposit is formed.
To further control scale formation, equipment may be provided to automatically inject a weak citric acid solution into the seawater feed. The ratio is 1:1350, by weight of seawater.
Compound evaporators
Operation of an evaporator represents a costly consumption of main boiler steam, thus fuel. Evaporators for a warship must also be adequate to supply the boilers at continuous full power when required, even though this is rarely required. Varying the vacuum under which the evaporator works, and thus the boiling point of the feedwater, may optimise production for either maximum output, or better efficiency, depending on which is needed at the time. Greatest output is achieved when the evaporator operates at near atmospheric pressure and a high temperature (for saturated steam this will be at a limit of 100 °C), which may then have an efficiency of 0.87 kg of feedwater produced for each kg of steam supplied.
If condenser vacuum is increased to its maximum, evaporator temperature may be reduced to around 72 °C. Efficiency increases until the mass of feedwater produced almost equals that of the supplied steam, although production is now restricted to 86% of the previous maximum.
Evaporators are generally installed as a set, where two evaporators are coupled to a single distiller. For reliability, large ships will then have a pair of these sets. It is possible to arrange these sets of evaporators in either parallel or in series, for either maximum or most efficient production. This arranges the two evaporators so that the first operates at atmospheric pressure and high temperature (the maximum output case), but then uses the resultant hot output from the first evaporator to drive a second, running at maximum vacuum and low temperature (the maximum efficiency case). The overall output of feedwater may exceed the weight of steam first supplied, as up to 160% of it. Capacity is however reduced, to 72% of the maximum.
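As a rough worked example of this trade-off, using only the figures quoted above (0.87 kg of feedwater per kg of steam in the maximum-output case; series output of up to 160% of the steam supplied at 72% of maximum capacity), the two arrangements can be compared numerically. The steam quantity chosen below is illustrative.

```python
# Illustrative comparison of evaporator arrangements, using the figures
# quoted in the text; the steam quantity itself is an assumed value.
STEAM_SUPPLIED_KG = 1000.0  # kg of boiler steam fed to the evaporator set

# Parallel / maximum-output case: ~0.87 kg feedwater per kg of steam
parallel_output = 0.87 * STEAM_SUPPLIED_KG

# Series / maximum-efficiency case: up to ~160% of the steam supplied,
# but at only ~72% of the maximum production rate
series_output = 1.60 * STEAM_SUPPLIED_KG
series_rate_fraction = 0.72

print(f"Parallel: {parallel_output:.0f} kg feedwater (maximum rate)")
print(f"Series:   {series_output:.0f} kg feedwater "
      f"(at {series_rate_fraction:.0%} of maximum rate)")
```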
Evaporator pumps
The unevaporated seawater in an evaporator gradually becomes a concentrated brine and, like the early steam boilers with seawater feed, this brine must be intermittently blown down every six to eight hours and dumped overboard. Early evaporators were simply mounted high-up and dumped their brine by gravity. As the increasing complexity of surface condensers demanded better feedwater quality, a pump became part of the evaporator equipment. This pump had three combined functions as a seawater feed pump, a fresh water delivery pump and a brine extraction pump, each of progressively smaller capacity. The brine salinity was an important factor in evaporator efficiency: too dense encouraged scale formation, but too little represented a waste of heated seawater. The optimum operating salinity was thus fixed at three times that of seawater, and so the brine pump had to remove at least one third of the total feedwater supply rate. These pumps resembled the steam-powered reciprocating feedwater pumps already in service. They were usually produced by the well-known makers, such as G & J Weir. Vertical and horizontal pumps were used, although horizontal pumps were favoured as they encouraged the de-aeration of feedwater. Electrically powered rotary centrifugal pumps were later adopted, as more efficient and more reliable. There were initial concerns whether these would be capable of pumping brine against the vacuum of the evaporator and so there was also a transitional type where a worm gear-driven plunger pump for brine was driven from the rotary shaft.
Flash distillers
A later form of marine evaporator is the flash distiller. Heated seawater is pumped into a vacuum chamber, where it 'flashes' into pure water vapour. This is then condensed for further use.
As the use of vacuum reduces the vapour pressure, the seawater need only be raised to a temperature of . Both evaporator and distiller are combined into a single chamber, although most plants use two joined chambers, worked in series. The first chamber is worked at vacuum, the second at . Seawater is supplied to the distiller by a pump at around . The cold seawater passes through a condenser coil in the upper part of each chamber before being heated by steam in an external feedwater heater. The heated seawater enters the lower part of the first chamber, then drains over a weir and passes to the second chamber, encouraged by the differential vacuum between them. The brine produced by a flash distiller is only slightly concentrated and is pumped overboard continuously.
Fresh water vapour rises through the chambers and is condensed by the seawater coils. Baffles and catchment trays capture this water in the upper part of the chamber. Vacuum itself is maintained by steam ejectors.
The advantage of the flash distiller over the compound evaporator is its greater operating efficiency, in terms of heat supplied. This is due to working under vacuum, thus low temperature, and also the regenerative use of the condenser coils to pre-heat the seawater feed.
A limitation of the flash distiller is its sensitivity to seawater inlet temperature, as this affects the efficiency of the condenser coils. In tropical waters, the distiller flowrate must be throttled to maintain effective condensation. As these systems are more modern, they are generally fitted with an electric salinometer and some degree of automatic control.
Vapour-compression distillers
Diesel-powered motorships do not use steam boilers as part of their main propulsion system and so may not have steam supplies available to drive evaporators. Some do, as they use auxiliary boilers for non-propulsion tasks such as this. Such boilers may even be heat-recovery boilers that are heated by the engine exhaust.
Where no adequate steam supply is available, a vapour-compression distiller is used instead. This is driven mechanically, either electrically or by its own diesel engine.
Seawater is pumped into an evaporator, where it is boiled by a heating coil. Vapour produced is then compressed, raising its temperature. This heated vapour is used to heat the evaporator coils. Condensate from the coil outlet provides the fresh water supply. To start the cycle, an electric pre-heater is used to heat the first water supply. The main energy input to the plant is in mechanically driving the compressor, not as heat energy.
Both the fresh water production and the waste brine from the evaporator are led through an output cooler. This acts as a heat exchanger with the inlet seawater, pre-heating it to improve efficiency. The plant may operate at either a low pressure or slight vacuum, according to design. As the evaporator works at pressure, not under vacuum, boiling may be violent. To avoid the risk of priming and a carry over of saltwater into the vapour, the evaporator is divided by a bubble cap separator.
Submarines
Vapour-compression distillers were installed on US submarines shortly before World War 2. Early attempts had been made with evaporators running from diesel engine exhaust heat, but these could only be used when the submarine was running at speed on the surface. A further difficulty with submarines was the need to produce high-quality water for topping up their large storage batteries. Typical consumption on a war patrol was around per day for hotel services, drinking, cooking, washing etc. and also for replenishing the diesel engine cooling system. A further 500 gallons per week was required for the batteries. The standard Badger model X-1 for diesel submarines could produce 1,000 gallons per day. Tank capacity of 5,600 gallons (1,200 of which was battery water) was provided, around 10 days reserve. With the appearance of nuclear submarines and their plentiful electricity supply, even larger plants could be installed. The X-1 plant was designed so that it could be operated when snorkelling, or even when completely submerged. As the ambient pressure increased when submerged, and thus the boiling point, additional heat was required in these submarine distillers, and so they were designed to run with electric heat continuously.
See also
Chaplin's Patent Distilling Apparatus
Scuttlebutt
Notes
References
Bibliography
Marine
Water desalination
Watercraft components
Marine steam propulsion | Evaporator (marine) | [
"Chemistry",
"Engineering"
] | 3,141 | [
"Water desalination",
"Water treatment",
"Chemical equipment",
"Distillation",
"Evaporators",
"Water technology"
] |
31,962,997 | https://en.wikipedia.org/wiki/Multimedia%20fugacity%20model | Multimedia fugacity model is a model in environmental chemistry that summarizes the processes controlling chemical behavior in environmental media by developing and applying of mathematical statements or "models" of chemical fate.
Most chemicals have the potential to migrate from medium to medium. Multimedia fugacity models are utilized to study and predict the behavior of chemicals in different environmental compartments.
The models are formulated using the concept of fugacity, which was introduced by Gilbert N. Lewis in 1901 as a criterion of equilibrium and convenient method of calculating multimedia equilibrium partitioning.
The fugacity of chemicals is a mathematical expression that describes the rates at which chemicals diffuse, or are transported between phases. The transfer rate is proportional to the fugacity difference that exists between the source and destination phases.
For building the model, the initial step is to set up a mass balance equation for each phase in question that includes fugacities, concentrations, fluxes and amounts. The important values are the proportionality constant, called fugacity capacity expressed as Z-values (SI unit: mol/m3 Pa) for a variety of media, and transport parameters expressed as D-values (SI unit: mol/Pa h) for processes such as advection, reaction and intermedia transport. The Z-values are calculated using the equilibrium partitioning coefficients of the chemicals, Henry's law constant and other related physical-chemical properties.
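As an illustration of how Z-values are used, a Level I (closed-system equilibrium) calculation distributes a fixed amount of chemical so that all compartments share a common fugacity: $f = M_{total} / \sum_i V_i Z_i$ and $C_i = Z_i f$. The Python sketch below uses made-up compartment volumes and Z-values purely for illustration; they do not describe any particular chemical.

```python
# Level I fugacity calculation: equilibrium partitioning of a fixed
# amount of chemical among well-mixed compartments (illustrative values).
volumes_m3 = {"air": 1e14, "water": 2e11, "soil": 9e9, "sediment": 1e8}
z_values = {"air": 4e-4, "water": 1e-2, "soil": 1.0, "sediment": 2.0}  # mol/(m^3 Pa), assumed

total_mol = 1e6  # total amount of chemical in the system, mol (assumed)

# Common fugacity at equilibrium: f = M / sum(V_i * Z_i)
fugacity_pa = total_mol / sum(volumes_m3[c] * z_values[c] for c in volumes_m3)

for comp in volumes_m3:
    conc = z_values[comp] * fugacity_pa   # concentration, mol/m^3
    amount = conc * volumes_m3[comp]      # amount in compartment, mol
    print(f"{comp:>8}: C = {conc:.3e} mol/m^3, amount = {amount:.3e} mol")
print(f"fugacity = {fugacity_pa:.3e} Pa")
```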
Application of models
There are four levels of multimedia fugacity models applied for prediction of the fate and transport of organic chemicals in the multicompartmental environment:
Level I: equilibrium distribution of a fixed amount of chemical in a closed system, with no reaction or advection;
Level II: equilibrium, steady-state distribution with continuous input balanced by degradation and advective losses;
Level III: steady-state but non-equilibrium distribution, with transfer resistances between compartments;
Level IV: dynamic (non-equilibrium, non-steady-state) distribution in which concentrations change over time.
Depending on the number of phases and the complexity of the processes involved, models of different levels are applied. Many of the models apply to steady-state conditions and can be reformulated to describe time-varying conditions by using differential equations. The concept has been used to assess the relative propensity for chemicals to migrate from temperate zones and “condense out” at the polar regions.
The multicompartmental approach has been applied in the "quantitative water air sediment interaction" or "QWASI" model, designed to assist in understanding chemical fate in lakes. Another application is found in the POPCYCLING-BALTIC model, which describes the fate of persistent organic pollutants in the Baltic region.
References
Further reading
Chemical thermodynamics
Physical chemistry
Equilibrium chemistry
Chemical engineering thermodynamics
Environmental chemistry | Multimedia fugacity model | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 474 | [
"Applied and interdisciplinary physics",
"Chemical engineering",
"Environmental chemistry",
"Equilibrium chemistry",
"Chemical engineering thermodynamics",
"nan",
"Chemical thermodynamics",
"Physical chemistry"
] |
31,965,473 | https://en.wikipedia.org/wiki/Electroviscous%20effects | Electroviscous effects, in chemistry of colloids and surface chemistry, according to an IUPAC definition, are the effects of the particle surface charge on viscosity of a fluid.
Viscoelectric is an effect by which an electric field near a charged interface influences the structure of the surrounding fluid and affects the viscosity of the fluid.
The kinematic viscosity of a fluid, $\eta$, can be expressed as a function of the electric potential gradient (electric field), $E$, by an equation of the form:

$$\eta = \eta^{0} \left(1 + f E^{2}\right)$$

where $\eta^{0}$ is the viscosity in the absence of an electric field and $f$ is the viscoelectric coefficient of the fluid.
The value of f for water (ambient temperature) has been estimated to be (0.5–1.0) × 10−15 V−2 m2.
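A minimal numerical check of the magnitude of the effect, assuming the relation above with the upper estimate of $f$ quoted for water; the field strength used is an illustrative value of the kind found very close to a charged interface, not a measured quantity.

```python
# Fractional viscosity increase from the viscoelectric effect,
# eta = eta0 * (1 + f * E**2), using the upper estimate of f for water.
f_coeff = 1.0e-15   # viscoelectric coefficient, V^-2 m^2 (quoted upper value)
E_field = 1.0e7     # electric field, V/m (illustrative near-interface value)

relative_increase = f_coeff * E_field**2
print(f"eta/eta0 - 1 = {relative_increase:.2e}")  # 1.00e-01, i.e. ~10% increase
```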
See also
Constrictivity
Electrorheological fluid
Wien effect
References
Surface science | Electroviscous effects | [
"Physics",
"Chemistry",
"Materials_science"
] | 172 | [
"Condensed matter physics",
"Surface science"
] |
31,968,050 | https://en.wikipedia.org/wiki/Toshiba%20Pasopia%207 | Toshiba Pasopia 7 (also known as PA7007) is a computer from manufacturer Toshiba, released in 1983 and only available in Japan, with a price of $1350.
It was intended as the successor of the Toshiba Pasopia, offering improved sound and graphics. The machine is partially compatible with the original Pasopia, and supports connecting cartridge-type peripherals.
Graphic memory is increased to 48 KB and two SN76489 sound chips are available, producing six five-octave channels and two noise channels.
A new version of the operating system, T-BASIC7, is also available. This version is based on Microsoft BASIC and adds specific commands for this model, such as higher numerical precision or support for extra colors.
Available peripherals for the Pasopia 7 are a 5" disk drive, a Chinese characters ROM, a RS-232 interface and a printer. The keyboard is full-stroke JIS standard, with a separate numeric keypad and some function keys.
After 1988, some Pasopia 7 computers were donated to other countries (ex: Poland) under the "International Development of Computer Education Program".
Related models
Released in 1985, the Pasopia 700 is based on the Pasopia 7, and was intended as a home learning system developed by Toshiba and Obunsha. Two disk-drives were added to the side of the main unit and the keyboard is separate. This machine has two cartridge slots (one at the front).
Color palette
The Pasopia 7 uses hardware dithering to simulate intermediate color intensities, based on a mix of two of eight base RGB colors displayed using the 640 x 200 resolution. This allows the machine to display a maximum of 27 colors (3-level RGB).
Actual color limits depend on the graphic mode used:
Text mode: characters in 8 base colors, graphics in 4 colors (from 27);
Fine graphics mode: Kanji characters in 8 base colors, graphics in 8 colors (from 27);
Palette function: 8 or 4 colors (from 27) depending on the overlap of Kanji and graphics;
Hardware tiling function: 27 colors can be displayed by combining 2 pixels, with 8 base colors available per pixel.
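A small sketch of why pairing pixels yields 27 colors: averaging two pixels whose RGB channels each take the values 0 or 1 produces three levels (0, ½, 1) per channel, hence 3³ = 27 distinct colors. The snippet below only illustrates this counting argument; it is not an emulation of the Pasopia 7 hardware.

```python
from itertools import product

# The 8 base colors: every RGB combination with channel values 0 or 1.
base_colors = list(product((0, 1), repeat=3))

# Hardware tiling mixes two adjacent pixels; the perceived color is the
# per-channel average, giving levels 0, 0.5 and 1 in each channel.
mixed = {
    tuple((a + b) / 2 for a, b in zip(c1, c2))
    for c1 in base_colors
    for c2 in base_colors
}
print(len(base_colors), "base colors ->", len(mixed), "mixed colors")  # 8 -> 27
```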
See also
Toshiba Pasopia
Toshiba Pasopia 5
Toshiba Pasopia IQ
Toshiba Pasopia 16
References
Pasopia
Z80-based home computers
Computer-related introductions in 1983 | Toshiba Pasopia 7 | [
"Technology"
] | 505 | [
"Computing stubs",
"Computer hardware stubs"
] |
31,969,872 | https://en.wikipedia.org/wiki/Droplet%20vaporization | The vaporizing droplet (droplet vaporization) problem is a challenging issue in fluid dynamics. It is part of many engineering situations involving the transport and computation of sprays: fuel injection, spray painting, aerosol spray, flashing releases… In most of these engineering situations there is a relative motion between the droplet and the surrounding gas. The gas flow over the droplet has many features of the gas flow over a rigid sphere: pressure gradient, viscous boundary layer, wake. In addition to these common flow features one can also mention the internal liquid circulation phenomenon driven by surface-shear forces and the boundary layer blowing effect.
One of the key parameters characterizing the gas flow over the droplet is the droplet Reynolds number, based on the relative velocity, the droplet diameter and the gas phase properties. The features of the gas flow have a critical impact on the exchanges of mass, momentum and energy between the gas and the liquid phases and thus they have to be properly accounted for in any vaporizing droplet model.
As a first step it is worth investigating the simple case where there is no relative motion between the droplet and the surrounding gas. It will provide some useful insights on the physics involved in the vaporizing droplet problem. In a second step models used in engineering situations where a relative motion between the droplet and the surrounding exists are presented.
Single spherically symmetric droplet
In this section we assume that there is no relative motion between the droplet and the gas, and that the temperature inside the droplet is uniform (models that account for the non-uniformity of the droplet temperature are presented in the next section). The time evolution of the droplet radius, $r_d$, and droplet temperature, $T_d$, can be computed by solving the following set of ordinary differential equations:

$$\frac{dr_d}{dt} = -\frac{\dot{m}}{4 \pi \rho_l r_d^2}$$

$$\frac{dT_d}{dt} = \frac{Q_L}{\frac{4}{3} \pi r_d^3 \rho_l c_{p,l}}$$

where:
$\rho_l$ is the liquid density (kg.m−3)
$\dot{m}$ is the vaporization rate of the droplet (kg.s−1)
$c_{p,l}$ is the liquid specific heat at constant pressure (J.kg−1.K−1)
$Q_L$ is the heat flux entering the droplet (J.s−1)
The heat flux entering the droplet can be expressed as:

$$Q_L = Q_g - \dot{m} L_v$$

where:
$Q_g$ is the heat flux from the gas to the droplet surface (J.s−1)
$L_v$ is the latent heat of evaporation of the species considered (J.kg−1)
Analytical expressions for the droplet vaporization rate, , and for the heat flux are now derived. A single, pure, component droplet is considered and the gas phase is assumed to behave as an ideal gas. A spherically symmetric field exists for the gas field surrounding the droplet. Analytical expressions for and are found by considering heat and mass transfer processes in the gas film surrounding the droplet. The droplet vaporizes and creates a radial flow field in the gas film. The vapor from the droplet convects and diffuses away from the droplet surface. Heat conducts radially against the convection toward the droplet interface. This process is called Stefan convection or Stefan flow.
The gas phase conservation equations for mass, fuel-vapor mass fraction and energy are written in a spherical coordinate system:

$$\frac{\partial \rho}{\partial t} + \frac{1}{r^2} \frac{\partial}{\partial r}\left(\rho u r^2\right) = 0$$

$$\rho \frac{\partial Y_F}{\partial t} + \rho u \frac{\partial Y_F}{\partial r} = \frac{1}{r^2} \frac{\partial}{\partial r}\left(\rho D r^2 \frac{\partial Y_F}{\partial r}\right)$$

$$\rho \frac{\partial h}{\partial t} + \rho u \frac{\partial h}{\partial r} = \frac{1}{r^2} \frac{\partial}{\partial r}\left(k r^2 \frac{\partial T}{\partial r}\right) + \frac{1}{r^2} \frac{\partial}{\partial r}\left(\rho D r^2 \sum_{i=1}^{N} h_i \frac{\partial Y_i}{\partial r}\right)$$

where:
$\rho$ is the density of the gas phase (kg.m−3)
$r$ is the radial position (m)
$u$ is the Stefan velocity (m.s−1)
$Y_F$ is the fuel mass fraction in the gas film (-)
$D$ is the mass diffusivity (m2.s−1)
$h$ is the enthalpy of the gas (J.kg−1)
$T$ is the gas film temperature (K)
$k$ is the thermal conductivity of the gas (W.m−1.K−1)
$N$ is the number of species inside the gas phase, i.e. air + fuel (-)
It is assumed that the gas phase heat and mass transfer processes are quasi-steady and that the thermo-physical properties might be considered as constant. The assumption of quasi-steadiness of the gas phase finds its limitation in situations in which the gas film surrounding the droplet is in a near-critical state or in a situation in which the gas field is submitted to an acoustic field. The assumption of constant thermo-physical properties is found to be satisfying provided that the properties are evaluated at some reference conditions:

$$T_{ref} = T_s + A_r \left(T_\infty - T_s\right)$$

$$Y_{F,ref} = Y_{Fs} + A_r \left(Y_{F\infty} - Y_{Fs}\right)$$

where:
$T_{ref}$ is the reference temperature (K)
$T_s$ is the temperature at the droplet surface (K)
$T_\infty$ is the temperature of the gas far away from the droplet surface (K)
$Y_{F,ref}$ is the reference fuel mass fraction (-)
$Y_{Fs}$ is the fuel mass fraction at the droplet surface (-)
$Y_{F\infty}$ is the fuel mass fraction far away from the droplet surface (-)
The 1/3 averaging rule, $A_r = 1/3$, is often recommended in the literature
The conservation equation of mass simplifies to:

$$\rho u r^2 = \text{constant} = \frac{\dot{m}}{4 \pi}$$
Combining the conservation equations for mass and fuel vapor mass fraction, the following differential equation for the fuel vapor mass fraction is obtained:

$$\frac{\dot{m}}{4 \pi} \frac{dY_F}{dr} = \frac{d}{dr}\left(\rho D r^2 \frac{dY_F}{dr}\right)$$
Integrating this equation between $r = r_d$ and the ambient gas phase region and applying the boundary condition at $r = r_d$ gives the expression for the droplet vaporization rate:

$$\dot{m} = 4 \pi \rho_g D_g r_d \ln\left(1 + B_M\right)$$

and

$$B_M = \frac{Y_{Fs} - Y_{F\infty}}{1 - Y_{Fs}}$$

where:
$B_M$ is the Spalding mass transfer number
Phase equilibrium is assumed at the droplet surface and the mole fraction of fuel vapor at the droplet surface is obtained via the Clapeyron equation.
An analytical expression for the heat flux $Q_g$ is now derived. After some manipulations the conservation equation of energy writes:

$$\frac{\dot{m}}{4 \pi} \frac{dh_F}{dr} = \frac{d}{dr}\left(k r^2 \frac{dT}{dr}\right)$$

where:
$h_F$ is the enthalpy of the fuel vapor (J.kg−1)
Applying the boundary condition at the droplet surface and using the relation $Q_L = Q_g - \dot{m} L_v$ we have:

$$k r^2 \frac{dT}{dr} = \frac{\dot{m}}{4 \pi}\left[c_{pF}\left(T - T_s\right) + L_v + \frac{Q_L}{\dot{m}}\right]$$

where:
$c_{pF}$ is the specific heat at constant pressure of the fuel vapor (J.Kg−1.K−1)
Integrating this equation from $r_d$ to the ambient gas phase conditions ($r \rightarrow \infty$) gives the variation of the gas film temperature, $T(r)$, as a function of the radial distance:

$$\frac{c_{pF}\left(T - T_s\right) + L_v + Q_L/\dot{m}}{c_{pF}\left(T_\infty - T_s\right) + L_v + Q_L/\dot{m}} = \exp\left(-\frac{\dot{m} c_{pF}}{4 \pi k r}\right)$$
The above equation, evaluated at the droplet surface ($r = r_d$, $T = T_s$), provides a second expression for the droplet vaporization rate:

$$\dot{m} = \frac{4 \pi k_g r_d}{c_{pF}} \ln\left(1 + B_T\right)$$

and

$$B_T = \frac{c_{pF}\left(T_\infty - T_s\right)}{L_v + Q_L/\dot{m}}$$

where:
$B_T$ is the Spalding heat transfer number
Finally, combining the new expression for the droplet vaporization rate and the expression for the variation of the gas film temperature, the following equation is obtained for $Q_L$:

$$Q_L = \dot{m}\left(\frac{c_{pF}\left(T_\infty - T_s\right)}{B_T} - L_v\right)$$
Two different expressions for the droplet vaporization rate have been derived; hence, a relation exists between the Spalding mass transfer number and the Spalding heat transfer number, and writes:

$$B_T = \left(1 + B_M\right)^{\phi} - 1, \qquad \phi = \frac{c_{pF}}{c_{pg}} \frac{1}{Le}$$

where:
$Le = k_g/\left(\rho_g D_g c_{pg}\right)$ is the gas film Lewis number (-)
$c_{pg}$ is the gas film specific heat at constant pressure (J.Kg−1.K−1)
The droplet vaporization rate can be expressed as a function of the Sherwood number. The Sherwood number describes the non-dimensional mass transfer rate to the droplet and is defined as:

$$Sh = \frac{h_m d}{D_g}$$

where $h_m$ is the mass transfer coefficient and $d = 2 r_d$ is the droplet diameter. Thus, the expression for the droplet vaporization rate can be re-written as:

$$\dot{m} = 2 \pi \rho_g D_g r_d \, Sh \ln\left(1 + B_M\right)$$
Similarly, the conductive heat transfer from the gas to the droplet can be expressed as a function of the Nusselt number. The Nusselt number describes a non-dimensional heat transfer rate to the droplet and is defined as:

$$Nu = \frac{h_t d}{k_g}$$

where $h_t$ is the heat transfer coefficient, and then:

$$Q_g = 2 \pi k_g r_d \, Nu \left(T_\infty - T_s\right) \frac{\ln\left(1 + B_T\right)}{B_T}$$
In the limit where $B_T \rightarrow 0$ (a non-vaporizing droplet) we have $Nu = 2$ and $Q_g = 4 \pi k_g r_d \left(T_\infty - T_s\right)$, which corresponds to the classical heated sphere result.
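To make the spherically symmetric model concrete, the sketch below integrates the radius equation using the vaporization-rate expression $\dot{m} = 4 \pi \rho_g D_g r_d \ln(1 + B_M)$ at a fixed, assumed surface state (constant $B_M$, i.e. the d²-law regime). All property values are illustrative assumptions, not data for a particular fuel.

```python
import math

# Illustrative gas-film and liquid properties (assumed values)
RHO_G = 1.0        # gas density, kg/m^3
D_G = 1.0e-5       # mass diffusivity, m^2/s
RHO_L = 700.0      # liquid density, kg/m^3
B_M = 1.0          # Spalding mass transfer number (fixed surface state)

def mdot(r_d):
    """Spherically symmetric vaporization rate, kg/s."""
    return 4.0 * math.pi * RHO_G * D_G * r_d * math.log(1.0 + B_M)

# Explicit Euler integration of dr/dt = -mdot / (4 pi rho_l r^2)
r = 50e-6          # initial droplet radius, m
dt = 1e-5          # time step, s
t = 0.0
while r > 1e-6:
    r -= dt * mdot(r) / (4.0 * math.pi * RHO_L * r**2)
    t += dt
print(f"droplet consumed after ~{t*1e3:.1f} ms")

# Analytical d^2-law lifetime for comparison: d^2 decreases linearly with
# the evaporation constant K = 8 (rho_g / rho_l) D_g ln(1 + B_M)
K = 8.0 * (RHO_G / RHO_L) * D_G * math.log(1.0 + B_M)
print(f"d2-law estimate: ~{(2 * 50e-6)**2 / K * 1e3:.1f} ms")
```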
Single convective droplet
The relative motion between a droplet and the gas results in an increase of the heat and mass transfer rates in the gas film surrounding the droplet. A convective boundary layer and a wake can surround the droplet. Furthermore, the shear force on the liquid surface causes an internal circulation that enhances the heating of the liquid. As a consequence, the vaporization rate increases with the droplet Reynolds number. Many different models exist for the single convective droplet vaporization case. Vaporizing droplet models can be seen to belong to six different classes:
Constant droplet temperature model (d2-law)
Infinite liquid conductivity model
Spherically symmetric transient droplet heating model
Effective conductivity model
Vortex model of droplet heating
Navier-Stokes solution
The main difference between all these models is the treatment of the heating of the liquid phase which is usually the rate controlling phenomenon in droplet vaporization. The first three models do not consider internal liquid circulation. The effective conductivity model (4) and the vortex model of droplet heating (5) account for internal circulation and internal convective heating. The direct resolution of the Navier-Stokes equations provide, in principle, exact solutions both for the gas phase and the liquid phase.
Model (1) is a simplification of model (2) which is in turn a simplification of model (3). The spherically symmetric transient droplet heating model (3) solves the equation for heat diffusion through the liquid phase. A droplet heating time τh can be defined as the time required for a thermal diffusion wave to penetrate from the droplet surface to its center. The droplet heating time is compared to the droplet lifetime, τl. If the droplet heating time is short compared to the droplet lifetime we can assume that the temperature field inside the droplet is uniform and model (2) is obtained. In the infinite liquid conductivity model (2) the temperature of the droplet is uniform but varies with time. It is possible to go one step further and find the conditions for which we can neglect the temporal variation of the droplet temperature. The liquid temperature varies in time until the wet-bulb temperature is reached. If the wet-bulb temperature is reached in a time of the same order of magnitude as the droplet heating time, then the liquid temperature can be considered to be constant with regard to time; model (1), the d2-law, is obtained.
The infinite liquid conductivity model is widely used in industrial spray calculations: for its balance between computational costs and accuracy. To account for the convective effects which enhanced the heat and mass transfer rates around the droplet, a correction is applied to the spherically symmetric expressions of the Sherwood and Nusselt numbers
Abramzon and Sirignano suggest the following formulation for the modified Sherwood and Nusselt numbers:

$$Sh^* = 2 + \frac{Sh_0 - 2}{F\left(B_M\right)}, \qquad Nu^* = 2 + \frac{Nu_0 - 2}{F\left(B_T\right)}$$

where the correction factor

$$F(B) = \left(1 + B\right)^{0.7} \frac{\ln\left(1 + B\right)}{B}$$

accounts for surface blowing, which results in a thickening of the boundary layer surrounding the droplet.
$Sh_0$ and $Nu_0$ can be found from the well-known Frössling, or Ranz-Marshall, correlations:

$$Sh_0 = 2 + 0.552 \, Re^{1/2} Sc^{1/3}, \qquad Nu_0 = 2 + 0.552 \, Re^{1/2} Pr^{1/3}$$
where
$Sc$ is the Schmidt number,
$Pr$ is the Prandtl number,
$Re$ is the Reynolds number.
The expressions above show that the heat and mass transfer rates increase with increasing Reynolds number.
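A minimal sketch of the convective correction described above, combining the Frössling correlations with the Abramzon–Sirignano film correction; the flow conditions and transfer numbers passed in are illustrative assumptions.

```python
import math

def film_correction(B):
    """Abramzon-Sirignano correction F(B) = (1 + B)^0.7 * ln(1 + B) / B."""
    return (1.0 + B) ** 0.7 * math.log(1.0 + B) / B

def modified_sh_nu(Re, Sc, Pr, B_M, B_T):
    """Modified Sherwood and Nusselt numbers for a vaporizing droplet."""
    Sh0 = 2.0 + 0.552 * math.sqrt(Re) * Sc ** (1.0 / 3.0)  # Frossling
    Nu0 = 2.0 + 0.552 * math.sqrt(Re) * Pr ** (1.0 / 3.0)
    Sh_star = 2.0 + (Sh0 - 2.0) / film_correction(B_M)
    Nu_star = 2.0 + (Nu0 - 2.0) / film_correction(B_T)
    return Sh_star, Nu_star

# Illustrative conditions: moderate slip velocity, unity-order transfer numbers
Sh_star, Nu_star = modified_sh_nu(Re=50.0, Sc=0.7, Pr=0.7, B_M=1.0, B_T=0.8)
print(f"Sh* = {Sh_star:.2f}, Nu* = {Nu_star:.2f}")  # both > 2: convection enhances transfer
```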
References
Fluid dynamics | Droplet vaporization | [
"Chemistry",
"Engineering"
] | 2,051 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
24,896,900 | https://en.wikipedia.org/wiki/R%C3%B8mer%27s%20determination%20of%20the%20speed%20of%20light | Rømer's determination of the speed of light was the demonstration in 1676 that light has an apprehensible, measurable speed and so does not travel instantaneously. The discovery is usually attributed to Danish astronomer Ole Rømer, who was working at the Royal Observatory in Paris at the time.
By timing the eclipses of Jupiter's moon Io, Rømer estimated that light would take about 22 minutes to travel a distance equal to the diameter of Earth's orbit around the Sun. Using modern orbits, this would imply a speed of light of 226,663 kilometres per second, 24.4% lower than the true value of 299,792 km/s. In his calculations Rømer used the idea, borne out by his observations, that the apparent time between eclipses is greater while the Earth is moving away from Jupiter and smaller while it is moving closer.
Rømer's theory was controversial at the time that he announced it and he never convinced the director of the Paris Observatory, Giovanni Domenico Cassini, to fully accept it. However, it quickly gained support among other natural philosophers of the period such as Christiaan Huygens and Isaac Newton. It was finally confirmed nearly two decades after Rømer's death, with the explanation in 1729 of stellar aberration by the English astronomer James Bradley.
Background
The determination of east–west positioning (longitude) was a significant practical problem in cartography and navigation before the 1700s. In 1598 Philip III of Spain had offered a prize for a method to determine the longitude of a ship out of sight of land. Galileo proposed a method of establishing the time of day, and thus longitude, based on the times of the eclipses of the moons of Jupiter, in essence using the Jovian system as a cosmic clock; this method was not significantly improved until accurate mechanical clocks were developed in the eighteenth century. Galileo proposed this method to the Spanish crown in 1616–1617 but it proved to be impractical, not least because of the difficulty of observing the eclipses from a ship. However, with refinements the method could be made to work on land.
The Italian astronomer Giovanni Domenico Cassini had pioneered the use of the eclipses of the Galilean moons for longitude measurements, and published tables predicting when eclipses would be visible from a given location. He was invited to France by Louis XIV to set up the Royal Observatory, which opened in 1671 with Cassini as director, a post he would hold for the rest of his life.
One of Cassini's first projects at his new post in Paris was to send Frenchman Jean Picard to the site of Tycho Brahe's old observatory at Uraniborg, on the island of Hven near Copenhagen. Picard was to observe and time the eclipses of Jupiter's moons from Uraniborg while Cassini recorded the times they were seen in Paris. If Picard recorded the end of an eclipse at 9 hours 43 minutes 54 seconds after midday in Uraniborg, while Cassini recorded the end of the same eclipse at 9 hours 1 minute 44 seconds after midday in Paris – a difference of 42 minutes 10 seconds – the difference in longitude could be calculated to be 10° 32' 30". Picard was helped in his observations by a young Dane who had recently completed his studies at the University of Copenhagen – Ole Rømer – and he must have been impressed by his assistant's skills, as he arranged for the young man to come to Paris to work at the Royal Observatory there.
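The longitude arithmetic in this example is a direct conversion at 15° of longitude per hour of time difference; a quick check in Python, using the timings quoted above:

```python
# Longitude difference from the difference in local times of the same
# eclipse: the Earth turns 360 degrees in 24 hours, i.e. 15 degrees/hour.
def longitude_from_time_diff(h, m, s):
    hours = h + m / 60 + s / 3600
    return hours * 15.0  # degrees

deg = longitude_from_time_diff(0, 42, 10)  # 42 min 10 s, Uraniborg vs Paris
d = int(deg)
arcmin = (deg - d) * 60
m_int = int(arcmin)
arcsec = (arcmin - m_int) * 60
print(f"{d} deg {m_int}' {arcsec:.0f}\"")  # -> 10 deg 32' 30"
```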
Eclipses of Io
Io is the innermost of the four moons of Jupiter discovered by Galileo in January 1610. Rømer and Cassini refer to it as the "first satellite of Jupiter". It orbits Jupiter once every 42½ hours, and the plane of its orbit is very close to the plane of Jupiter's orbit around the sun. This means that it passes some of each orbit in the shadow of Jupiter – an eclipse.
Viewed from the Earth, an eclipse of Io is seen in one of two ways.
Io suddenly disappears, as it moves into the shadow of Jupiter. This is termed an immersion.
Io suddenly reappears, as it moves out of the shadow of Jupiter. This is termed an emergence.
From the Earth, the immersion and the emergence cannot be observed for the same eclipse of Io, because one or the other will be hidden (occulted) by Jupiter itself. At the point of opposition (point H in the diagram below), both the immersion and the emergence would be hidden by Jupiter.
For about four months before the opposition (from F to G), immersions of Io into Jupiter's shadow can be observed, and about four months after the opposition of Jupiter (from L to K in the diagram below), emergences of Io from its eclipses can be observed. For about five or six months of the year, around the point of conjunction, eclipses of Io cannot be observed at all, because the view of Jupiter is too close to the sun. Even during the periods before and after opposition, many eclipses of Io can not be observed from a given location on the Earth's surface: some will occur during the daytime and some will occur while Jupiter is below the horizon (hidden by the Earth itself).
The key phenomenon that Rømer observed was that the time between eclipses was not constant, but varied slightly over the year. He was fairly confident that the orbital period of Io was not actually changing, so he deduced that the change was a consequence of changes in the distance between Earth and Jupiter. The orbital paths of Earth and Jupiter were available to him, and by referring to them he noticed that during periods in which Earth and Jupiter were moving away from each other the interval between eclipses always increased, whereas when Earth and Jupiter were moving toward each other, the interval between eclipses decreased. Rømer reasoned that these observations could be explained by a finite speed of light, whose effect he went on to quantify.
Observations
Most of Rømer's papers were destroyed in the Copenhagen Fire of 1728, but one manuscript that survived contains a listing of about sixty observations of eclipses of Io from 1668 to 1678. In particular, it details two series of observations on either side of the oppositions of 2 March 1672 and 2 April 1673. Rømer comments in a letter to Christiaan Huygens dated 30 September 1677 that these observations from 1671 to 1673 form the basis for his calculations.
The surviving manuscript was written some time after January 1678, the date of the last recorded astronomical observation (an emergence of Io on 6 January), and so was later than Rømer's letter to Huygens. Rømer appears to have been collecting data on eclipses of the Galilean moons in the form of an aide-mémoire, possibly as he was preparing to return to Denmark in 1681. The document also records the observations around the opposition of 8 July 1676 that formed the basis for the announcement of Rømer's results.
Initial announcement
On 22 August 1676, Cassini made an announcement to the Royal Academy of Sciences in Paris that he would be changing the basis of calculation for his tables of eclipses of Io. He may also have stated the reason:
This second inequality appears to be due to light taking some time to reach us from the satellite; light seems to take about ten to eleven minutes [to cross] a distance equal to the half-diameter of the terrestrial orbit.
Most importantly, Rømer announced the prediction that the emergence of Io on 16 November 1676 would be observed about ten minutes later than would have been calculated by the previous method. There is no record of any observation of an emergence of Io on 16 November, but an emergence was observed on 9 November. With this experimental evidence in hand, Rømer explained his new method of calculation to the Royal Academy of Sciences on 21 November.
The original record of the meeting of the Royal Academy of Sciences has been lost, but Rømer's presentation was recorded as a news report in the Journal des sçavans on 7 December. This anonymous report was translated into English and published in Philosophical Transactions of the Royal Society in London on 25 July 1677.
Rømer's reasoning
Order of magnitude
Rømer starts with an order of magnitude demonstration that the speed of light must be so great that it takes much less than one second to travel a distance equal to Earth's diameter.
The point L on the diagram represents the second quadrature of Jupiter, when the angle between Jupiter and the Sun (as seen from Earth) is 90°. Rømer assumes that an observer could see an emergence of Io at the second quadrature (L), and the emergence which occurs after one orbit of Io around Jupiter (when the Earth is taken to be at point K, the diagram not being to scale), that is 42½ hours later. During those 42½ hours, the Earth has moved farther away from Jupiter by the distance LK: this, according to Rømer, is 210 times the Earth's diameter. If light travelled at a speed of one Earth-diameter per second, it would take 3½ minutes to travel the distance LK. And if the period of Io's orbit around Jupiter were taken as the time difference between the emergence at L and the emergence at K, the value would be 3½ minutes longer than the true value.
Rømer then applies the same logic to observations around the first quadrature (point G), when Earth is moving towards Jupiter. The time difference between an immersion seen from point F and the next immersion seen from point G should be 3½ minutes shorter than the true orbital period of Io. Hence, there should be a difference of about 7 minutes between the periods of Io measured at the first quadrature and those measured at the second quadrature. In practice, no difference is observed, from which Rømer concludes that the speed of light must be very much greater than one Earth-diameter per second.
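The numbers in this argument are easy to verify. A minimal sketch, using the 210 Earth-diameter figure for LK quoted above:

```python
# Rømer's test hypothesis: light travels one Earth-diameter per second.
LK = 210                  # distance LK in Earth-diameters
delay = LK / 1            # seconds of extra light-travel time over LK
print(delay / 60)         # 3.5 -> the 3.5-minute lengthening near L

# The approaching side (F to G) gives an equal shortening, so the two
# quadratures should disagree by about 7 minutes -- which is not observed,
# so light must be much faster than one Earth-diameter per second.
```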
Cumulative effect
Rømer realised that any effect of the finite speed of light would add up over a long series of observations, and it is this cumulative effect that he announced to the Royal Academy of Sciences in Paris. The effect can be illustrated with Rømer's observations from spring 1672.
Jupiter was in opposition on 2 March 1672: the first observations of emergences were on 7 March (at 07:58:25) and 14 March (at 09:52:30). Between the two observations, Io had completed four orbits of Jupiter, giving an orbital period of 42 hours 28 minutes 31¼ seconds.
The last emergence observed in the series was on 29 April (at 10:30:06). By this time, Io had completed thirty orbits around Jupiter since 7 March: the apparent orbital period is 42 hours 29 minutes 3 seconds. The difference seems tiny – 32 seconds – but it meant that the emergence on 29 April was occurring a quarter-hour after it would have been predicted. The only alternative explanation was that the observations on 7 and 14 March were wrong by two minutes.
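These figures can be reproduced directly from the quoted timings; the following sketch is purely illustrative:

```python
from datetime import datetime

t0 = datetime(1672, 3, 7, 7, 58, 25)    # first emergence
t1 = datetime(1672, 3, 14, 9, 52, 30)   # four orbits later
t2 = datetime(1672, 4, 29, 10, 30, 6)   # thirty orbits after t0

p4 = (t1 - t0) / 4        # 42 h 28 m 31.25 s, the early-March period
p30 = (t2 - t0) / 30      # 42 h 29 m 3 s, the apparent mean period

# Accumulated delay of the 29 April emergence relative to a prediction
# made with the early-March period: about 16 minutes (the "quarter-hour").
print(p4, p30, 30 * (p30 - p4))
```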
Prediction
Rømer never published the formal description of his method, possibly because of the opposition of Cassini and Picard to his ideas (see below). However, the general nature of his calculation can be inferred from the news report in the Journal des sçavans and from Cassini's announcement on 22 August 1676.
Cassini announced that the new tables would
contain the inequality of the days or the true motion of the Sun [i.e. the inequality due to the eccentricity of the Earth’s orbit], the eccentric motion of Jupiter [i.e. the inequality due to the eccentricity of the orbit of Jupiter] and this new, not previously detected, inequality [i.e. due to the finite speed of light].
Hence Cassini and Rømer appear to have been calculating the times of each eclipse based on the approximation of circular orbits, and then applying three successive corrections to estimate the time that the eclipse would be observed in Paris.
The three "inequalities" (or irregularities) listed by Cassini were not the only ones known, but they were the ones that could be corrected for by calculation. The orbit of Io is also slightly irregular because of orbital resonance with Europa and Ganymede, two of the other Galilean moons of Jupiter, but this would not be fully explained for another century. The only solution available to Cassini and to other astronomers of his time was to issue periodic corrections to the tables of eclipses of Io to take account of its irregular orbital motion: periodically resetting the clock, as it were. The obvious time to reset the clock was just after the opposition of Jupiter to the Sun, when Jupiter is at its closest to Earth and so most easily observable.
The opposition of Jupiter to the Sun occurred on or around 8 July 1676. Rømer's aide-mémoire lists two observations of emergences of Io after this opposition but before Cassini's announcement: on 7 August at 09:44:50 and on 14 August at 11:45:55. With these data, and knowing the orbital period of Io, Cassini could calculate the times of each of the eclipses over the next four to five months.
The next step in applying Rømer's correction is to calculate the position of Earth and Jupiter in their orbits for each of the eclipses. This sort of coordinate transformation was commonplace in preparing tables of positions of the planets for both astronomy and astrology: it is equivalent to finding each of the positions L (or K) for the various eclipses which might be observable.
Finally, the distance between Earth and Jupiter can be calculated using standard trigonometry, in particular the law of cosines, knowing two sides (distance between the Sun and Earth; distance between the Sun and Jupiter) and one angle (the angle between Jupiter and Earth as formed at the Sun) of a triangle. The distance from the Sun to Earth was not well known at the time, but taking it as a fixed value a, the distance from the Sun to Jupiter can be calculated as some multiple of a.
This model left just one adjustable parameter – the time taken for light to travel a distance equal to a, the radius of Earth's orbit. Rømer had about thirty observations of eclipses of Io from 1671 to 1673 that he used to find the value which fitted best: eleven minutes. With that value, he could calculate the extra time it would take light to reach Earth from Jupiter in November 1676 compared to August 1676: about ten minutes.
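The structure of the calculation can be sketched in a few lines. Everything below is a modern illustration, not Rømer's actual procedure: distances are in units of a, the angles are invented round numbers, and only the 11-minute light-time per a is Rømer's fitted value:

```python
from math import cos, radians, sqrt

def earth_jupiter_distance(r_jup, sun_angle_deg):
    # Law of cosines with sides 1 (Sun-Earth) and r_jup (Sun-Jupiter, in a)
    # and the Sun-centred angle between Earth and Jupiter.
    return sqrt(1 + r_jup**2 - 2 * r_jup * cos(radians(sun_angle_deg)))

LIGHT_TIME_PER_A = 11.0      # minutes, Rømer's fitted value

d_aug = earth_jupiter_distance(5.2, 30)     # shortly after opposition
d_nov = earth_jupiter_distance(5.2, 100)    # about three months later

print((d_nov - d_aug) * LIGHT_TIME_PER_A)
# ~12 minutes -- the order of the ten-minute delay Rømer announced
```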
Initial reactions
Rømer's explanation of the difference between predicted and observed timings of Io's eclipses was widely, but far from universally, accepted. Huygens was an early supporter, especially as it supported his ideas about refraction, and wrote to the French Controller-General of Finances Jean-Baptiste Colbert in Rømer's defence. However Cassini, Rømer's superior at the Royal Observatory, was an early and tenacious opponent of Rømer's ideas, and it seems that Picard, Rømer's mentor, shared many of Cassini's doubts.
Cassini's practical objections stimulated much debate at the Royal Academy of Sciences (with Huygens participating by letter from London). Cassini noted that the other three Galilean moons did not seem to show the same effect as seen for Io, and that there were other irregularities which could not be explained by Rømer's theory. Rømer replied that it was much more difficult to accurately observe the eclipses of the other moons, and that the unexplained effects were much smaller (for Io) than the effect of the speed of light: however, he admitted to Huygens that the unexplained "irregularities" in the other satellites were larger than the effect of the speed of light. The dispute had something of a philosophical note: Rømer claimed that he had discovered a simple solution to an important practical problem, while Cassini rejected the theory as flawed as it could not explain all the observations. Cassini was forced to include "empirical corrections" in his 1693 tables of eclipses, but never accepted the theoretical basis: indeed, he chose different correction values for the different moons of Jupiter, in direct contradiction with Rømer's theory.
Rømer's ideas received a much warmer reception in England. Although Robert Hooke (1635–1703) dismissed the supposed speed of light as so large as to be virtually instantaneous, the Astronomer Royal John Flamsteed (1646–1719) accepted Rømer's hypothesis in his ephemerides of eclipses of Io. Edmond Halley (1656–1742), a future Astronomer Royal, was an early and enthusiastic supporter. Isaac Newton (1643–1727) accepted Rømer's idea, giving a value of "seven or eight minutes" in his 1704 book Opticks for light to travel from the Sun to Earth, closer to the true value (8 minutes 19 seconds) than Rømer's initial estimate of 11 minutes. Newton notes that Rømer's observations had been confirmed by others, presumably meaning Flamsteed and Halley in Greenwich.
While it was difficult for people such as Hooke to conceive of the enormous speed of light, acceptance of Rømer's idea suffered a second handicap in that it was based on Kepler's model of the planets orbiting the Sun in elliptical orbits. While Kepler's model had widespread acceptance by the late seventeenth century, it was still considered sufficiently controversial for Newton to spend several pages discussing the observational evidence in favour of that model in his Philosophiæ Naturalis Principia Mathematica (1687).
Rømer's view that the velocity of light was finite was not fully accepted until measurements of stellar aberration were made in 1727 by James Bradley (1693–1762). Bradley, who succeeded Halley as Astronomer Royal, calculated a value of 8 minutes 13 seconds for light to travel from the Sun to Earth. Ironically, stellar aberration had first been observed by Cassini and (independently) by Picard in 1671, but neither astronomer was able to give an explanation for the phenomenon. Bradley's work laid to rest any remaining serious objections to the Keplerian model of the Solar System.
Later measurements
Swedish astronomer Pehr Wilhelm Wargentin (1717–83) used Rømer's method in the preparation of his ephemerides of Jupiter's moons (1746), as did Giovanni Domenico Maraldi working in Paris. The remaining irregularities in the orbits of the Galilean moons would not be satisfactorily explained until the work of Joseph Louis Lagrange (1736–1813) and Pierre-Simon Laplace (1749–1827) on orbital resonance.
In 1809, again making use of observations of Io, but this time with the benefit of more than a century of increasingly precise observations, the astronomer Jean Baptiste Joseph Delambre (1749–1822) reported the time for light to travel from the Sun to the Earth as 8 minutes 12 seconds. Depending on the value assumed for the astronomical unit, this yields the speed of light as just a little more than 300,000 kilometres per second.
The first measurements of the speed of light using completely terrestrial apparatus were published in 1849 by Hippolyte Fizeau (1819–96). Compared to values accepted today, Fizeau's result (about 313,000 kilometres per second) was too high, and less accurate than those obtained by Rømer's method. It would be another thirty years before A. A. Michelson in the United States published his more precise results (299,910±50 km/s) and Simon Newcomb confirmed the agreement with astronomical measurements, almost exactly two centuries after Rømer's announcement.
Later discussion
Did Rømer measure the speed of light?
Several discussions have suggested that Rømer should not be credited with the measurement of the speed of light, as he never gave a value in Earth-based units. These authors credit Huygens with the first calculation of the speed of light.
Huygens's estimate was a value of 110,000,000 toises per second: as the toise was later determined to be just under two metres, this corresponds to roughly 220,000 kilometres per second in SI units.
However, Huygens's estimate was not a precise calculation but rather an illustration at an order of magnitude level. The relevant passage from his Traité de la lumière (Treatise on Light) reads:
If one considers the vast size of the diameter KL, which according to me is some 24 thousand diameters of the Earth, one will acknowledge the extreme velocity of Light. For, supposing that KL is no more than 22 thousand of these diameters, it appears that being traversed in 22 minutes this makes the speed a thousand diameters in one minute, that is 16-2/3 diameters in one second or in one beat of the pulse, which makes more than 11 hundred times a hundred thousand toises;
Huygens was obviously not concerned about the 9% difference between his preferred value for the distance from the Sun to Earth and the one he uses in his calculation. Nor was there any doubt in Huygens's mind as to Rømer's achievement, as he wrote to Colbert (emphasis added):
I have seen recently, with much pleasure, the beautiful discovery of Mr. Romer, to demonstrate that light takes time in propagating, and even to measure this time;
Neither Newton nor Bradley bothered to calculate the speed of light in Earth-based units. The next recorded calculation was probably made by Fontenelle: claiming to work from Rømer's results, the historical account of Rømer's work written some time after 1707 gives a value of 48203 leagues per second. This is 16.826 Earth-diameters (214,636 km) per second.
Doppler method
It has also been suggested that Rømer was measuring a Doppler effect. The original effect discovered by Christian Doppler 166 years later refers to propagating electromagnetic waves. The generalization referred to here is the change in observed frequency of an oscillator (in this case, Io orbiting around Jupiter) when the observer (in this case, on Earth's surface) is moving: the frequency is higher when the observer is moving towards the oscillator and lower when the observer is moving away from the oscillator. This apparently anachronistic analysis implies that Rømer was measuring the ratio v/c, where c is the speed of light and v is the Earth's orbital velocity (strictly, the component of the Earth's orbital velocity parallel to the Earth–Jupiter vector), and indicates that the major inaccuracy of Rømer's calculations was his poor knowledge of the orbit of Jupiter.
There is no evidence that Rømer thought that he was measuring v/c: he gives his result as the time of 22 minutes for light to travel a distance equal to the diameter of Earth's orbit or, equivalently, 11 minutes for light to travel from the Sun to Earth. It can be readily shown that the two measurements are equivalent: if we give τ as the time taken for light to cross the radius a of an orbit (e.g. from the Sun to Earth) and P as the orbital period (the time for one complete rotation), then, since v = 2πa/P and c = a/τ,

v/c = (2πa/P)(τ/a) = 2πτ/P.
Bradley, who was measuring v/c in his studies of aberration in 1729, was well aware of this relation, as he converts his results for v/c into a value for τ without any comment.
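The equivalence is easy to check numerically. The following illustration uses Bradley's light-time of 8 minutes 13 seconds; the resulting aberration angle of about 20 arcseconds is a derived figure, not one quoted in the text:

```python
from math import degrees, pi

P = 365.25 * 86400            # Earth's orbital period, seconds
tau = 8 * 60 + 13             # Bradley's Sun-to-Earth light-time, seconds

v_over_c = 2 * pi * tau / P   # the relation stated above
print(v_over_c)               # ~9.8e-5
print(degrees(v_over_c) * 3600)  # ~20.2 arcseconds of stellar aberration
```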
See also
Longitude prize (UK)
Bibliography
.
.
; reprinted in book form by the Burndy Library, 1942.
.
.
.
.
.
.
.
.
.
.
.
.
.
Notes
References
External links
Short, uncluttered explanation by Ethan Siegel
Visualize Solar System at a given Epoch
The history of a velocity
Rømer and the Doppler principle
Proceeding of a Rømer Experiment for Schools from EAAE Summer Schools
Determination of the speed of light
Light
Physics experiments
1670s in science
1676 in science | Rømer's determination of the speed of light | [
"Physics"
] | 4,976 | [
"Physical phenomena",
"Physics experiments",
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Waves",
"Experimental physics",
"Light"
] |
24,900,031 | https://en.wikipedia.org/wiki/River%20linking | River linking is the project of connecting two or more rivers by creating a network of reservoirs and canals, providing water to land areas that otherwise do not have river access and reducing the flow of water to the sea. It is based on the assumption that surplus water in some rivers can be diverted to deficit rivers by creating a network of canals to interconnect them.
Reasons and motivations
For instance, in India rainfall is primarily orographic, associated with tropical depressions originating in the Arabian Sea and the Bay of Bengal. The summer monsoon accounts for more than 85 per cent of the precipitation. The uncertain occurrence of rainfall, marked by prolonged dry spells and fluctuations in seasonal and annual totals, is a serious problem for the country. Large parts of Haryana, Maharashtra, Andhra Pradesh, Rajasthan, Gujarat, Madhya Pradesh, Karnataka and Tamil Nadu not only receive deficient rainfall but are also subject to large variations, resulting in frequent droughts and causing immense hardship to the population and enormous loss to the nation. Water availability even for drinking becomes critical, particularly in the summer months as the rivers dry up and the groundwater recedes. Regional variations in rainfall lead to situations in which some parts of the country do not have enough water even to raise a single crop. On the other hand, excess rainfall in other parts of the country creates havoc in the form of floods.
Irrigation using river water and ground water has been the prime factor for raising the food grain production in India from a mere 50 million tonnes in the 1950s to more than 200 million tonnes at present, leading India to attain self-sufficiency in food. Irrigated area has increased from 22 million hectares to 95 million hectares during this period. The population of India, which is around 1100 million at present, is expected to increase to 1500 to 1800 million in the year 2050 and that would require about 450 million tonnes of food grains. For meeting this requirement, it would be necessary to increase irrigation potential to 160 million hectares for all crops by 2050. India's maximum irrigation potential that could be created through conventional sources has been assessed to be about 140 million hectares. For attaining a potential of 160 million hectares, other strategies shall have to be evolved.
Floods are a recurring feature, particularly in the Brahmaputra and Ganga basins, which carry almost 60 per cent of India's river flows. Flood damages, which were Rs. 52 crores in 1953, went up to Rs. 5,846 crores in 1998, with the annual average being Rs. 1,343 crores, affecting the States of Assam, Bihar, West Bengal and Uttar Pradesh and causing untold human suffering. On the other hand, large areas in the States of Rajasthan, Gujarat, Andhra Pradesh, Karnataka and Tamil Nadu face recurring droughts; as much as 85 per cent of India's drought-prone area falls in these States. One of the most effective ways to increase irrigation potential for raising food grain production, mitigate floods and droughts and reduce regional imbalances in the availability of water is the inter-basin water transfer (IBWT) of water from surplus rivers to deficit areas. The Brahmaputra and the Ganga, particularly their northern tributaries, as well as the Mahanadi, the Godavari and the west-flowing rivers originating in the Western Ghats, are found to be surplus in water resources. If storage reservoirs could be built on these rivers and connected to other parts of the country, regional imbalances could be reduced significantly and many benefits gained by way of additional irrigation, domestic and industrial water supply, hydroelectric power generation, navigational facilities etc.
Benefits
Irrigation
By linking the rivers, vast amount of land areas which will not otherwise be irrigated and are unusable for agriculture become fertile.
Flood prevention
During heavy rainy seasons some areas can experience severe floods while others suffer drought-like conditions. With a network of interlinked rivers, this problem can be largely avoided by channelling excess water to areas that are dry or not experiencing a flood.
Generation of electricity
With new canals built, the construction of new dams to generate hydroelectric power becomes feasible.
Transportation
A newly created network of canals opens up new routes for water navigation, which is generally cheaper and more efficient than road transport.
National River Linking Project in India
The National River Linking Project (NRLP) is designed to ease water shortages in western and southern India while mitigating the impacts of recurrent floods in the eastern parts of the Ganga basin. The NRLP, if and when implemented, will be one of the biggest interbasin water transfer projects in the world.
Aquatic life
A number of leading environmentalists are of the opinion that the project could be an ecological disaster. There would be a decrease in downstream flows, resulting in a reduction of fresh water inflows into the seas and seriously jeopardizing aquatic life.
Deforestation
The creation of canals would require large areas of land, resulting in large-scale deforestation in certain areas.
Areas getting submerged
The possibility of new dams comes with the threat of large areas of otherwise habitable or reserved land being submerged under reservoir water.
Displacement of people
As large strips of land might have to be converted to canals, a considerable population living in these areas would need to be resettled in new areas.
Dirtying of clean water
As the rivers interlink, rivers carrying polluted water will be connected to rivers with clean water, thereby contaminating the clean water.
References
External links
http://nrlp.iwmi.org/main/maps.asp
http://www.rediff.com/news/2004/apr/29guest.htm
http://www.the-south-asian.com/Aug2004/River-linking.htm
Environmental engineering
Interbasin transfer | River linking | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,174 | [
"Hydrology",
"Chemical engineering",
"Interbasin transfer",
"Civil engineering",
"Environmental engineering"
] |
24,901,202 | https://en.wikipedia.org/wiki/Symplectite | A symplectite (or symplektite) is a material texture: a micrometre-scale or submicrometre-scale intergrowth of two or more crystals. Symplectites form from the breakdown of unstable phases, and may be composed of minerals, ceramics, or metals. Fundamentally, their formation is the result of slow grain-boundary diffusion relative to interface propagation rate.
If a material undergoes a change in temperature, pressure or other physical conditions (e.g., fluid composition or activity), one or more phases may be rendered unstable and recrystallize to more stable constituents. If the recrystallized minerals are fine grained and intergrown, this may be termed a symplectite. A cellular precipitation reaction, in which a reactant phase decomposes to a product phase with the same structure as the parent phase and a second phase with a different structure, can form a symplectite. Eutectoid reactions, involving the breakdown of a single phase to two or more phases, neither of which is structurally or compositionally identical to the parent phase, can also form symplectites.
Symplectites may be formed by reaction between adjacent phases or by the decomposition of a single phase. The intergrown phases may be planar or rodlike, depending on the volume proportions of the phases, their interfacial free energies, the rate of reaction, the Gibbs free energy change, and the degree of recrystallization. Lamellar symplectites are common in retrogressed eclogite. Kelyphite is a symplectite formed from the decomposition of garnet. Myrmekite is a globular or bulbous symplectite of quartz in plagioclase.
Examples of symplectites formed in Earth materials include
dolomite + calcite, aragonite + calcite, and magnetite + clinopyroxene.
Symplectite formation is important in metallurgy: bainite or pearlite formation from the decomposition of austenite, for example.
See also
Granophyre
Micrographic texture
References
Mineralogy
Petrology
Metamorphic petrology
Phase transitions | Symplectite | [
"Physics",
"Chemistry"
] | 449 | [
"Physical phenomena",
"Phase transitions",
"Phases of matter",
"Critical phenomena",
"Statistical mechanics",
"Matter"
] |
24,903,189 | https://en.wikipedia.org/wiki/Calmagite | Calmagite is a complexometric indicator used in analytical chemistry to identify the presence of metal ions in solution. As with other metal ion indicators calmagite will change color when it is bound to an ion. Calmagite will be wine red when it is bound to a metal ion and may be blue, red, or orange when it is not bound to a metal ion. Calmagite is often used in conjunction with EDTA, a stronger metal binding agent. This chemical is also used in the quantitation of magnesium in the clinical laboratory.
References
Complexometric indicators
Azo compounds
2-Naphthols
Naphthalenesulfonic acids | Calmagite | [
"Chemistry",
"Materials_science"
] | 133 | [
"Complexometric indicators",
"Chromism"
] |
29,555,674 | https://en.wikipedia.org/wiki/BICEP%20and%20Keck%20Array | BICEP (Background Imaging of Cosmic Extragalactic Polarization) and the Keck Array are a series of cosmic microwave background (CMB) experiments. They aim to measure the polarization of the CMB; in particular, measuring the B-mode of the CMB. The experiments have had five generations of instrumentation, consisting of BICEP1 (or just BICEP), BICEP2, the Keck Array, BICEP3, and the BICEP Array. The Keck Array started observations in 2012 and BICEP3 has been fully operational since May 2016, with the BICEP Array beginning installation in 2017/18.
Purpose and collaboration
The purpose of the BICEP experiment is to measure the polarization of the cosmic microwave background. Specifically, it aims to measure the B-modes (curl component) of the polarization of the CMB. BICEP operates from Antarctica, at the Amundsen–Scott South Pole Station. All three instruments have mapped the same part of the sky, around the south celestial pole.
The institutions involved in the various instruments are Caltech, Cardiff University, University of Chicago, Center for Astrophysics Harvard & Smithsonian, Jet Propulsion Laboratory, CEA Grenoble (FR), University of Minnesota and Stanford University (all experiments); UC San Diego (BICEP1 and 2); National Institute of Standards and Technology (NIST), University of British Columbia and University of Toronto (BICEP2, Keck Array and BICEP3); and Case Western Reserve University (Keck Array).
The series of experiments began at the California Institute of Technology in 2002. In collaboration with the Jet Propulsion Laboratory, physicists Andrew Lange, Jamie Bock, Brian Keating, and William Holzapfel began the construction of the BICEP1 telescope which deployed to the Amundsen-Scott South Pole Station in 2005 for a three-season observing run. Immediately after deployment of BICEP1, the team, which now included Caltech postdoctoral fellows John Kovac and Chao-Lin Kuo, among others, began work on BICEP2. The telescope remained the same, but new detectors were inserted into BICEP2 using a completely different technology: a printed circuit board on the focal plane that could filter, process, image, and measure radiation from the cosmic microwave background. BICEP2 was deployed to the South Pole in 2009 to begin its three-season observing run which yielded the detection of B-mode polarization in the cosmic microwave background.
BICEP1
The first BICEP instrument (known during development as the "Robinson gravitational wave background telescope") observed the sky at 100 and 150 GHz (3 mm and 2 mm wavelength) with an angular resolution of 1.0 and 0.7 degrees. It had an array of 98 detectors (50 at 100 GHz and 48 at 150 GHz), which were sensitive to the polarisation of the CMB. A pair of detectors constitutes one polarization-sensitive pixel. The instrument, a prototype for future instruments, was first described in Keating et al. 2003 and started observing in January 2006 and ran until the end of 2008.
BICEP2
The second-generation instrument was BICEP2. Featuring a greatly improved focal-plane transition edge sensor (TES) bolometer array of 512 sensors (256 pixels) operating at 150 GHz, this 26 cm aperture telescope replaced the BICEP1 instrument, and observed from 2010 to 2012.
Reports stated in March 2014 that BICEP2 had detected B-modes from gravitational waves in the early universe (called primordial gravitational waves), a result reported by the four co-principal investigators of BICEP2: John M. Kovac of the Center for Astrophysics Harvard & Smithsonian; Chao-Lin Kuo of Stanford University; Jamie Bock of the California Institute of Technology; and Clem Pryke of the University of Minnesota.
An announcement was made on 17 March 2014 from the Center for Astrophysics Harvard & Smithsonian. The reported detection was of B-modes at the level of r = 0.20 (+0.07, −0.05), disfavouring the null hypothesis (r = 0) at the level of 7 sigma (5.9σ after foreground subtraction). However, on 19 June 2014, lowered confidence in confirming the cosmic inflation findings was reported; the accepted and reviewed version of the discovery paper contains an appendix discussing the possible production of the signal by cosmic dust. In part because the large value of the tensor-to-scalar ratio contradicts limits from the Planck data, dust is considered the most likely explanation for the detected signal by many scientists. For example, on June 5, 2014 at a conference of the American Astronomical Society, astronomer David Spergel argued that the B-mode polarization detected by BICEP2 could instead be the result of light emitted from dust between the stars in our Milky Way galaxy.
A preprint released by the Planck team in September 2014, eventually accepted in 2016, provided the most accurate measurement yet of dust, concluding that the signal from dust is the same strength as that reported from BICEP2. On January 30, 2015, a joint analysis of BICEP2 and Planck data was published and the European Space Agency announced that the signal can be entirely attributed to dust in the Milky Way.
BICEP2 has combined their data with the Keck Array and Planck in a joint analysis. A March 2015 publication in Physical Review Letters set a limit on the tensor-to-scalar ratio of r < 0.12.
The BICEP2 affair forms the subject of a book by Brian Keating.
Keck Array
Immediately next to the BICEP telescope at the Martin A. Pomerantz Observatory building at the South Pole was an unused telescope mount previously occupied by the Degree Angular Scale Interferometer. The Keck Array was built to take advantage of this larger telescope mount. This project was funded by $2.3 million from W. M. Keck Foundation, as well as funding from the National Science Foundation, the Gordon and Betty Moore Foundation, the James and Nelly Kilroy Foundation and the Barzan Foundation. The Keck Array project was originally led by Andrew Lange.
The Keck Array consists of five polarimeters, each very similar to the BICEP2 design, but using a pulse tube refrigerator rather than a large liquid helium cryogenic storage dewar.
The first three started observations in the austral summer of 2010–11; another two started observing in 2012. All of the receivers observed at 150 GHz until 2013, when two of them were converted to observe at 100 GHz. Each polarimeter consists of a refracting telescope (to minimise systematics) cooled by a pulse tube cooler to 4 K, and a focal-plane array of 512 transition edge sensors cooled to 250 mK, giving a total of 2560 detectors, or 1280 dual-polarization pixels.
In October 2018, the first results from the Keck Array (combined with BICEP2 data) were announced, using observations up to and including the 2015 season. These yielded an upper limit on cosmological B-modes of r < 0.09 (95% confidence level), which reduces to r < 0.06 in combination with Planck data.
In October 2021, new results were announced giving r < 0.036 (at 95% confidence level) based on the BICEP/Keck 2018 observation season combined with Planck and WMAP data.
BICEP3
Once the Keck array was completed in 2012, it was no longer cost-effective to continue to operate BICEP2. However, using the same technique as the Keck array to eliminate the large liquid helium dewar, a much larger telescope has been installed on the original BICEP telescope mount.
BICEP3 consists of a single telescope with the same 2560 detectors (observing at 95 GHz) as the five-telescope Keck array, but a 68 cm aperture, providing roughly twice the optical throughput of the entire Keck array. One consequence of the large focal plane is a larger 28° field of view, which will necessarily mean scanning some foreground-contaminated portions of the sky. It was installed (with initial configuration) at the pole in January 2015. It was upgraded for the 2015-2016 Austral summer season to a full 2560 detector configuration. BICEP3 is also a prototype for the BICEP Array.
BICEP Array
The Keck array is being succeeded by the BICEP array, which consists of four BICEP3-like telescopes on a common mount, operating at 30/40, 95, 150 and 220/270 GHz. Installation began between the 2017 and 2018 observing seasons. It is scheduled to be fully installed by the 2020 observing season.
According to the project website: "BICEP Array will measure the polarized sky in five frequency bands to reach an ultimate sensitivity to the amplitude of IGW [inflationary gravitational waves] of σ(r) < 0.005" and "This measurement will be a definitive test of slow-roll models of inflation, which generally predict a gravitational-wave signal above approximately 0.01."
See also
Cosmology
Inflation (cosmology)
Atacama Cosmology Telescope
South Pole Telescope
Cosmology Large Angular Scale Surveyor
POLARBEAR
LiteBIRD, space-based CMB B-mode polarization search project
Spider, balloon-based CMB B-mode polarization project
References
External links
BICEP2 winter-over (2009–2012) Steffen Richter (9 winters at the South Pole).
Keck winter-over (2010-current) Robert Schwarz (12 winters at the South Pole).
Radio telescopes
Physics experiments
Cosmic microwave background experiments
Astronomical experiments in the Antarctic
Inflation (cosmology) | BICEP and Keck Array | [
"Physics"
] | 1,988 | [
"Experimental physics",
"Physics experiments"
] |
29,556,072 | https://en.wikipedia.org/wiki/4G%20Americas | 4G Americas is a wireless industry trade association representing the 3GPP family of technologies. The organization was established in January 2002 under the name 3G Americas. On September 28, 2010, 3G Americas announced the organization's name change to 4G Americas. 4G Americas works throughout the Western hemisphere to inform government agencies, other businesses and the public about the 3GPP wireless technologies.
4G Americas works with government agencies, regulatory bodies, technical standards organizations and other global wireless organizations to promote interoperability and convergence. The organization holds partnership agreements, MOUs or memberships with global wireless organizations, including the 3rd Generation Partnership Project (3GPP), International Telecommunication Union (ITU) and the Inter-American Telecommunication Commission (CITEL) of the Organization of American States, working agreements with the GSMA, UMTS Forum, Next Generation Mobile Networks Alliance (NGMN), the Centro de Investigación de las Telecomunicaciones (CINTEL) in Colombia, the Cámara de Empresas de Servicios de Telecomunicaciones (CASETEL) in Venezuela and Association of Telecommunications Enterprises of the Andean Community (ASETA).
References
Wireless | 4G Americas | [
"Engineering"
] | 244 | [
"Wireless",
"Telecommunications engineering"
] |
29,558,063 | https://en.wikipedia.org/wiki/Standard%20Interchange%20Protocol | The Standard Interchange Protocol is a proprietary standard for communication between library computer systems and self-service circulation terminals. Although owned and controlled by 3M, the protocol is published and is widely used by other vendors. Version 2.0 of the protocol, known as "SIP2", is a de facto standard for library self-service applications.
History
SIP version 1.0 was published by 3M in 1993. The first version of the protocol supported basic check in and check out operations, but had minimal support for more advanced operations. Version 2.0 of the protocol was published in 2006 and added support for flexible, more user-friendly notifications, and for the automated processing of payments for late fees.
SIP2 was widely adopted by library automation vendors, including ODILO, Lyngsoe Systems, Nexbib, Bibliotheca, Nedap, Checkpoint, Envisionware, FE Technologies, Meescan, Redia and open source integrated library system software such as Koha and Evergreen. The standard was the basis for the NISO Circulation Interchange Protocol (NCIP) standard which is eventually intended to replace it.
Description
SIP is a simple protocol in which requests to perform operations are sent over a connection, and responses are sent in return. The protocol explicitly does not define how a connection between the two devices is established; it is limited to specifying the format of the messages sent over the connection. There are no "trial" transactions; each operation will be attempted immediately and will either be permitted or not.
The protocol specifies messages to check books in and out, to manage fee payments, to request holds and renewals, and to carry out the other basic circulation operations of a library.
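One concrete detail of the published message format is its optional error-detection checksum: the ASCII values of the message, up to and including the checksum field identifier "AZ", are summed, truncated to 16 bits, and negated in two's complement. A minimal sketch under that description follows; the field values in the sample login message are invented for illustration:

```python
def sip2_checksum(msg: str) -> str:
    # Sum the ASCII values of every character up to and including "AZ",
    # keep the low 16 bits, take the two's complement, print as 4 hex digits.
    total = sum(ord(ch) for ch in msg) & 0xFFFF
    return format((-total) & 0xFFFF, "04X")

# Hypothetical login request (command code 93) with invented credentials
# and sequence number field AY.
body = "9300CNterminal01|COsecret|AY1AZ"
print(body + sip2_checksum(body))
```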
Encryption and authentication
SIP has no built in encryption, so steps need to be taken to send the connection through some sort of encrypted tunnel. Two common methods are to use either stunnel or SSH to add a layer of encryption and/or an extra level of authentication.
References
Library automation
Network protocols | Standard Interchange Protocol | [
"Engineering"
] | 408 | [
"Library automation",
"Automation"
] |
44,787,116 | https://en.wikipedia.org/wiki/Pedro%20G.%20Ferreira | Pedro Gil Ferreira (born 18 March 1968) is a Portuguese astrophysicist and author. As of 2016 he is Professor of Astrophysics at the University of Oxford, and a fellow of Wolfson College.
Education and early life
Ferreira was born in Lisbon, Portugal, and attended the Technical University of Lisbon, where he studied engineering from 1986–1991. While there, he taught himself general relativity. He studied for a PhD in theoretical physics at Imperial College London, supervised by Andy Albrecht.
Research and career
He occupied postdoctoral positions at Berkeley and CERN, before returning to the UK to join the faculty in the astrophysics department at the University of Oxford as a research fellow and lecturer. He became Professor of Astrophysics there in 2008. He has been director of the Programme on Computational Cosmology at the Oxford Martin School since 2010, and also runs an astrophysics 'artist in residency' programme. Ferreira regularly lectures at the African Institute for Mathematical Sciences, and has frequently appeared on TV and radio as a science commentator.
Ferreira's main interests are in general relativity and theoretical cosmology. He has authored more than 100 publications in peer-reviewed scientific journals. With Michael Joyce, in 1997 he was one of the first to propose quintessence scalar field models as a possible explanation of dark energy. Ferreira was also a member of the MAXIMA and BOOMERanG balloon-borne CMB experiments, which measured the acoustic peaks of the CMB. He is currently involved in several proposals to test general relativity using the Euclid spacecraft and Square Kilometre Array radio telescope.
Media
Ferreira is a regular contributor to the scientific press, including Nature, Science, and New Scientist, and has authored two popular science books on cosmology and the history of general relativity. One of them, The Perfect Theory, was shortlisted for the 2014 Royal Society Winton Prize for Science Books. He regularly appears on TV and radio to discuss astrophysics and cosmology news stories, and has contributed to several science and mathematics documentaries for the BBC, Discovery Channel, and others. As of 2016 he serves on the editorial board of the Open Journal of Astrophysics.
Books
TV and video
Stephen Hawking: Master of the Universe (Channel 4) 2008
The One Show (BBC) 2009
Naked Science: Hawking’s Universe (National Geographic) 2009
Horizon: Is Everything We Know About the Universe Wrong? (BBC) 2010
Beautiful Equations (BBC) 2010
The Beauty of Diagrams (BBC) 2010
References
External links
Ferreira's webpage
New Scientist Instant Expert series: General relativity
Lecture on testing gravity at the Perimeter Institute
Scientific publications of Pedro G. Ferreira on INSPIRE-HEP
1968 births
Living people
Scientists from Lisbon
Portuguese astronomers
21st-century British astronomers
Portuguese science writers
English science writers
Portuguese emigrants to England
People associated with CERN
Cosmologists
Fellows of Oriel College, Oxford
Theoretical physicists
English people of Portuguese descent | Pedro G. Ferreira | [
"Physics"
] | 582 | [
"Theoretical physics",
"Theoretical physicists"
] |
28,174,525 | https://en.wikipedia.org/wiki/Polygon-circle%20graph | In the mathematical discipline of graph theory, a polygon-circle graph is an intersection graph of a set of convex polygons all of whose vertices lie on a common circle. These graphs have also been called spider graphs. This class of graphs was first suggested by Michael Fellows in 1988, motivated by the fact that it is closed under edge contraction and induced subgraph operations.
A polygon-circle graph can be represented as an "alternating sequence". Such a sequence can be obtained by perturbing the polygons representing the graph (if necessary) so that no two share a vertex, and then listing for each vertex (in circular order, starting at an arbitrary point) the polygon attached to that vertex.
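The alternating sequence makes intersection testing simple: two convex polygons inscribed in the same circle intersect exactly when their vertex sets cannot be separated into two disjoint arcs, i.e. when the circular order of their labels changes more than twice. A sketch based on that observation (function and variable names are our own):

```python
def polygons_intersect(seq, a, b):
    # Restrict the circular sequence to the two labels and count how many
    # times the label changes going once around the circle.
    s = [x for x in seq if x in (a, b)]
    changes = sum(s[i] != s[(i + 1) % len(s)] for i in range(len(s)))
    return changes > 2

def graph_from_alternating_sequence(seq):
    labels = sorted(set(seq))
    return {(a, b) for i, a in enumerate(labels) for b in labels[i + 1:]
            if polygons_intersect(seq, a, b)}

# Chords a = {0,2}, b = {1,4}, c = {3,5} on a six-point circle:
print(graph_from_alternating_sequence(list("abacbc")))
# {('a', 'b'), ('b', 'c')}
```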
Closure under induced minors
Contracting an edge of a polygon-circle graph results in another polygon-circle graph. A geometric representation of the new graph may be formed by replacing the polygons corresponding to the two endpoints of the contracted edge by their convex hull. Alternatively, in the alternating sequence representing the original graph, combining the subsequences representing the endpoints of the contracted edge into a single subsequence produces an alternating sequence representation of the contracted graph. Polygon circle graphs are also closed under induced subgraph or equivalently vertex deletion operations: to delete a vertex, remove its polygon from the geometric representation, or remove its subsequence of points from the alternating sequence.
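Both closure operations act directly on the alternating sequence, continuing the sketch above:

```python
def delete_vertex(seq, v):
    # Induced subgraph: remove the polygon's points from the circle.
    return [x for x in seq if x != v]

def contract_edge(seq, u, v):
    # Contraction: relabel v's points as u, i.e. replace the two polygons
    # by the convex hull of their union.
    return [u if x == v else x for x in seq]

seq = list("abacbc")
print(graph_from_alternating_sequence(contract_edge(seq, "a", "b")))
# {('a', 'c')} -- the edge b-c survives as an edge at the merged vertex
```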
Recognition
M. Koebe announced a polynomial time recognition algorithm; however, his preliminary version had "serious errors" and a final version was never published. Martin Pergel later proved that the problem of recognizing these graphs is NP-complete.
It is also NP-complete to determine whether a given graph can be represented as a polygon-circle graph with at most k vertices per polygon, for any k ≥ 3.
Related graph families
The polygon-circle graphs are a generalization of the circle graphs, which are intersection graphs of the chords of a circle, and the trapezoid graphs, intersection graphs of trapezoids that all have their vertices on the same two parallel lines. They also include the circular arc graphs.
Polygon-circle graphs are not, in general, perfect graphs, but they are near-perfect, in the sense that their chromatic numbers can be bounded by an (exponential) function of their clique numbers.
References
Intersection classes of graphs
NP-complete problems | Polygon-circle graph | [
"Mathematics"
] | 489 | [
"NP-complete problems",
"Mathematical problems",
"Computational problems"
] |
28,175,582 | https://en.wikipedia.org/wiki/Palacio%20de%20Cristal%20del%20Retiro | The Palacio de Cristal ("Glass Palace") is a 19th-century conservatory located in the Buen Retiro Park in Madrid, Spain. It is currently used for art exhibitions.
The Palacio de Cristal, in the shape of a Greek cross, is made almost entirely of glass set in an iron framework on a brick base, which is decorated with ceramics. Its cupola makes the structure over 22 metres high. When it was erected, glass and iron construction on a large scale was already to be seen in Madrid at Delicias station (1880), the work of a French architect; however, the curved architecture of the Palacio de Cristal is more comparable to the techniques pioneered by the British architects Joseph Paxton (who was responsible for London's Crystal Palace) and Decimus Burton (who was responsible for the Palm House at Kew Gardens). The Palacio de Cristal was, alongside the Pabellón Central, one of the main venues of the 1887 Philippines Exposition.
The cast-iron frame was manufactured in Bilbao.
The structure was designed in a way that would allow it to be re-erected on another site (as happened to the equivalent building in London). However, the building has remained on the original site, next to a lake, and has been restored to its original appearance. It is no longer used as a greenhouse, and is currently used for art exhibits.
Use
The Crystal Palace belongs to the Reina Sofía Museum, and is one of its temporary exposition centres together with Velázquez Palace.
Gallery
References
Buen Retiro Park
Cast-iron architecture in Spain
Glass architecture
Tourist attractions in Madrid
Palaces in Madrid
Buildings and structures in Jerónimos neighborhood, Madrid | Palacio de Cristal del Retiro | [
"Materials_science",
"Engineering"
] | 353 | [
"Glass architecture",
"Glass engineering and science"
] |
28,177,884 | https://en.wikipedia.org/wiki/Analyst%27s%20traveling%20salesman%20theorem | The analyst's traveling salesman problem is an analog of the traveling salesman problem in combinatorial optimization. In its simplest and original form, it asks which plane sets are subsets of rectifiable curves of finite length. Whereas the original traveling salesman problem asks for the shortest way to visit every vertex in a finite set with a discrete path, this analytical version may require the curve to visit infinitely many points.
β-numbers
A rectifiable curve has tangents at almost all of its points, where in this case "almost all" means all but a subset whose one-dimensional Hausdorff measure is zero. Accordingly, if a set is contained in a rectifiable curve, the set must look flat when zooming in on almost all of its points. This suggests that testing whether a set could be contained in a rectifiable curve must somehow incorporate information about how flat the set is when one zooms in on its points at different scales.
This discussion motivates the definition of the following quantity, for a plane set E:

\beta_E(Q) = \frac{1}{\ell(Q)} \inf_{L} \sup_{x \in E \cap Q} \operatorname{dist}(x, L),

where the infimum is taken over all lines L in the plane; here E is the set that is to be contained in a rectifiable curve, Q is any square, \ell(Q) is the side length of Q, and dist measures the distance from x to the line L. Intuitively, \beta_E(Q)\,\ell(Q) is the width of the smallest rectangle containing the portion of E inside Q, and hence \beta_E(Q) gives a scale-invariant notion of flatness.
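The quantity is easy to approximate numerically if the infimum over all lines is replaced by the least-squares best-fit line, which bounds the true infimum from above. A sketch (illustrative, not from the literature):

```python
import numpy as np

def beta(points, side):
    # Max distance of the points to their PCA best-fit line, divided by
    # the side length of the square containing them.
    pts = np.asarray(points, dtype=float)
    centred = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    d = vt[0]                                   # direction of best-fit line
    dists = np.abs(centred @ np.array([-d[1], d[0]]))
    return dists.max() / side

print(beta([(0, 0), (0.5, 0.01), (1, 0)], side=1))  # nearly flat: ~0.007
print(beta([(0, 0), (0.5, 0.5), (1, 0)], side=1))   # a corner: ~0.33
```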
Jones' traveling salesman theorem in R2
Let Δ denote the collection of dyadic squares, that is,

\Delta = \left\{ [i2^{-k}, (i+1)2^{-k}] \times [j2^{-k}, (j+1)2^{-k}] : i, j, k \in \mathbb{Z} \right\},

where \mathbb{Z} denotes the set of integers. For a set E \subseteq \mathbb{R}^2, define

\beta(E) = \operatorname{diam} E + \sum_{Q \in \Delta} \beta_E(3Q)^2 \ell(Q),

where diam E is the diameter of E and 3Q is the square with the same center as Q and side length 3\ell(Q). Then Peter Jones's analyst's traveling salesman theorem may be stated as follows:
There is a number C > 0 such that whenever E is a set with β(E) < ∞, E can be contained in a curve with length no more than Cβ(E).
Conversely (and substantially more difficult to prove), if Γ is a rectifiable curve, then β(Γ) < C·H^1(Γ), where H^1 denotes one-dimensional Hausdorff measure.
Generalizations and Menger curvature
Euclidean space and Hilbert space
The Traveling Salesman Theorem was shown to hold in general Euclidean spaces by Kate Okikiolu, that is, the same theorem above holds for sets , d > 1, where Δ is now the collection of dyadic cubes in defined in a similar way as dyadic squares. In her proof, the constant C grows exponentially with the dimension d.
With some slight modifications to the definition of β(E), Raanan Schul showed Traveling Salesman Theorem also holds for sets E that lie in any Hilbert Space, and in particular, implies the theorems of Jones and Okikiolu, where now the constant C is independent of dimension. (In particular, this involves using β-numbers of balls instead of cubes).
Menger curvature and metric spaces
Hahlomaa further adjusted the definition of β(E) to get a condition for when a set E of an arbitrary metric space may be contained in the Lipschitz image of a subset of positive measure. For this, he had to redefine the β-numbers using Menger curvature (since in a metric space there is not necessarily a notion of a cube or a straight line).
Menger curvature, as in the previous example, can be used to give numerical estimates that determine whether a set contains a rectifiable subset, and the proofs of these results frequently depend on β-numbers.
Denjoy–Riesz theorem
The Denjoy–Riesz theorem gives general conditions under which a point set can be covered by the homeomorphic image of a curve. This is true, in particular, for every compact totally disconnected subset of the Euclidean plane. However, it may be necessary for such an arc to have infinite length, failing to meet the conditions of the analyst's traveling salesman theorem.
References
Harmonic analysis
Real analysis
Geometry
Theorems in discrete mathematics | Analyst's traveling salesman theorem | [
"Mathematics"
] | 818 | [
"Discrete mathematics",
"Theorems in discrete mathematics",
"Geometry",
"Mathematical problems",
"Mathematical theorems"
] |
28,180,441 | https://en.wikipedia.org/wiki/Residual-resistance%20ratio | Residual-resistivity ratio (also known as residual-resistance ratio or just RRR) is usually defined as the ratio of the resistivity of a material at room temperature to its resistivity at 0 K. Of course, 0 K can never be reached in practice, so some estimation is usually made. Since the RRR can vary quite strongly for a single material depending on the amount of impurities and other crystallographic defects, it serves as a rough index of the purity and overall quality of a sample. Since resistivity usually increases as defect prevalence increases, a large RRR is associated with a pure sample. RRR is also important for characterizing certain unusual low-temperature states such as the Kondo effect and superconductivity. Note that since it is a unitless ratio there is no difference between a residual-resistivity ratio and a residual-resistance ratio.
Background
Usually at "warm" temperatures the resistivity of a metal varies linearly with temperature. That is, a plot of the resistivity as a function of temperature is a straight line. If this straight line were extrapolated all the way down to absolute zero, a theoretical RRR could be calculated
In the simplest case of a good metal that is free of scattering mechanisms one would expect ρ(0K) = 0, which would cause RRR to diverge. However, usually this is not the case because defects such as grain boundaries, impurities, etc. act as scattering sources that contribute a temperature independent ρ0 value. This shifts the intercept of the curve to a higher number, giving a smaller RRR.
In practice the resistivity of a given sample is measured down to as cold as possible, which on typical laboratory instruments is in the range of 2 K, though much lower is possible. By this point the linear resistive behavior is usually no longer applicable and by the low temperature ρ is taken as a good approximation to 0 K.
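A minimal illustration of this procedure on synthetic data:

```python
import numpy as np

T_warm = np.array([150, 200, 250, 300])         # K
rho_warm = np.array([0.85, 1.15, 1.45, 1.75])   # arbitrary units
rho_residual = 0.04                             # reading near 2 K

slope, intercept = np.polyfit(T_warm, rho_warm, 1)
rho_room = slope * 300 + intercept

print(rho_room / rho_residual)   # RRR ~ 44, in the range typical of copper
```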
Special Cases
For superconducting materials, RRR is calculated differently because ρ is always exactly 0 below the critical temperature, Tc, which may be significantly above 0 K. In this case the RRR is calculated using the ρ from just above the superconducting transition temperature instead of at 0 K. For example, superconducting niobium–titanium wires have an RRR defined with the resistivity measured just above the transition, at about 10 K, in place of ρ(0 K).
In the Kondo effect the resistivity begins to increase again with cooling at very low temperatures, and the value of RRR is useful for characterizing this state.
Examples
The RRR of copper wire is generally ~ 40–50 when used for telephone lines, etc.
References
Bibliography
Ashcroft, Neil W.; Mermin, N. David (1976). Solid State Physics. Holt, Rinehart and Winston. .
Electrical resistance and conductance
Cryogenics
Superconductivity | Residual-resistance ratio | [
"Physics",
"Materials_science",
"Mathematics",
"Engineering"
] | 565 | [
"Applied and interdisciplinary physics",
"Physical quantities",
"Quantity",
"Superconductivity",
"Cryogenics",
"Materials science",
"Condensed matter physics",
"Wikipedia categories named after physical quantities",
"Electrical resistance and conductance"
] |
28,180,925 | https://en.wikipedia.org/wiki/PRL%20Advanced%20Radial-velocity%20All-sky%20Search | PRL Advanced Radial-velocity Abu-sky Search, abbreviated PARAS, is a ground-based extrasolar planet search instrument based at the 1.2 m telescope located at Mt. Abu, India. The project is funded by the Physical Research Laboratory, India. The spectrograph works at a resolution of 67,000. With the help of a simultaneous calibration technique, PARAS has achieved an RV accuracy of 1.3 m/s for bright, quiet, Sun-like stars. A thorium–argon lamp is used for calibration. New calibration techniques are also being explored by the project team. PARAS can detect planets in the habitable zone around M-type stars.
References
ISRO’s PRL scientists discover an ‘EPIC’ planet
Exoplanet search projects
Spectrographs | PRL Advanced Radial-velocity All-sky Search | [
"Physics",
"Chemistry",
"Astronomy"
] | 167 | [
"Exoplanet search projects",
"Spectrum (physical sciences)",
"Spectrographs",
"Astronomy projects",
"Spectroscopy"
] |
35,114,568 | https://en.wikipedia.org/wiki/Bitoscanate | Bitoscanate is an organic chemical compound used in the treatment of hookworms. It is classified as an extremely hazardous substance in the United States as defined in Section 302 of the U.S. Emergency Planning and Community Right-to-Know Act (42 U.S.C. 11002), and is subject to strict reporting requirements by facilities which produce, store, or use it in significant quantities.
References
External links
4885 at NOAA CAMEO
Anthelmintics
Isothiocyanates | Bitoscanate | [
"Chemistry"
] | 104 | [
"Isothiocyanates",
"Functional groups"
] |
35,123,141 | https://en.wikipedia.org/wiki/Secondary%20cohomology%20operation | In mathematics, a secondary cohomology operation is a functorial correspondence between cohomology groups. More precisely, it is a natural transformation from the kernel of some primary cohomology operation to the cokernel of another primary operation. They were introduced by in his solution to the Hopf invariant problem. Similarly, one can define tertiary cohomology operations from the kernel to the cokernel of secondary operations, and continue in this manner to define higher cohomology operations, as noted by .
Michael Atiyah pointed out in the 1960s that many of the classical applications could be proved more easily using generalized cohomology theories, such as in his reproof of the Hopf invariant one theorem. Despite this, secondary cohomology operations still see modern usage, for example, in the obstruction theory of commutative ring spectra.
Examples of secondary and higher cohomology operations include the Massey product, the Toda bracket, and differentials of spectral sequences.
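As a sketch of how such an operation arises (sign conventions vary between sources): given cohomology classes $a, b, c$ with $a \smile b = 0$ and $b \smile c = 0$, one may choose cochains $x, y$ with $dx = a \smile b$ and $dy = b \smile c$ and form the Massey product

$$\langle a, b, c \rangle = \left[\, x \smile c - (-1)^{|a|}\, a \smile y \,\right],$$

which is well defined only modulo the indeterminacy $a \smile H^*(X) + H^*(X) \smile c$; this indeterminacy is the characteristic feature of a secondary operation, defined on the kernel of primary operations and valued in a cokernel.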
See also
Peterson–Stein formula
References
Algebraic topology | Secondary cohomology operation | [
"Mathematics"
] | 208 | [
"Fields of abstract algebra",
"Topology",
"Algebraic topology"
] |
35,123,529 | https://en.wikipedia.org/wiki/Outline%20of%20applied%20physics | The following outline is provided as an overview of, and topical guide to, applied physics:
Applied physics – physics intended for a particular technological or practical use.
It is usually considered as a bridge or a connection between "pure" physics and engineering.
Applied Physics – the proper name of a journal founded and edited by Helmut K. V. Lotsch in 1972 and published by Springer-Verlag Berlin Heidelberg New York from 1973 on.
Topics in Applied Physics – the proper name of a series of quasi-monographs founded by Helmut K. V. Lotsch and published by Springer-Verlag Berlin Heidelberg New York.
What type of thing is applied physics?
Applied physics can be described as all of the following:
Branch of science
Branch of physics
Branch of applied science
Branch of engineering
Branches of applied physics
Fields and areas of research include:
Accelerator physics
Acoustics
Agrophysics
Analog electronics
Astrodynamics
Astrophysics
Ballistics
Biophysics
Communication physics
Computational physics
Condensed matter physics
Control theory
Digital electronics
Econophysics
Experimental physics
Engineering physics
Fiber optics
Fluid dynamics
Force microscopy and imaging
Geophysics
Laser physics
Medical physics
Metrological physics
Microfluidics
Nanotechnology
Nondestructive testing
Nuclear engineering
Nuclear technology
Optics
Optoelectronics
Petrophysics
Photonics
Photovoltaics
Plasma physics
Quantum electronics
Semiconductor physics and devices
Soil physics
Solid state physics
Space physics
Spintronics
Superconductors
Vehicle dynamics
Applied physics institutions and organizations
International Union of Pure and Applied Physics
Harvard School of Engineering and Applied Sciences
Applied Physics Laboratory, Johns Hopkins University
National Institute of Physics, University of the Philippines Diliman
Institute of Mathematical Science and Physics, University of the Philippines Los Baños
School of Pure and Applied Physics, Mahatma Gandhi University
Institute of Applied Physics and Computational Mathematics, Beijing, China
Institute of Applied Physics, National Academy of Sciences of Ukraine
School of Pure and Applied Physics, University of KwaZulu-Natal
Department of Applied Physics, University of Karachi
Department of Applied Physics and Materials Science, Northern Arizona University
Applied physics publications
Applied physics journals
American Institute of Physics
Journal of Applied Physics
Applied Physics Letters
Japan Society of Applied Physics
Japanese Journal of Applied Physics
Applied Physics Express
IOP Publishing
Journal of Physics D: Applied Physics
Springer Berlin Heidelberg New York
Applied Physics
Applied Physics A
Applied Physics B
Topics in Applied Physics
Persons influential in applied physics
Nikola Tesla
Michael Faraday
See also
Engineering physics/Engineering science
Outline of applied science
Outline of engineering
Outline of physics
References
External links
Applied physics at Harvard
Applied physics at Stanford University
Applied physics at Caltech
Applied physics at Columbia University
Sample Plans of Study for the Bachelor of Science (B.S.) in Physics, Applied Option - Oklahoma State University
Applied physics | Outline of applied physics | [
"Physics"
] | 530 | [
"Applied and interdisciplinary physics"
] |
35,125,534 | https://en.wikipedia.org/wiki/North%20Icelandic%20Jet | The North Icelandic Jet is a deep-reaching current that flows along the continental slope of Iceland. The North Icelandic Jet advects overflow water into the Denmark Strait and constitutes a pathway that is distinct from the East Greenland Current. It is a cold current that runs west across the top of Iceland, then southwest between Greenland and Iceland at a depth of about 600 metres (almost 2,000 feet). The North Icelandic Jet is deep and narrow (about 12 miles wide) and can carry more than a million cubic meters of water per second.
It was not discovered until 2004. It was initially studied and described by two specialists from the Icelandic Marine Research Institute, Steingrímur Jónsson (also a professor at the University of Akureyri) and Hédinn Valdimarsson.
The current was found to be a key element of the Atlantic Meridional Overturning Circulation.
References
Kjetil Våge, Robert S. Pickart, Michael A. Spall, Héðinn Valdimarsson, Steingrímur Jónsson, Daniel J. Torres, Svein Østerhus & Tor Eldevik, Significant role of the North Icelandic Jet in the formation of Denmark Strait overflow water, Nature Geoscience 4, 723–727 (2011) doi:10.1038/ngeo1234
Steingrimur Jonsson and Hedinn Valdimarsson, A new path for the Denmark Strait overflow water from the Iceland, Geophysical Research Letters, Vol. 31, L03305, doi:10.1029/2003GL019214
Stefanie Semper, Kjetil Våge, Robert S. Pickart, Héðinn Valdimarsson, Daniel J. Torres & Steingrímur Jónsson, The emergence of the North Icelandic Jet and its evolution from Northeast Iceland to Denmark Strait, Journal of Physical Oceanography, 49, 2499-2521, doi:10.1175/JPO-D-19-0088.1
Oceanography
Currents of the Arctic Ocean | North Icelandic Jet | [
"Physics",
"Environmental_science"
] | 427 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
35,126,108 | https://en.wikipedia.org/wiki/RK2%20plasmid | The RK2 plasmid is a broad-host-range plasmid belonging to the IncP incompatibility group. It is notable for its ability to replicate in a wide variety of single-celled organisms, which makes it suitable as a genetic engineering tool. It is capable of transfer, replication, and maintenance in most genera of Gram-negative bacteria. RK2 may sometimes be referred to as pRK2, which is also the name of another, unrelated plasmid. Other names for RK2 include R18, R68, RP1, and RP4. These were all separate isolates, and later found to be identical plasmids.
The IncP-1 plasmid group (IncP plasmids in Escherichia coli) of which RK2 is a part has been described as "highly potent, self-transmissible, selfish DNA molecules with a complicated regulatory circuit" (Malgorzata Adamczyk and Grazyna Jagura-Burdzy: "Spread and survival of promiscuous IncP-1 plasmids", Acta Biochimica Polonica, Vol. 50, No. 2, 2003, pp. 425–453).
Discovery
RK2 was first isolated in connection with an outbreak of antibiotic-resistant Pseudomonas aeruginosa and Klebsiella aerogenes in Birmingham in 1969, as one of a family of plasmids implicated in transfer of ampicillin resistance between bacterial strains. Plasmids in the IncP-1 subgroup have been isolated from wastewater, agricultural soil, and hospitals.
Structure
RK2 is approximately 60 kbp long and contains genes for replication, maintenance, conjugation and antibiotic resistance. The resistance genes confer resistance to the antibiotics kanamycin, ampicillin and tetracycline. In addition, RK2 contains a set of potentially lethal (to the cell) genes, called kil genes, and a set of complementary transcriptional repressor genes, called kor (short for "kil-override") genes, which inactivate the kil genes. The kil and kor genes together are suspected to play a role in the broad host range of RK2.
Replication
The essential replication system in RK2 consists of an origin of replication, oriV, and a gene, trfA, whose gene product, the TrfA protein, binds to and activates oriV (Christopher M. Thomas, Richard Meyer, and Donald R. Helinski: "Regions of Broad-Host-Range Plasmid RK2 Which Are Essential for Replication and Maintenance", Journal of Bacteriology, Vol. 172, No. 7, July 1990, pp. 3859–3867). In Escherichia coli, replication proceeds unidirectionally from oriV after activation by TrfA. In E. coli, multiple plasmid copies appear to cluster together, creating a few multiplasmid clusters in each cell (Kolatka K, Witosinska M, Pierechod M, Konieczny I: "Bacterial partitioning proteins affect the subcellular location of broad-host-range plasmid RK2", Plasmid, 2010 Nov; 64(3): 119–134). The copy number of RK2 is about 4–7 per cell in E. coli and 3 in P. aeruginosa.
Minimal derivatives
Several minimal derivatives of RK2 have been prepared. In these plasmids most of the genes have been removed, leaving only genes essential for replication and one or more selectable markers. One such "mini-replicon" is the plasmid PFF1, which is 5,873 base pairs long.
PFF1 consists of an origin of replication, oriV, an origin of transfer, oriT, a gene coding for plasmid replication proteins, trfA, and two antibiotic resistance genes, bla and cat, which confer resistance to ampicillin and chloramphenicol, respectively. Minimal plasmids such as PFF1 are useful for studying the basic mechanisms of plasmid replication and copy number regulation, as there are fewer superfluous genetic elements which might affect the processes being studied. Several mutants of PFF1 which affect the copy number of the plasmid have been identified. Two such mutants, PFF1cop254D and PFF1cop271C, increase the copy number of PFF1 in E. coli from approximately 39–40 to about 501 and 113 plasmids per cell, respectively. An increase in copy number is useful for genetic engineering applications to increase the production yield of recombinant protein.
Notes
Further reading
Vectron Biosolutions: "The RK2 replicon", http://vectronbiosolutions.com/info.php?id=14
Meyer, et al.: "Molecular vehicle properties of the broad host range plasmid RK2", Science, December 1975: pp. 1226–1228. https://www.science.org/doi/abs/10.1126/science.1060178
Genome data from Stanford University: http://genome-www.stanford.edu/vectordb/vector_descrip/NOTCOMPL/RK2.SEQ.html
C. M. Thomas (editor): "Promiscuous Plasmids of Gram-negative Bacteria", Academic Press, London, 1989.
C M Thomas, and C A Smith: "Incompatibility Group P Plasmids: Genetics, Evolution, and Use in Genetic Manipulation", Annual Review of Microbiology, Vol. 41: 77-101, October 1987
"Pansegrau et al.: "Complete Nucleotide Sequence of Birmingham IncPα Plasmids: Compilation and Comparative Analysis", Journal of Molecular Biology'', Volume 239, Issue 5, 23 June 1994, Pages 623-663
Sequence data deposited at the NCBI: https://www.ncbi.nlm.nih.gov/nucleotide/508311?report=genbank&log$=nucltop&blast_rank=18&RID=CD93RUA001S
Mobile genetic elements
Molecular biology
Plasmids | RK2 plasmid | [
"Chemistry",
"Biology"
] | 1,362 | [
"Mobile genetic elements",
"Plasmids",
"Molecular genetics",
"Bacteria",
"Molecular biology",
"Biochemistry"
] |
35,128,259 | https://en.wikipedia.org/wiki/N%C3%A9ron%E2%80%93Ogg%E2%80%93Shafarevich%20criterion | In mathematics, the Néron–Ogg–Shafarevich criterion states that if A is an elliptic curve or abelian variety over a local field K and ℓ is a prime not dividing the characteristic of the residue field of K, then A has good reduction if and only if the ℓ-adic Tate module Tℓ of A is unramified. Ogg introduced the criterion for elliptic curves. Serre and Tate used the results of Néron to extend it to abelian varieties,
and named the criterion after Ogg, Néron and Igor Shafarevich (commenting that Ogg's result seems to have been known to Shafarevich).
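In symbols, with $I_K$ the inertia subgroup of the absolute Galois group of $K$, the criterion reads

$$T_\ell A = \varprojlim_n A[\ell^n], \qquad A \text{ has good reduction} \iff I_K \text{ acts trivially on } T_\ell A.$$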
References
Abelian varieties
Elliptic curves
Theorems in algebraic geometry
Arithmetic geometry | Néron–Ogg–Shafarevich criterion | [
"Mathematics"
] | 139 | [
"Theorems in algebraic geometry",
"Arithmetic geometry",
"Number theory",
"Theorems in geometry"
] |
35,129,887 | https://en.wikipedia.org/wiki/Bayesian%20inference%20using%20Gibbs%20sampling | Bayesian inference using Gibbs sampling (BUGS) is a statistical software package for performing Bayesian inference using Markov chain Monte Carlo (MCMC) methods. It was developed by David Spiegelhalter at the Medical Research Council Biostatistics Unit in Cambridge in 1989 and released as free software in 1991.
The BUGS project has evolved through four main versions: ClassicBUGS, WinBUGS, OpenBUGS and MultiBUGS. MultiBUGS is built on the existing algorithms and tools in OpenBUGS and WinBUGS, which are no longer developed, and implements parallelization to speed up computation. Several R packages are available, R2MultiBUGS acts as an interface to MultiBUGS, while Nimble is an extension of the BUGS language.
Alternative implementations of the BUGS language include JAGS and Stan.
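The Gibbs sampling from which BUGS takes its name draws each parameter in turn from its full conditional distribution. The following is a minimal sketch in Python rather than in the BUGS language itself; the normal model, the vague conjugate priors and the synthetic data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(5.0, 2.0, size=50)                      # illustrative data
n, mu0, tau0, a, b = len(y), 0.0, 1e-6, 0.001, 0.001   # vague conjugate priors

mu, tau = 0.0, 1.0
samples = []
for _ in range(5000):
    # full conditional of mu: normal with precision tau0 + n*tau
    prec = tau0 + n * tau
    mean = (tau0 * mu0 + tau * y.sum()) / prec
    mu = rng.normal(mean, 1.0 / np.sqrt(prec))
    # full conditional of tau: gamma with updated shape and rate
    tau = rng.gamma(a + n / 2.0, 1.0 / (b + 0.5 * ((y - mu) ** 2).sum()))
    samples.append((mu, tau))

post = np.array(samples[1000:])                        # drop burn-in
print("posterior mean of mu:", post[:, 0].mean())
```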
See also
Spike and slab variable selection
Bayesian structural time series
References
External links
The BUGS Project
Computational statistics
Domain-specific programming languages | Bayesian inference using Gibbs sampling | [
"Mathematics"
] | 195 | [
"Computational statistics",
"Computational mathematics"
] |
35,130,602 | https://en.wikipedia.org/wiki/Grammatical%20Man | Grammatical Man: Information, Entropy, Language, and Life is a 1982 book written by Jeremy Campbell, then Washington correspondent for the Evening Standard. The book examines the topics of probability, information theory, cybernetics, genetics, and linguistics.
Information processes are used to frame and examine all of existence, from the Big Bang to DNA to human communication to artificial intelligence.
Part 1: Establishing the Theory of Information
The book's first chapter, The Second Law and the Yellow Peril, introduces the concept of entropy and gives brief outlines of the histories of Information Theory and cybernetics, examining World War II figures such as Claude Shannon and Norbert Wiener.
The Noise of Heat gives an outline of the history of thermodynamics, focusing on Rudolf Clausius's 2nd Law and its relation to order and information.
In The Demon Possessed Campbell examines the concept of entropy and presents entropy as missing information.
Chapter Four, A Nest of Subtleties and Traps, takes its name from a critique of one of the earliest theorems in probability theory, Law of large numbers (Bernoulli, 1713). The chapter outlines the history of probability, touching on characters such as Gerolamo Cardano, Antoine Gombaud, Bernoulli, Richard von Mises, and John Maynard Keynes. Campbell examines information and entropy as a probability distribution of possible messages and says that subjective versus objective interpretations of probability are made largely obsolete by an understanding of the relationship between probability and information.
Not Too Dull, Not Too Exciting addresses the problem of clarifying order from disorder within communication by highlighting the role that redundancy plays in information theory.
In the last chapter of Part 1, The Struggle Against Randomness, Campbell addresses the concepts published by Shannon in 1948—that a message can be sent from one place to another, even under noisy conditions, and be as free from error as the sender cares to make it, as long as it is coded in the proper form.
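Shannon's framework makes the book's central quantities computable: the entropy of a source measures average information per symbol, and redundancy is the shortfall from the maximum. A minimal sketch in Python (the sample string is an illustrative assumption):

```python
from collections import Counter
from math import log2

def entropy(text: str) -> float:
    """Shannon entropy in bits per symbol of the text's letter frequencies."""
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * log2(c / total) for c in counts.values())

msg = "the struggle against randomness"
h = entropy(msg)
h_max = log2(len(set(msg)))   # maximum entropy for this alphabet
print(f"H = {h:.2f} bits/symbol, redundancy = {1 - h / h_max:.0%}")
```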
Part 2: Nature as an Information Process
In Arrows in All Directions, Campbell discusses the potential inverse relation between entropy and novelty, invoking such concepts as Laplace's Superman. Campbell quotes David Layzer: For Laplace's "intelligence," as for the God of Plato, Galileo and Einstein, the past and future coexist on equal terms, like the two rays into which an arbitrarily chosen point divides a straight line. If the theories I have presented are correct, however, not even the ultimate computer --the universe itself-- ever contains enough information to specify completely its own future states. The present moment always contains an element of genuine novelty and the future is never wholly predictable. Because biological processes also generate information and because consciousness enables us to experience those processes directly, the intuitive perception of the world as unfolding in time captures one of the most deepseated properties of the universe.
Chapter 8, Chemical Word and Chemical Deed, examines the processes of DNA as information processes. Campbell makes the distinction between first order DNA messages and second order, or structural, DNA messages (e.g., "how to bake a cake" versus "how to read a recipe"). This distinction he relates to the linguistic principles of Noam Chomsky's Universal Grammar.
In Jumping the Complexity Barrier, Campbell discusses the concept of emergence and notes that Information Theory, thermodynamics, linguistics, and the theory of evolution make significant use of terms and phrases such as "complexity," "novelty," and "constraints on possibilities." Campbell writes: To understand complex systems, such as a large computer or a living organism, we cannot use ordinary, formal logic, which deals with events that definitely will happen or definitely will not happen. A probabilistic logic is needed, one that makes statements about how likely or unlikely it is that various events will happen. Campbell also discusses John von Neumann in relating information theory, evolution, and linguistics to machines. The chapter closes with an examination of emergent systems and their relation to Gödel incompleteness.
Something Rather Subtle
Part 3: Coding Language, Coding Life
Algorithms and Evolution
Partly Green Till the Day We Die
No Need for Ancient Astronauts
The Clear and the Noisy Messages of Language
A Mirror of the Mind
Part 4: How the Brain Puts It All Together
The Brain as Cat on a Hot Tin Roof and Other Fallacies
The Strategies of Seeing
The Bottom and Top of Memory
The Information of Dreams
The Left and Right of Knowing
The Second-Theorem Society
See also
The Information: A History, a Theory, a Flood
Decoding the Universe
Systems theory
References
Information theory
Systems theory books | Grammatical Man | [
"Mathematics",
"Technology",
"Engineering"
] | 929 | [
"Telecommunications engineering",
"Applied mathematics",
"Computer science",
"Information theory"
] |
35,132,005 | https://en.wikipedia.org/wiki/Gibbons%E2%80%93Hawking%20ansatz | In mathematics, the Gibbons–Hawking ansatz is a method of constructing gravitational instantons introduced by G. W. Gibbons and S. W. Hawking in 1978. It gives examples of hyperkähler manifolds in dimension 4 that are invariant under a circle action.
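In a standard formulation (conventions differ by orientation and scaling), the ansatz takes a positive harmonic function $V$ on an open set $U \subset \mathbb{R}^3$ and a 1-form $\omega$ with $\ast\, dV = d\omega$, and equips a circle bundle over $U$ with the metric

$$ds^2 = V^{-1} (d\tau + \omega)^2 + V \left( dx_1^2 + dx_2^2 + dx_3^2 \right).$$

Choosing $V$ to be a finite sum of poles, $V = \epsilon + \sum_i \frac{1}{2|x - x_i|}$, yields the multi-Eguchi–Hanson ($\epsilon = 0$) and multi-Taub–NUT ($\epsilon = 1$) gravitational instantons.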
See also
Gibbons–Hawking space
References
1978 introductions
Differential geometry
General relativity
Stephen Hawking | Gibbons–Hawking ansatz | [
"Physics"
] | 61 | [
"General relativity",
"Relativity stubs",
"Theory of relativity"
] |
45,016,128 | https://en.wikipedia.org/wiki/Grave%20%28tempo%29 | Grave is a tempo mark and mood designation in music. The word originates in the Italian language and means solemn, heavy, or serious. The grave tempo is very slow at a pace of approximately 20-40 musical beats per minute.
History
The term grave did not become widely associated with a tempo designation until the latter part of the 17th century. Earlier uses of the word grave were done as an adjective or descriptor of a work, but were not associated with a tempo marking. Examples of this earlier use would be Antonio Brunelli's Ballo grave (1616) and Biagio Marini's Symphonia grave (1617). In Venetian polychoral style of the Renaissance and Baroque music era the term grave had a unique musical meaning. This type of music employed two separate choruses divided by space and singing in alternation. The upper voiced choir was referred to as the acuto and the lower voiced choir was named grave.
Francesco Cavalli was among the first composers to use the word grave as a tempo marking, with that term being employed as a performance instruction within his opera Le nozze di Teti e di Peleo (1639). Other early examples of grave being used as a tempo term include Marco Uccellini's Sonate (1646), and Biagio Marini's Op. 22: Per ogni sorte di strumento musicale diversi generi di sonate, da chiesa, e da camera (1655). By the 1680s, the term was common in Italy with Henry Purcell writing in his preface to his Sonnata’s of III Parts (1683) that the term was widely used by Italian composers and musicians to refer to a "very slow movement" and that the term had spread to other parts of Europe.
While today the term grave is widely understood to be slower than the tempo terms largo and adagio, music theorists and composers of the 17th and 18th century were not so consistent in their interpretation and use of these terms, with some composers marking scores with grave but with performance descriptions described elsewhere that would indicate a speed more akin to modern tempos for largo or adagio.
References
Musical terminology
Rhythm and meter
Temporal rates | Grave (tempo) | [
"Physics"
] | 457 | [
"Temporal quantities",
"Physical quantities",
"Time",
"Temporal rates",
"Rhythm and meter",
"Spacetime"
] |
45,017,791 | https://en.wikipedia.org/wiki/Omni%20processor | Omni processor is a term coined in 2012 by staff of the Water, Sanitation, Hygiene Program of the Bill & Melinda Gates Foundation to describe a range of physical, biological or chemical treatments to remove pathogens from human-generated fecal sludge, while simultaneously creating commercially valuable byproducts (e.g., energy). Gas derived from the feces is separated from the ambient air, then compressed in the manner of liquefied petroleum gas (LPG) and used as fuel. An omni processor mitigates unsafe methods in developing countries of capturing and treating human waste, which annually result in the spread of disease and the deaths of more than 1.5 million children.
Rather than a trademark, or a reference to a specific technology, the term omni processor is a general term for a range of self-sustaining, independently developed systems designed with the same end in mind, to transform and extract value from human waste — using various technological approaches, including combustion, supercritical water oxidation and pyrolysis.
In the term, omni refers to the ability of an omni processor to treat a wide variety of waste streams or fuel sources.
Background
Since 2012, the Bill and Melinda Gates Foundation has been funding research into omni processors. An omni processor is any of various types of technologies that treat fecal sludge, also known as septage, to remove pathogens and simultaneously extract byproducts with commercial value, for example energy or soil nutrients, the latter of which could be reused in agriculture. The omni processor program, which targets community-scale solutions that may optionally combine sludge and solid waste processing, complements the foundation's pit latrine emptying ("omni-ingestor") and "Reinvent the Toilet" programs.
Challenges
The omni processor is targeted as a solution for developing countries, although challenges around technical and financial aspects remain. Omni processors and omni ingestors are being designed to provide an alternative to sewerage system-based technologies. They are also intended to address the large number of existing pit latrines which lack a supporting infrastructure of fecal sludge collection and processing when the pits are full. Sludge from pit latrines has to be removed from the pits for treatment and disposal either by pumping (if the fecal sludge is sufficiently liquid) or by manual emptying with shovels or other devices (in India, this practice is called manual scavenging). Despite new low-cost pumps being developed, only a small fraction of sludge is safely extracted and treated currently in many African and Asian cities.
Examples
Biomass Controls PBC
Biomass Controls PBC is a U.S. Delaware public benefit corporation that delivered the first biogenic refinery (OP) prototype to New Delhi, India, in 2014 in partnership with the Climate Foundation. This system was designed to process non-sewered sanitation for populations between 100 and 10,000 people. The prototype was funded by the Bill and Melinda Gates Foundation. In 2016 a biogenic refinery was delivered to Kivalina, Alaska, for the processing of urine-diverting dry toilets (UDDTs) as part of the Alaska Water & Sewer Challenge. In 2017, three systems were shipped to India and installed in the cities of Wai, Warangal and Narsapur in partnership with Tide Technocrats. In 2018 a prototype was shown that can generate electricity (mCHP) from the thermal energy from the processing of fecal sludge, at the Bill and Melinda Gates Foundation reinvented toilet event in Beijing, China. In 2019, a system was set up at a dairy farm to process the separated solids from cow manure. This system demonstrated a significant reduction in greenhouse gas emissions while reducing solids volume by over 90% and producing biochar.
Sedron Technologies
The U.S.-based company Sedron Technologies (formerly Janicki Bioenergy) presented in 2014 a prototype using combustion. Their process is a sewage sludge treatment system that produces drinking water and electrical energy as end products from sewage sludge. Manufactured by Sedron Technologies, the proof of concept model was funded by the Bill and Melinda Gates Foundation. The S100 prototype model can produce 10,800 liters of drinking water per day and 100 kW net electricity. A larger model under development, the S200, is designed to handle the waste from 100,000 people, produce 86,000 liters of drinking water per day and 250 kW net output electricity. These systems are designed to provide a "self-sustaining bioenergy" process.
The treatment process first involves boiling (or thermally drying) the sewage sludge, during which water vapor is boiled off and recovered. A dry sludge is left behind which is then combusted as fuel to heat a boiler. This boiler produces steam and the heat necessary for the boiling process. The steam is then used to generate electrical energy. Some of this electrical energy is used for the final water reverse osmosis purification stages to produce safe drinking water, and to power ancillary pumps, fans and motors. The process immediately uses the solid fuel it produces, and therefore the process does not make a solid fuel product as an end product.
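As a rough plausibility check of the self-sustaining claim, the heat needed to dry the sludge can be compared with the heat released by burning the dried solids. The moisture fraction and calorific value below are illustrative assumptions, not Sedron specifications:

```python
# Back-of-envelope energy balance for thermal drying plus combustion.
# All numbers below are illustrative assumptions.
sludge_kg = 1000.0          # 1 tonne of incoming fecal sludge
water_frac = 0.80           # assumed moisture content
latent_heat = 2.26e6        # J/kg, heat of vaporization of water
dry_heating_value = 14e6    # J/kg, assumed calorific value of dry sludge

heat_to_dry = sludge_kg * water_frac * latent_heat
heat_from_solids = sludge_kg * (1 - water_frac) * dry_heating_value

print(f"drying demand : {heat_to_dry / 1e9:.2f} GJ")
print(f"combustion out: {heat_from_solids / 1e9:.2f} GJ")
# A surplus (combustion > drying) is what leaves room for steam-driven
# electricity and reverse-osmosis water polishing.
```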
A pilot project of Sedron Technologies' omni processor was installed in Dakar, Senegal, in 2015 and can now treat the fecal sludge of 50,000-100,000 people.
Climate Foundation
The U.S.-based NGO Climate Foundation, in collaboration with Stanford University, has built several pilot-scale reactors to treat human waste and turn it into biochar, which can be used as an agricultural soil amendment.
Duke University and 374Water
Scientists at Duke University in the U.S. have developed and are testing a pilot fecal sludge treatment unit that fits in a 20-foot shipping container and treats the fecal matter of roughly 1000 people using a new supercritical water oxidation (SCWO) process. The SCWO technology can convert any type of organic waste (fecal, food waste, paper, plastic, etc.) to energy and clean water.
The waste (sludge) is reacted with air at temperatures and pressures above the critical point of water (374 °C, 221 bar) to convert all of the organics into clean water and CO2 in seconds. Byproducts include distilled water and clean water containing suspended inorganic minerals that can be utilized as fertilizers. The unit generates more than 900 liters of water for each ton of processed waste, and the water can be processed further to drinking water.
The continuous process utilizes the energy embedded in the waste, thus enabling operating off-the-grid. 374Water is a Duke University spin-off company aiming to commercialize the SCWO technology.
Unilever
Unilever PLC in the United Kingdom is developing a pyrolysis-based fecal sludge treatment unit designed to serve over 2000 people.
Related research efforts
The omni processor initiative for processing fecal sludge is being complemented by an effort to develop new technologies for improved pit latrine emptying (called by the Gates Foundation the "omni ingestor") and by the Reinvent the Toilet Challenge. The latter is a long-term research and development effort to develop a hygienic, stand-alone toilet. It is focused on "reinventing the flush toilet". The aim is to create a toilet that not only removes pathogens from human excreta, but also recovers resources such as energy, clean water, and nutrients (a concept also known as reuse of excreta). It should operate "off the grid" without connections to water, sewer, or electrical networks. Finally, it should cost less than 5 US-cents per user per day.
Society and culture
Media attention
In a publicity stunt in late 2014, Bill Gates drank the water produced from Sedron Technologies' omni processor system, causing widespread media attention. In early 2015, Gates appeared on The Tonight Show Starring Jimmy Fallon and challenged Fallon to see if he could taste the difference between water from this particular "omni processor" and bottled water.
The project was covered in a Netflix documentary mini-series Inside Bill's Brain: Decoding Bill Gates.
References
External links
Water, sanitation and hygiene program of the Bill and Melinda Gates Foundation
Environmental engineering
Sanitation
Bioenergy
Biofuels
Waste treatment technology | Omni processor | [
"Chemistry",
"Engineering"
] | 1,703 | [
"Water treatment",
"Chemical engineering",
"Civil engineering",
"Environmental engineering",
"Waste treatment technology"
] |
45,022,181 | https://en.wikipedia.org/wiki/Web%20Application%20Messaging%20Protocol | WAMP is a WebSocket subprotocol registered at IANA, specified to offer routed RPC and PubSub. Its design goal is to provide an open standard for soft, real-time message exchange between application components and ease the creation of loosely coupled architectures based on microservices. Because of this, it is a suitable enterprise service bus (ESB), fit for developing responsive web applications or coordinating multiple connected IoT devices.
Characteristics
Structure
WAMP requires a reliable, ordered, full-duplex message channel as a transport layer, and by default uses Websocket. However, implementations can use other transports matching these characteristics and communicate with WAMP over e.g. raw sockets, Unix sockets, or HTTP long poll.
Message serialization assumes integers, strings and ordered sequence types are available, and defaults to JSON as the most common format offering these. Implementations often provide MessagePack as a faster alternative to JSON at the cost of an additional dependency.
Workflow
WAMP is architectured around client–client communications with a central software, the router, dispatching messages between them. The typical data exchange workflow is:
Clients connect to the router using a transport, establishing a session.
The router identifies the clients and gives them permissions for the current session.
Clients send messages to the router which dispatches them to the proper targets using the attached URIs.
The clients send these messages using the two high-level primitives that are RPC and PUB/SUB, doing four core interactions:
register: a client exposes a procedure to be called remotely.
call: a client asks the router to get the result of an exposed procedure from another client.
subscribe: a client notifies its interest in a topic.
publish: a client publishes information about this topic.
This can have subtle variations depending on the underlying transport. However, implementation details are hidden to the end-user who only programs with the two high-level primitives that are RPC and PubSub.
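A minimal sketch of the four interactions using the Autobahn|Python client library, assuming a router listening at ws://localhost:8080/ws with a realm named realm1 (the URIs and values are illustrative):

```python
from autobahn.asyncio.wamp import ApplicationSession, ApplicationRunner

class Demo(ApplicationSession):
    async def onJoin(self, details):
        # register: expose a procedure for remote calling
        await self.register(lambda x, y: x + y, "com.example.add")
        # subscribe: declare interest in a topic
        await self.subscribe(lambda msg: print("event:", msg), "com.example.topic")
        # call: invoke the procedure through the router
        result = await self.call("com.example.add", 2, 3)
        # publish: push an event to all subscribers of the topic
        self.publish("com.example.topic", f"2 + 3 = {result}")

ApplicationRunner("ws://localhost:8080/ws", "realm1").run(Demo)
```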
Security
As WAMP uses Websocket, connections can be wrapped in TLS for encryption. Even when full confidentiality is not established, several mechanisms are implemented to isolate components and avoid man-in-the-middle attacks. Default implementations ensure that trying to register an already registered procedure will fail.
Routers can define realms as administrative domains, and clients must specify which realm they want to join upon connection. Once joined, the realm will act as a namespace, preventing clients connected to a realm from using IDs defined in another for RPC and PubSub. Realms also have permissions attached and can limit the clients to one subset of the REGISTER/CALL/PubSub actions available.
Some realms can only be joined by authenticated clients, using various authentication methods such as using TLS certificate, cookies or a simple ticket.
Routed RPCs
Unlike with traditional RPCs, which are addressed directly from a caller to the entity offering the procedure (typically a server backend) and are strictly unidirectional (client-to-server), RPCs in WAMP are routed by a middleware and work bidirectionally.
Registration of RPCs is with the WAMP router, and calls to procedures are similarly issued to the WAMP router. This means first of all that a client can issue all RPCs via the single connection to the WAMP router, and does not need to have any knowledge what client is currently offering the procedure, where that client resides or how to address it. This can indeed change between calls, opening up the possibility for advanced features such as load-balancing or fail-over for procedure calls.
It additionally means that all WAMP clients are equal in that they can offer procedures for calling. This avoids the traditional distinction between clients and server backends, and allows architectures where browser clients call procedures on other browser clients, with an API that feels like peer to peer communication.
However, even with multi-tiers architectures, the router is still a single point of failure. For this reason, some router implementation roadmaps include clustering features.
Implementations
Clients
As WAMP's main targets are Web applications and the Internet of Things, the first client implementations are in languages well established in these industries (only WAMP v2 clients listed):
The minimum requirements to build a WAMP client are the abilities to use sockets and to serialise to JSON. Thus, many modern languages already fulfill these requirements with their standard library. Additional features which would add dependencies, such as TLS encryptions or MessagePack serialization, are optional.
However, the persistent nature of WebSocket connections requires the use of non-blocking libraries and asynchronous APIs. In languages with one official mechanism such as JavaScript, Erlang or Go, this is not an issue. But for languages with several competing solutions for asynchronous programming, such as Python or PHP, it forces the client author to commit to a specific part of the ecosystem.
For the same reason, integrating legacy projects can also require work. As an example, most popular Python Web frameworks use WSGI, a synchronous API, and running a WAMP client inside a WSGI worker needs manual adapters such as crochet.
Routers
While routers can technically be embedded directly into the application code and some client libraries also provide a router, this architecture is discouraged by the specification.
Since the router is a moving part, it is best used as a swappable black box just like one would consider Apache or Nginx for HTTP:
Tavendo, the company from which the protocol originated, is also the author of Crossbar.io, which promotes itself as the de facto router implementation. As they are promoting microservice-based architectures, Crossbar.io embeds a service manager for hosting and monitoring WAMP app components, a static file Web server, and a WSGI container. Being written with the Twisted library, it is one of the implementations that can be set up in production without a proxy, aiming to replace stacks such as Nginx associated with Supervisor and Gunicorn.
Use cases
Being a WebSocket sub-protocol, WAMP fits naturally anywhere one would use raw web sockets, as a way to synchronize clients such as Web browsers, push notifications to them and allow soft real-time collaboration between users. It has also the same limitations, requiring client support, which is missing for Internet Explorer versions older than 10. This is mitigated by the existence of polyfills using more portable technologies such as Flash or the use of HTTP Longpoll as a fallback. In that sense, WAMP is a competitor to Meteor's DDP.
WAMP also targets the IoT, where it is used in the same way as MQTT as a light and efficient medium to orchestrate clusters of connected objects. The implementations in various languages make it suitable to control and monitor small devices such as the Raspberry Pi (in Python) or the Tessel (in JavaScript).
And last but not least, WAMP can act as an enterprise service bus, serving as the link between microservices like one would do with CORBA, ZeroMQ, Apache Thrift, SOAP or AMQP.
Evolution
WAMP is currently in version 2 which introduced routed RPC. As of now, all routers are compatible with version 2. Some clients remain unported: Wamp.io, AutobahnAndroid, and cljWAMP.
The version 2 of the specification is divided into two parts: the basic profile, including the router RPC and Pub/Sub, and the advanced profile, featuring trust levels, URI pattern matching, and client listing. The basic profile is considered stable and is what current libraries are implementing while the advanced profile is still in evolution.
Comparison
The WAMP website claims the following selling points for the technology:
Native PubSub: supports Publish & Subscribe out of the box (no extension required).
RPC: supports Remote Procedure Calls out of the box (no extension required).
Routed RPC: supports routed (not only point-to-point) Remote Procedure Calls.
Web native: runs natively on the Web (without tunneling or bridging).
Cross Language: works on and between different programming languages and run-times.
Open Standard: Is an open, official specification implemented by different vendors.
On the other hand, WAMP does not try to achieve some goals of other protocols:
Full object passing like CORBA.
Data synchronization like DDP.
Peer-to-peer communication like ZeroMQ.
Multi-media streaming like WebRTC.
Large file transfer like HTTP.
Nevertheless, numerous protocols share some characteristics with WAMP:
It is important to note, however, that while DDP does Pub/Sub under the hood to synchronize data sets, it does not expose PubSub primitives. It is also an open specification with several implementations, but it is not registered as a standard.
References
Application layer protocols
JSON
Remote procedure call
Data serialization formats
Inter-process communication
Message-oriented middleware
Middleware
Internet protocols
Network protocols
Open standards | Web Application Messaging Protocol | [
"Technology",
"Engineering"
] | 1,901 | [
"Software engineering",
"Middleware",
"IT infrastructure"
] |
45,024,047 | https://en.wikipedia.org/wiki/Ballistic%20capture | Ballistic capture is a low energy method for a spacecraft to achieve an orbit around a distant planet or moon with no fuel required to go into orbit. In the ideal case, the transfer is ballistic (requiring zero Delta-v) after launch. In the traditional alternative to ballistic capture, spacecraft would either use a Hohmann transfer orbit or Oberth effect, which requires the spacecraft to burn fuel in order to slow down at the target. A requirement for the spacecraft to carry fuel adds to its cost and complexity.
To achieve ballistic capture the spacecraft is placed on a flight path ahead of the target's orbital path. The spacecraft then falls into the desired orbit, requiring only minor orbit corrections which may only need low power ion thrusters.
The first paper on ballistic capture transfers designed for spacecraft was written in 1987. The mathematical theory that describes ballistic capture is called Weak Stability Boundary theory.
Ballistic capture was first used by the Japanese spacecraft Hiten in 1991 as a method to get to the Moon. This was designed by Edward Belbruno and J. Miller. The ballistic capture transfer that performed this is an exterior ballistic capture transfer since it goes beyond the Earth-Moon distance. An interior ballistic capture transfer stays within the Earth-Moon distance. This was described in 1987 and was first used by the ESA SMART-1 spacecraft in 2004.
Advantages
Ballistic capture is predicted to be:
safer, as there is no time critical orbit insertion burn,
launchable at almost any time, rather than having to wait for a narrow launch window,
more fuel efficient for some missions.
Low-energy transfer
Trajectories that use ballistic capture are also known as a Low energy transfer (LET). More precisely, the terminology ballistic capture transfer (BCT) is used. They are low energy because they use no delta-V for capture. However, a low energy transfer need not be a ballistic capture transfer. The term ballistic lunar transfer (BLT) is also sometimes used.
The region about a target body where ballistic capture occurs is called a weak stability boundary. The term weak stability boundary transfer is also used, or for short, WSB transfer.
In 2014, ballistic capture transfer was proposed as an alternative low-energy transfer for future Mars missions. It can be performed at almost any time, not only once per 26 months as with other maneuvers, and does not involve dangerous and expensive (in fuel cost) braking. However, it takes up to one year, instead of nine months for a Hohmann transfer.
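For comparison, the conventional Hohmann baseline follows from the vis-viva equation. The sketch below assumes circular, coplanar Earth and Mars orbits and ignores planetary escape and capture dynamics:

```python
import math

MU_SUN = 1.32712440018e20   # m^3/s^2, Sun's gravitational parameter
R_EARTH = 1.496e11          # m, roughly 1 AU
R_MARS = 2.279e11           # m, Mars' mean orbital radius

def vis_viva(r, a):
    """Orbital speed at radius r on an orbit with semi-major axis a."""
    return math.sqrt(MU_SUN * (2.0 / r - 1.0 / a))

a_transfer = (R_EARTH + R_MARS) / 2.0
dv_depart = vis_viva(R_EARTH, a_transfer) - vis_viva(R_EARTH, R_EARTH)
dv_arrive = vis_viva(R_MARS, R_MARS) - vis_viva(R_MARS, a_transfer)
tof_days = math.pi * math.sqrt(a_transfer**3 / MU_SUN) / 86400.0

print(f"heliocentric delta-v: {dv_depart + dv_arrive:.0f} m/s")
print(f"time of flight: {tof_days:.0f} days")
# A ballistic capture transfer trades away the arrival burn (dv_arrive)
# for a longer flight time.
```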
Missions using ballistic capture
The following missions have used ballistic capture transfers, (EBCT – Exterior ballistic capture transfer, IBCT – Interior ballistic capture transfer):
See also
Trans-lunar injection
Asteroid capture
References
Further reading
Lunar Transfer Orbits Utilizing Solar Perturbations and Ballistic Capture; Wolfgang Seefelder; 2002.
Low Energy Transfer To The Moon
Ballistic Lunar Transfer (BLT) Cheat Sheet
Designing Low Energy Capture Transfers for Spacecraft to the Moon and Mars (Special Seminar in Symplectic Geometry), Institute for Advanced Study, Princeton. Tuesday, October 28, 2014
Astrodynamics
Spacecraft propulsion
Orbital maneuvers | Ballistic capture | [
"Engineering"
] | 618 | [
"Astrodynamics",
"Aerospace engineering"
] |
45,024,063 | https://en.wikipedia.org/wiki/PKS%201302%E2%80%93102 | PKS 1302−102 is a quasar in the Virgo constellation, located at a distance of approximately 1.1 Gpc (around 3.5 billion light-years). It has an apparent magnitude of about 14.9 mag in the V band with a redshift of 0.2784. The quasar is hosted by a bright elliptical galaxy, with two neighboring companions at distances of 3 kpc and 6 kpc. The light curve of PKS 1302−102 appears to be sinusoidal with an amplitude of 0.14 mag and a period of 1,884 ± 88 days, which suggests evidence of a supermassive black hole binary.
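Because of cosmological time dilation, the observed period corresponds to a shorter period in the quasar's rest frame:

$$P_{\text{rest}} = \frac{P_{\text{obs}}}{1 + z} = \frac{1884\ \text{days}}{1.2784} \approx 1474\ \text{days} \approx 4\ \text{years}.$$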
Possible black hole binary
PKS 1302−102 was selected from the Catalina Real-Time Transient Survey as one of 20 quasars with apparent periodic variations in the light curve. Of these quasars, PKS 1302−102 appeared to be the best candidate in terms of sinusoidal behavior and other selection criteria, such as data coverage of more than 1.5 cycles in the measured period. One plausible interpretation of the apparent periodic behavior is the possibility of two supermassive black holes (SMBH) orbiting each other with a separation of approximately 0.1 pc in the final stages of a 3.3 billion year old galaxy merger. If this turns out to be the case, it would make PKS 1302−102 an important object of study to various areas of research, including gravitational wave studies and the unsolved final parsec problem in a merger of black holes.
Other, less likely explanations for the observed sinusoidal periodicity include a hot spot on the inner part of the black hole's accretion disk and the possibility of a warped accretion disk which partially eclipses in the orbit around a single SMBH. However, it also remains possible that the periodic behavior in PKS 1302−102 is indeed just a random occurrence in the light curve of an ordinary quasar, as spurious nearly-periodic variations can occur over limited time periods as part of stochastic quasar variability. Further observations of the quasar could either support true periodicity or rule out a binary interpretation, especially if the measured light curve randomly diverges from the sinusoidal model.
References
Further reading
https://arstechnica.com/science/2015/01/supermassive-black-hole-binary-discovered/
https://www.nytimes.com/2015/01/08/science/in-a-far-off-galaxy-2-black-holes-dance-toward-an-explosive-union.html
Quasars
Supermassive black holes
Virgo (constellation)
4662778 | PKS 1302–102 | [
"Physics",
"Astronomy"
] | 575 | [
"Black holes",
"Unsolved problems in physics",
"Supermassive black holes",
"Virgo (constellation)",
"Constellations"
] |
45,025,528 | https://en.wikipedia.org/wiki/Beilby%20Medal%20and%20Prize | The Beilby Medal and Prize is awarded annually to a scientist or engineer for work that has exceptional practical significance in chemical engineering, applied materials science, energy efficiency or a related field. The prize is jointly administered by the Institute of Materials, Minerals and Mining, the Royal Society of Chemistry and the Society of Chemical Industry, who make the award in rotation.
The award is open to members of the Institute of Materials, Minerals and Mining, the Royal Society of Chemistry and the Society of Chemical Industry as well as other scientists and engineers worldwide. The aim of the award is to recognise the achievements of early-career scientists, and nominees should be no older than 39 years of age.
The Beilby Medal and Prize is awarded in memory of Scottish scientist Sir George Thomas Beilby FRS. Born in 1850, he joined the Oakbank Oil Company in 1869 following his studies at the University of Edinburgh. He later became President of all three organisations or their precursor societies, acting as President of the Society of Chemical Industry from 1898 to 1899, the Institute of Chemistry from 1902 to 1912, and the Institute of Metals from 1916 to 1918.
Recipients of the award receive a medal, a certificate and a prize of £1,000. The first award was made in 1930.
Recipients
The Beilby Medal and Prize recipients since 1930 are:
2023 – Charlotte Vogt
2022 – Sahika Inal
2021 – Pola Goldberg Oppenheimer
2020 – Jin Xuan
2019 – Prashant K. Jain
2018 –
2017 –
2016 –
2015 –
2014 – Javier Pérez-Ramírez
2013 –
2012 –
2011 –
2010 –
2009 – Zhenan Bao
2008 – Neil McKeown
2007 –
2006 –
2005 – Simon R. Biggs, Nilay Shah
2004 –
2003 – Peter Bruce
2002 – No award
2001 – Alfred Cerezo
2000 – Zheng Xiao Guo
1999 – John T. S. Irvine, Anthony J. Ryan
1998 – Costos C. Pantelides
1997 – Richard A. Williams
1996 – Paul J. Luckham
1995 – Lynn F. Gladden
1994 –
1993 – Howard A. Chase, David C. Sherrington
1992 – R. C. Brown
1991 – Geoffrey J. Ashwell
1990 – R. F. Dalton
1989 – No award
1988 – No award
1987 – G. E. Thompson
1986 – Malcolm Robert Mackley
1985 – George D. W. Smith
1984 – A. Grint
1983 –
1981 – Derek John Fray, R. M. Nedderman
1980 – James Barrie Scuffham
1979 – Stephen F. Bush
1978 – John Christopher Scully
1977 – James E. Castle
1976 – Ian Fells
1975 – Peter Roland Swann
1973 – Julian Szekely, G. C. Wood
1972 – Frank Pearson Lees
1971 – John Howard Purnell
1970 – Albert R. C. Westwood
1969 – Raymond Edward Smallman
1968 – J. Mardon
1967 – Anthony Kelly
1966 – J. F. Davidson
1965 – J. A. Charles
1964 – Peter L. Pratt
1963 – Robert Honeycombe, R. W. B. Nurse
1961 – C. Edeleanu, John Nutting
1957 – B. E. Hopkins, Edmund C. Potter
1956 – R. W. Kear
1955 – F. D. Richardson, F. Wormwell
1954 – H. K. Hardy, Sir James Woodham Menter
1952 – T. V. Arden
1951 – Kenneth Henderson Jack, W. A. Wood
1950 – W. A. Baker, G. Whittingham
1949 – Frank R. N. Nabarro, C. E. Ransley,
1948 – A. Stuart C. Lawrence
1947 – Geoffrey Vincent Raynor, G. R. Rigby
1940 – F. M. Lea
1938 – Frank Philip Bowden, B. Jones
1937 – Bernard Scott Evans, William Harold Juggins Vernon
1934 – William Hume-Rothery, E. A. Rudge
1933 – Constance Tipper, Arthur Joseph Victor Underwood
1932 – Walter James Rees, W. R. Schoeller
1930 – , Ulick Richardson Evans
See also
List of chemistry awards
List of engineering awards
References
Awards of the Royal Society of Chemistry
Awards established in 1930
British awards
Chemical engineering awards | Beilby Medal and Prize | [
"Chemistry",
"Engineering"
] | 839 | [
"Chemical engineering",
"Chemical engineering awards"
] |
41,787,408 | https://en.wikipedia.org/wiki/Martin%20diameter | The Martin diameter is the length of the area bisector of an irregular object in a specified direction of measurement. It is used to measure particle size in microscopy.
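A minimal sketch of the measurement on a binary particle image, taking the direction of measurement to be horizontal (the pixel grid and the tie-breaking of the bisecting row are simplifying assumptions):

```python
import numpy as np

def martin_diameter(mask: np.ndarray) -> int:
    """Horizontal Martin diameter of a binary particle image, in pixels.

    Finds the row at which the cumulative particle area reaches half the
    total, then returns the horizontal extent of the particle on that row.
    """
    row_areas = mask.sum(axis=1)
    bisector = int(np.searchsorted(np.cumsum(row_areas), mask.sum() / 2.0))
    cols = np.flatnonzero(mask[bisector])
    return int(cols[-1] - cols[0] + 1) if cols.size else 0

# Illustrative particle: a filled triangular blob
mask = np.zeros((9, 9), dtype=bool)
for r in range(9):
    mask[r, : r + 1] = True
print(martin_diameter(mask))   # horizontal chord on the bisecting row
```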
See also
Feret diameter
References
External links
Martin's diameter, Photonics Dictionary Plus
Microscopy
Length | Martin diameter | [
"Physics",
"Chemistry",
"Mathematics"
] | 54 | [
"Scalar physical quantities",
"Physical quantities",
"Distance",
"Quantity",
"Size",
"Length",
"Microscopy",
"Wikipedia categories named after physical quantities"
] |
41,790,574 | https://en.wikipedia.org/wiki/Moduli%20stack%20of%20principal%20bundles | In algebraic geometry, given a smooth projective curve X over a finite field $\mathbb{F}_q$ and a smooth affine group scheme G over it, the moduli stack of principal bundles over X, denoted by $\operatorname{Bun}_G(X)$, is an algebraic stack given by: for any $\mathbb{F}_q$-algebra R,
the category of principal G-bundles over the relative curve $X \times_{\mathbb{F}_q} \operatorname{Spec} R$.
In particular, the category of $\mathbb{F}_q$-points of $\operatorname{Bun}_G(X)$, that is, $\operatorname{Bun}_G(X)(\mathbb{F}_q)$, is the category of G-bundles over X.
Similarly, $\operatorname{Bun}_G(X)$ can also be defined when the curve X is over the field of complex numbers. Roughly, in the complex case, one can define $\operatorname{Bun}_G(X)$ as the quotient stack of the space of holomorphic connections on X by the gauge group. Replacing the quotient stack (which is not a topological space) by a homotopy quotient (which is a topological space) gives the homotopy type of $\operatorname{Bun}_G(X)$.
In the finite field case, it is not common to define the homotopy type of $\operatorname{Bun}_G(X)$. But one can still define a (smooth) cohomology and homology of $\operatorname{Bun}_G(X)$.
Basic properties
It is known that $\operatorname{Bun}_G(X)$ is a smooth stack of dimension $(g - 1) \dim G$, where g is the genus of X. It is not of finite type but locally of finite type; one thus usually uses a stratification by open substacks of finite type (cf. the Harder–Narasimhan stratification); analogous results are available for parahoric G over the curve X and for G only a flat group scheme of finite type over X.
If G is a split reductive group, then the set of connected components $\pi_0(\operatorname{Bun}_G(X))$ is in a natural bijection with the fundamental group $\pi_1(G)$.
The Atiyah–Bott formula
Behrend's trace formula
This is a (conjectural) version of the Lefschetz trace formula for $\operatorname{Bun}_G(X)$ when X is over a finite field, introduced by Behrend in 1993. It states: if G is a smooth affine group scheme with semisimple connected generic fiber, then
$$\# \operatorname{Bun}_G(X)(\mathbb{F}_q) = q^{\dim \operatorname{Bun}_G(X)} \sum_{i=0}^{\infty} (-1)^i \operatorname{tr}\!\left(\phi^{-1} \mid H^i(\operatorname{Bun}_G(X); \mathbb{Z}_l)\right),$$
where (see also Behrend's trace formula for the details)
l is a prime number that is not p and the ring of l-adic integers $\mathbb{Z}_l$ is viewed as a subring of $\mathbb{C}$.
$\phi$ is the geometric Frobenius.
$\# \operatorname{Bun}_G(X)(\mathbb{F}_q) = \sum_P \frac{1}{\# \operatorname{Aut}(P)}$, the sum running over all isomorphism classes of G-bundles on X and convergent.
$\operatorname{tr}\!\left(\phi^{-1} \mid V^{\bullet}\right) = \sum_i \operatorname{tr}\!\left(\phi^{-1} \mid V^i\right)$ for a graded vector space $V^{\bullet}$, provided the series on the right absolutely converges.
A priori, neither left nor right side in the formula converges. Thus, the formula states that the two sides converge to finite numbers and that those numbers coincide.
Notes
References
J. Heinloth, A.H.W. Schmitt, The Cohomology Ring of Moduli Stacks of Principal Bundles over Curves, 2010 preprint, available at http://www.uni-essen.de/~hm0002/.
Further reading
C. Sorger, Lectures on moduli of principal G-bundles over algebraic curves
See also
Geometric Langlands conjectures
Ran space
Moduli stack of vector bundles
Algebraic geometry | Moduli stack of principal bundles | [
"Mathematics"
] | 611 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
41,791,807 | https://en.wikipedia.org/wiki/Behrend%27s%20trace%20formula | In algebraic geometry, Behrend's trace formula is a generalization of the Grothendieck–Lefschetz trace formula to a smooth algebraic stack over a finite field conjectured in 1993 and proven in 2003 by Kai Behrend. Unlike the classical one, the formula counts points in the "stacky way"; it takes into account the presence of nontrivial automorphisms.
The desire for the formula comes from the fact that it applies to the moduli stack of principal bundles on a curve over a finite field (in some instances indirectly, via the Harder–Narasimhan stratification, as the moduli stack is not of finite type.) See the moduli stack of principal bundles and references therein for the precise formulation in this case.
Pierre Deligne found an example that shows the formula may be interpreted as a sort of the Selberg trace formula.
A proof of the formula in the context of the six operations formalism developed by Yves Laszlo and Martin Olsson is given by Shenghao Sun.
Formulation
By definition, if C is a category in which each object has finitely many automorphisms, the number of points in C is denoted by
$$\# C = \sum_p \frac{1}{\# \operatorname{Aut}(p)},$$
with the sum running over representatives p of all isomorphism classes in C. (The series may diverge in general.) The formula states: for a smooth algebraic stack X of finite type over a finite field $\mathbb{F}_q$ and the "arithmetic" Frobenius $\phi^{-1}$, i.e., the inverse of the usual geometric Frobenius $\phi$ in Grothendieck's formula,
$$\# X(\mathbb{F}_q) = q^{\dim X} \sum_{i=0}^{\infty} (-1)^i \operatorname{tr}\!\left(\phi^{-1} \mid H^i(X; \mathbb{Q}_l)\right).$$
Here, it is crucial that the cohomology of a stack is with respect to the smooth topology (not etale).
When X is a variety, the smooth cohomology is the same as etale one and, via the Poincaré duality, this is equivalent to Grothendieck's trace formula. (But the proof of Behrend's trace formula relies on Grothendieck's formula, so this does not subsume Grothendieck's.)
Simple example
Consider $B\mathbb{G}_m$, the classifying stack of the multiplicative group scheme (that is, $B\mathbb{G}_m$ classifies principal $\mathbb{G}_m$-bundles). By definition, $B\mathbb{G}_m(\mathbb{F}_q)$ is the category of principal $\mathbb{G}_m$-bundles over $\operatorname{Spec} \mathbb{F}_q$, which has only one isomorphism class (since all such bundles are trivial by Lang's theorem). Its group of automorphisms is $\mathbb{F}_q^{\times}$, which means that the stacky number of points is $\# B\mathbb{G}_m(\mathbb{F}_q) = \frac{1}{\# \mathbb{F}_q^{\times}} = \frac{1}{q - 1}$.
On the other hand, we may compute the l-adic cohomology of $B\mathbb{G}_m$ directly. We remark that in the topological setting, we have $B\mathbb{C}^{\times} \simeq \mathbb{CP}^{\infty}$ (where $B$ now denotes the usual classifying space of a topological group), whose rational cohomology ring is a polynomial ring in one generator (Borel's theorem), but we shall not use this directly. If we wish to stay in the world of algebraic geometry, we may instead "approximate" $B\mathbb{G}_m$ by projective spaces of larger and larger dimension. Thus we consider the map $\mathbb{P}^N \to B\mathbb{G}_m$ induced by the $\mathbb{G}_m$-bundle corresponding to the tautological line bundle on $\mathbb{P}^N$. This map induces an isomorphism in cohomology in degrees up to 2N. Thus the even (resp. odd) Betti numbers of $B\mathbb{G}_m$ are 1 (resp. 0), and the l-adic Galois representation on the (2n)th cohomology group is the nth power of the cyclotomic character. The second part is a consequence of the fact that the cohomology of $B\mathbb{G}_m$ is generated by algebraic cycle classes. This shows that
$$\sum_{i=0}^{\infty} (-1)^i \operatorname{tr}\!\left(\phi^{-1} \mid H^i(B\mathbb{G}_m; \mathbb{Q}_l)\right) = 1 + q^{-1} + q^{-2} + \cdots.$$
Note that
$$\frac{1}{q - 1} = q^{-1}\,\frac{1}{1 - q^{-1}} = q^{-1}\left(1 + q^{-1} + q^{-2} + \cdots\right).$$
Multiplying by $q^{\dim B\mathbb{G}_m} = q^{-1}$, one obtains the predicted equality.
Notes
References
Theorems in algebraic geometry | Behrend's trace formula | [
"Mathematics"
] | 731 | [
"Theorems in algebraic geometry",
"Theorems in geometry"
] |
41,795,239 | https://en.wikipedia.org/wiki/John%20R.%20Huizenga | John Robert Huizenga (April 21, 1921 – January 25, 2014) was an American physicist who helped build the first atomic bomb and who also debunked University of Utah scientists' claim of achieving cold fusion.
Early life and education
John Robert Huizenga was born on a farm near Fulton, Illinois, the son of Henry and Josie (Brands) Huizenga. He attended Erie High School and Morrison High School, graduating from the latter in 1940. He continued his education at Calvin College in Michigan, from which he received a bachelor's degree in 1944. He would maintain his ties to Calvin later in life, for example collaborating on fundamental nuclear research with his Calvin friend Roger Griffioen, who had gone on to become a professor there. Calvin would name him one of the college's Distinguished Alumni in 1975.
Along with other Calvin students, he was recruited after graduation to work for the Manhattan Project, at the Project's site in Oak Ridge, Tennessee, that was dedicated to the production of highly enriched uranium. Following his time in Oak Ridge, he continued his education at the University of Illinois, receiving a Doctor of Philosophy degree in physical chemistry in 1949. On completing his studies he held joint appointments at the University of Chicago and Argonne National Laboratory.
Professional career
During World War II, Huizenga supervised teams at the Manhattan Project in Oak Ridge, Tenn., involved in enriching uranium used in the atomic weapon dropped on Hiroshima in August 1945. During his Argonne years, as a result of examining debris from the "Ivy Mike" nuclear test in 1952, Huizenga was part of the team that added two new synthetic chemical elements, einsteinium and fermium, to the periodic table. Huizenga and his colleagues were at first unable to publish papers on their discoveries in the open literature, because of classification concerns relating to the nuclear test, but these concerns were eventually resolved and the team was able to publish in Physical Review and thus claim priority for their discovery. During his Argonne years he was one of the founders of the Gordon Research Conferences on nuclear chemistry, serving as chairman of the nuclear chemistry Gordon Conference in 1958. He received a Guggenheim Fellowship in 1964 and took a sabbatical from Argonne to further his studies as a visiting professor at the University of Paris for the 1964–1965 academic year.
In 1967, he became a professor of chemistry and physics at the University of Rochester where he worked for the remainder of his career, apart from a second Guggenheim Fellowship that allowed him to engage in research during the 1973–1974 school year at the University of California, Berkeley, the Technische Universität München, and the Niels Bohr Institute in Copenhagen. His research interests at Rochester covered topics in nuclear structure of actinides, nuclear fission, and nuclear reactions between heavy ions. He was chairman of the Department of Chemistry from 1983 to 1988, retiring as Tracy H. Harris Professor (later Professor Emeritus) of Chemistry.
During Huizenga's time at Rochester, the university had its own particle accelerator, a tandem Van de Graaff accelerator that produced beams of nuclei accelerated to energies of several MeV per nucleon. This facility, which opened in 1966, afforded him the opportunity to continue his research program in experimental nuclear science. However, the limited beam energies available led him to more powerful accelerators, such as the SuperHILAC at Berkeley and the Los Alamos Meson Physics Facility, LAMPF, at Los Alamos National Laboratory, for his experimental work. His LAMPF proposal to study actinide muonic atoms was one of the earliest experiments to receive beam time at the LAMPF stopped-muon facility.
In 1989, Huizenga co-chaired, with Norman Ramsey, a panel convened by the United States Department of Energy to evaluate claims by two University of Utah chemists that they had achieved nuclear fusion at room temperature. The findings of the Huizenga/Ramsey panel, although highly skeptical of the reality of cold fusion, were cautious:
Based on the examination of published reports, reprints, numerous communications to the Panel and several site visits, the Panel concludes that the experimental results of excess heat from calorimetric cells reported to date do not present convincing evidence that useful sources of energy will result from the phenomena attributed to cold fusion. ... The Panel concludes that the experiments reported to date do not present convincing evidence to associate the reported anomalous heat with a nuclear process. ...
Current understanding of the very extensive literature of experimental and theoretical results for hydrogen in solids gives no support for the occurrence of cold fusion in solids. Specifically, no theoretical or experimental evidence suggests the existence of D-D distances shorter than that in the molecule D2 or the achievement of "confinement" pressure above relatively modest levels. The known behavior of deuterium in solids does not give any support for the supposition that the fusion probability is enhanced by the presence of the palladium, titanium, or other elements.
Nuclear fusion at room temperature, of the type discussed in this report, would be contrary to all understanding gained of nuclear reactions in the last half century; it would require the invention of an entirely new nuclear process.
However, Huizenga later published a book titled "Cold Fusion: The Scientific Fiasco of the Century".
Awards and honors
Huizenga was elected to the National Academy of Sciences in 1976 and the American Academy of Arts and Sciences (Fellow) in 1992. He was a 1966 recipient of the Ernest Orlando Lawrence Award bestowed by the United States Atomic Energy Commission.
Personal life
Huizenga married Dorothy Koeze in 1946. They had two sons and two daughters. One son, Dr. Robert Huizenga, is a prominent physician whose career has included a stint as team physician for the Los Angeles Raiders American football team.
Following his retirement from Rochester, Huizenga and his wife moved to North Carolina, where he continued to serve on advisory committees at major accelerator laboratories, worked to debunk cold fusion, and wrote his memoirs. Dolly Huizenga died in 1999. John Huizenga died of heart failure in San Diego, California, in January 2014, aged 92.
Published works
References
1921 births
2014 deaths
People from Fulton, Illinois
American physicists
Nuclear chemists
Manhattan Project people
Cold fusion
University of Rochester faculty
Calvin University alumni
Members of the United States National Academy of Sciences
Writers from Illinois
Fellows of the American Physical Society
American expatriates in France | John R. Huizenga | [
"Physics",
"Chemistry"
] | 1,316 | [
"Nuclear chemists",
"Nuclear fusion",
"Cold fusion",
"Nuclear physics"
] |
36,172,654 | https://en.wikipedia.org/wiki/Uranium%20hexachloride | Uranium hexachloride () is an inorganic chemical compound of uranium in the +6 oxidation state. is a metal halide composed of uranium and chlorine. It is a multi-luminescent dark green crystalline solid with a vapor pressure between 1-3 mmHg at 373.15 K. is stable in a vacuum, dry air, nitrogen and helium at room temperature. It is soluble in carbon tetrachloride (). Compared to the other uranium halides, little is known about .
Structure and Bonding
Uranium hexachloride has an octahedral geometry, with point group Oh. Its lattice (dimensions: 10.95 ± 0.02 Å × 6.03 ± 0.01 Å) is hexagonal in shape with three molecules per cell; the average theoretical U–Cl bond is 2.472 Å long (the experimental U–Cl length found by X-ray diffraction is 2.42 Å), and the distance between two adjacent chlorine atoms is 3.65 Å.
Chemical properties
Uranium hexachloride is a highly hygroscopic compound and decomposes readily when exposed to ordinary atmospheric conditions; it should therefore be handled in either a vacuum apparatus or a dry box.
Thermal decomposition
UCl6 is stable up to temperatures between 120 °C and 150 °C. The decomposition of solid UCl6 results in a solid phase transition from one crystal form of UCl6 to another, more stable form. However, the decomposition of gaseous UCl6 produces a lower uranium chloride. The activation energy for this reaction is about 40 kcal per mole.
Solubility
UCl6 is not a very soluble compound. It dissolves in CCl4 to give a brown solution. It is slightly soluble in isobutyl bromide and in fluorocarbons.
Reaction with hydrogen fluoride
Reacting UCl6 with purified anhydrous liquid hydrogen fluoride (HF) at room temperature produces the corresponding uranium fluoride.
Synthesis
Uranium hexachloride can be synthesized from the reaction of uranium trioxide (UO3) with a mixture of liquid carbon tetrachloride and hot chlorine (Cl2). The yield can be increased if the reaction is carried out in the presence of additional chlorinating agent. The UO3 is first converted to an intermediate uranium chloride, which in turn reacts with the excess chlorine to form UCl6. It requires a substantial amount of heat for the reaction to take place; the temperature range is from 65 °C to 170 °C depending on the amount of reactant (the ideal temperature is 100 °C to 125 °C). The reaction is carried out in a closed gas-tight vessel (for example a glovebox) that can withstand the pressure that builds up.
This metal hexahalide can also be synthesized by blowing chlorine gas over a sublimed lower uranium chloride at 350 °C.
References
Uranium(VI) compounds
Chlorides
Actinide halides | Uranium hexachloride | [
"Chemistry"
] | 570 | [
"Chlorides",
"Inorganic compounds",
"Salts"
] |
36,172,705 | https://en.wikipedia.org/wiki/Uranium%20tetrabromide | Uranium tetrabromide is an inorganic chemical compound of uranium in oxidation state +4.
Production
Uranium tetrabromide can be produced by reacting uranium and bromine:
U + 2 Br2 → UBr4
References
Uranium(IV) compounds
Bromides
Actinide halides | Uranium tetrabromide | [
"Chemistry"
] | 59 | [
"Bromides",
"Inorganic compounds",
"Inorganic compound stubs",
"Salts"
] |
36,172,806 | https://en.wikipedia.org/wiki/Uranium%20disilicide | Uranium disilicide is an inorganic chemical compound of uranium in oxidation state +4. It is a silicide of uranium.
There has been recent interest in using uranium disilicide as an alternative to uranium dioxide fuel in nuclear reactors. Its advantages are a higher percentage of uranium and a higher thermal conductivity. A direct replacement of UO2 with U3Si2 should enable a reactor to generate more energy from a set of fuel rods and also provide more "coping time" in the case of a LOCA (loss-of-coolant accident).
The development of uranium disilicide, uranium nitride, or another high-thermal-conductivity uranium compound may be critical for the performance of "accident tolerant fuel", a development effort mandated by the US Department of Energy. This is because Zircaloy has a higher thermal conductivity than all of the replacement cladding materials being developed. In particular, SiC-SiC ceramic matrix composite (CMC), which has several material properties superior to Zircaloy's for this application, has about five times lower thermal conductivity than Zircaloy (the exact value varies with the manufacturing methods used for the fiber and for the matrix). The lower thermal conductivity means that a reactor using fuel rods with SiC-SiC CMC cladding and conventional UO2 fuel would have to either: 1) run at a lower power output to keep the fuel at the same temperature, or 2) run at the same power with hotter fuel, which leaves the reactor less coping time (time to fix what is wrong before something fails). The alternative, enabled by U3Si2, which has about five times better thermal conductivity than UO2, is expected to be a fuel rod capable of equal power output, slightly better energy output, and longer coping time.
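A rough estimate shows why the factor of five matters. For a uniformly heated cylindrical fuel pellet, the standard conduction result relates the centerline-to-surface temperature rise to the linear heat rate q′ and the fuel conductivity k; the numbers below are representative values assumed for illustration, not figures from this article:

```latex
\Delta T_{\text{centerline}} = \frac{q'}{4\pi k}
% With a representative linear heat rate q' = 20~\mathrm{kW/m}:
% UO_2:    k \approx 3~\mathrm{W\,m^{-1}K^{-1}}  \;\Rightarrow\; \Delta T \approx 530~\mathrm{K}
% U_3Si_2: k \approx 15~\mathrm{W\,m^{-1}K^{-1}} \;\Rightarrow\; \Delta T \approx 106~\mathrm{K}
```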
References
Further reading
http://www.rertr.anl.gov/Web1999/Abstracts/18suripto99.html
Uranium(IV) compounds
Transition metal silicides | Uranium disilicide | [
"Chemistry"
] | 422 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
36,172,886 | https://en.wikipedia.org/wiki/Uranium%20disulfide | Uranium disulfide is an inorganic chemical compound of uranium in oxidation state +4 and sulfur in oxidation state -2. It is radioactive and appears in the form of black crystals.
Uranium disulfide has two allotropic forms: α-uranium disulfide, which is stable above the transition temperature (about 1350 °C) and metastable below it, and β-uranium disulfide which is stable below this temperature. The tetragonal crystal structure of α-US2 is identical to α-USe2.
Uranium disulfide can be synthesized by reduction of gaseous hydrogen sulfide with uranium metal powder at elevated temperatures.
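The article does not give the reaction equation; assuming direct reaction of uranium metal with hydrogen sulfide and release of hydrogen, the implied stoichiometry would be:

```latex
\mathrm{U} + 2\,\mathrm{H_2S} \longrightarrow \mathrm{US_2} + 2\,\mathrm{H_2}
```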
References
Uranium(IV) compounds
Sulfides
Dichalcogenides | Uranium disulfide | [
"Chemistry"
] | 147 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
36,173,298 | https://en.wikipedia.org/wiki/Diuranium%20pentoxide | Diuranium pentoxide (uranium(V) oxide) is an inorganic chemical compound of uranium and oxygen.
References
Uranium(V) compounds
Oxides | Diuranium pentoxide | [
"Chemistry"
] | 33 | [
"Inorganic compounds",
"Oxides",
"Inorganic compound stubs",
"Salts"
] |
26,322,186 | https://en.wikipedia.org/wiki/AutoDock | AutoDock is a molecular modeling simulation software. It is especially effective for protein-ligand docking. AutoDock 4 is available under the GNU General Public License. AutoDock is one of the most cited docking software applications in the research community. It is used by the FightAIDS@Home and OpenPandemics - COVID-19 projects run at World Community Grid, to search for antivirals against HIV/AIDS and COVID-19. In February 2007, a search of the ISI Citation Index showed more than 1,100 publications had been cited using the primary AutoDock method papers. As of 2009, this number surpassed 1,200.
AutoDock Vina is a successor of AutoDock, significantly improved in terms of accuracy and performance. It is available under the Apache license.
Both AutoDock and Vina are currently maintained by Scripps Research, specifically by the Center for Computational Structural Biology (CCSB) led by Dr. Arthur J. Olson.
AutoDock is widely used and played a role in the development of the first clinically approved HIV-1 integrase inhibitor by Merck & Co.
Programs
AutoDock consists of two main programs:
AutoDock for docking of the ligand to a set of grids describing the target protein;
AutoGrid for pre-calculating these grids.
Usage of AutoDock has contributed to the discovery of several drugs, including HIV-1 integrase inhibitors.
Platform support
AutoDock runs on Linux, Mac OS X, SGI IRIX, and Microsoft Windows. It is available as a package in several Linux distributions, including Debian, Fedora, and Arch Linux.
Compiling the application in native 64-bit mode on Microsoft Windows enables faster floating-point operation of the software.
Improved versions
AutoDock for GPUs
Improved calculation routines using OpenCL and CUDA have been developed by the AutoDock Scripps research team.
It results in observed speedups of up to 4x (quad-core CPU) and 56x (GPU) over the original serial AutoDock 4.2 (Solis-Wets) on CPU.
The CUDA version was developed in a collaboration between the Scripps research team and Nvidia while the OpenCL version was further optimized with support from the IBM World Community Grid team.
AutoDock Vina
AutoDock has a successor, AutoDock Vina, which has an improved local search routine and makes use of multicore/multi-CPU computer setups.
AutoDock Vina has been noted for running significantly faster under 64-bit Linux operating systems in several World Community Grid projects that used the software.
AutoDock Vina is currently on version 1.2, released in July 2021.
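Version 1.2 also ships Python bindings. The following is a minimal docking sketch using that API; the file names, box center, and box size are illustrative placeholders rather than values from this article, and the input structures must already be prepared in PDBQT format:

```python
# Minimal AutoDock Vina 1.2 docking run via the `vina` Python package.
# File names and search-box parameters below are illustrative placeholders.
from vina import Vina

v = Vina(sf_name="vina")                # use the Vina scoring function
v.set_receptor("receptor.pdbqt")        # rigid receptor prepared as PDBQT
v.set_ligand_from_file("ligand.pdbqt")  # ligand prepared as PDBQT

# Pre-compute affinity maps over a 20 A cubic box centered on the binding site
v.compute_vina_maps(center=[15.0, 53.0, 16.5], box_size=[20, 20, 20])

v.dock(exhaustiveness=8, n_poses=5)     # global search plus local refinement
v.write_poses("docked_poses.pdbqt", n_poses=5, overwrite=True)
```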
Third-party improvements and tools
As an open source project, AutoDock has gained several third-party improved versions such as:
Scoring and Minimization with AutoDock Vina (smina) is a fork of AutoDock Vina with improved support for scoring function development and energy minimization.
Off-Target Pipeline allows integration of AutoDock within bigger projects.
Consensus Scoring ToolKit provides rescoring of AutoDock Vina poses with multiple scoring functions and calibration of consensus scoring equations.
VSLAB is a VMD plug-in that allows the use of AutoDock directly from VMD.
PyRx provides a graphical user interface for running virtual screening with AutoDock. It includes a docking wizard and can be used to run AutoDock Vina in the cloud or on an HPC cluster.
POAP is a shell-script-based tool which automates AutoDock for virtual screening from ligand preparation to post docking analysis.
VirtualFlow makes it possible to carry out ultra-large virtual screens on computer clusters and in the cloud using AutoDock Vina-based docking programs, allowing billions of compounds to be screened routinely.
FPGA acceleration
Using general programmable chips as co-processors (specifically the experimental OMIXON product), speedups in the range of 10x to 100x over a standard 2 GHz Intel dual-core CPU have been reported.
See also
Docking (molecular)
Virtual screening
List of protein-ligand docking software
References
External links
AutoDock homepage
AutoDock Vina homepage
Molecular modelling software
Molecular modelling
Free and open-source software
Free software programmed in C++
Free software programmed in C
Software using the Apache license
Software using the GNU General Public License | AutoDock | [
"Chemistry"
] | 874 | [
"Molecular modelling software",
"Molecular physics",
"Computational chemistry software",
"Theoretical chemistry",
"Molecular modelling"
] |
26,322,806 | https://en.wikipedia.org/wiki/Continuous%20adsorption-regeneration | Electrochemical regeneration of activated carbon adsorbents such as granular activated carbon present an alternative to thermal regeneration or land filling at the end of useful adsorbent life. Continuous adsorption-electrochemical regeneration encompasses the adsorption and regeneration steps, typically separated in the bulk of industrial processes due to long adsorption equilibrium times (ranging from hours to months), into one continuous system. This is possible using a non-porous, electrically conducting carbon derivative called Nyex. The non-porosity of Nyex allows it to achieve its full adsorptive capacity within a few minutes and its electrical conductivity allows it to form part of the electrode in an electrochemical cell. As a result of its properties Nyex can undergo quick adsorption and fast electrochemical regeneration in a combined adsorption-electrochemical regeneration cell achieving 100% regeneration efficiency.
Continuous adsorption-regeneration cell
The adsorption regeneration process is divided into three key elements which occur in different parts of the cell. All three occur continuously and simultaneously, with parameters such as charge passed, rate of effluent in/outflow and air inlet rate varied according to pollutant type and concentration.
Pollutant contacting and adsorption
Polluted effluent is added into the bottom of the cell and mixed with the adsorbent in the adsorption zone, where adsorption of the pollutants onto the surface of the adsorbent occurs. Mixing between the adsorbent and the polluted effluent is promoted by air spargers at the base of the cell, which also facilitate the migration of the adsorbent upwards and into the cell's sedimentation zone.
Adsorbent-treated effluent separation
The adsorbent is separated from the now treated effluent in the sedimentation zone where the density of the adsorbent allows separation by gravitational sedimentation. The treated effluent is allowed to overflow out of the cell.
Adsorbent electrochemical regeneration
The adsorbent, loaded with adsorbed pollutant on its surface, settles and forms a bed in the regeneration zone of the cell. The mass of the Nyex causes the bed to travel down the regeneration column slowly and eventually pass back into the cell. During the journey down the regeneration column, a DC current is passed across the electrochemical cell, of which the adsorbent forms the anode. The applied current causes the pollutants adsorbed on the surface of the Nyex to be electrochemically oxidised, regenerating the adsorbent surface and restoring its full adsorptive capacity, completing the adsorption-regeneration cycle.
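For a sense of the charge the regeneration step must pass, Faraday's law relates the charge required to fully oxidise a mass m of pollutant to the number of electrons transferred per molecule. The worked number below uses phenol as an illustrative pollutant and assumes 100% current efficiency; neither is specified by the source:

```latex
Q = \frac{zFm}{M}
% z = electrons per molecule for complete oxidation, F = 96485~\mathrm{C/mol},
% M = molar mass. For phenol: C_6H_5OH + 11\,H_2O \to 6\,CO_2 + 28\,H^+ + 28\,e^-,
% so z = 28 and M = 94.1~\mathrm{g/mol}; per gram of phenol:
Q \approx \frac{28 \times 96485}{94.1} \approx 2.9 \times 10^{4}~\mathrm{C} \approx 8~\mathrm{A\,h}
```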
Applications
This technology is currently being incorporated into a variety of industries for applications in effluent treatment areas such as:
Groundwater remediation
Volatile Organic Compound Removal
Dye-house Effluent De-Colourisation
Electrochemical Disinfection
References
Water treatment
Electrochemistry | Continuous adsorption-regeneration | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 615 | [
"Water treatment",
"Water pollution",
"Electrochemistry",
"Environmental engineering",
"Water technology"
] |
26,325,026 | https://en.wikipedia.org/wiki/CTL-mediated%20cytotoxicity | Within the scientific discipline of toxicology, Cytotoxic T lymphocytes (CTLs) are generated by immune activation of cytotoxic T cells (Tc cells). They are generally CD8+, which makes them MHC class I restricted. CTLs are able to eliminate most cells in the body since most nucleated cells express class I MHC molecules. The CTL-mediated immune system can be divided into two phases. In the first phase, functional effector CTLs are generated from naive Tc cells through activation and differentiation. In the second phase, affector CTLs destroy target cells by recognizing the antigen-MHC class I complex.
Phase 1
In phase one, effector CTLs are generated from CTL precursors. The CTL precursors include naive Tc cells, which are not yet capable of killing target cells. After a precursor cell has been activated, it can then differentiate into a functional CTL with cytotoxic activity. There are three sequential signals that are required to complete this process.
First, there is TCR recognition of the peptide-MHC class I complex. This step allows the cell to become licensed by an antigen-presenting cell.
Second, a costimulatory signal is transmitted by the interaction between CD28 and B7 of the precursor cell and the licensed antigen-presenting cell.
Last, a signal is induced by the interaction between IL-2 and the high-affinity IL-2 receptor. This results in proliferation and differentiation of the antigen-activated precursor cell into a functional effector CTL.
Phase 2
In phase two, the now functional effector CTLs destroy the target cells. This can be done in two ways. These pathways are the cytotoxic protein pathway and the Fas ligand pathway. Apoptosis is the primary mechanism for both of these pathways.
Cytotoxic protein pathway
One pathway is the cytotoxic protein pathway, in which perforins and granzymes are taken up by the target cell. The perforins facilitate the entry of granule contents into the cell, and the granzymes then activate the endogenous apoptosis pathway, which induces cell death without necrosis. This leaves packages of fragmented DNA material from the target cell for the macrophages to dispose of.
Fas ligand pathway
The other pathway is the Fas ligand pathway. In this pathway, a Fas ligand (FasL) on the CTL binds to the Fas receptor (FasR) on the target cell. This pathway is independent of granzymes; instead it involves activation of interleukin-1β converting enzyme (ICE, also known as caspase 1), whose action is similar to that of granzyme B. This ultimately leads to DNA fragmentation.
References
Kindt, Thomas, Richard Goldsby, Barbara Osborne, and Janis Kuby. Kuby Immunology. W. H. Freeman, 2007. 353–360.
Toxicology | CTL-mediated cytotoxicity | [
"Environmental_science"
] | 612 | [
"Toxicology"
] |
26,325,989 | https://en.wikipedia.org/wiki/Richard%20C.%20Mulligan | Richard C. Mulligan (born 1954) is an American scientist who is the Mallinckrodt Professor of Genetics at Harvard Medical School, the Director of the Harvard Gene Therapy Initiative and a visiting scientist at the Koch Institute for Integrative Cancer Research at the Massachusetts Institute of Technology. He is also the head of SanaX at Sana Biotechnology.
Research and career
Mulligan started his career in gene therapy as an undergraduate in biology in Alexander Rich's lab at MIT and was involved with early work controlling gene expression using SV40. He earned his PhD in biochemistry at Stanford University in 1980, working with Paul Berg to develop viral vectors to express human and bacterial genes. He then did his postdoctoral training at the Center for Cancer Research at MIT with David Baltimore and Phillip Sharp, after which he joined the faculty in molecular biology and was a member of the Whitehead Institute for Biomedical Research. During that time, he was a founding member of the Recombinant DNA Advisory Committee (RAC). In 1996, he joined Children's Hospital and Harvard to become the director of the Harvard Gene Therapy Initiative and an investigator of the Howard Hughes Medical Institute.
Mulligan is also an active investor. He was a founding partner and senior managing director of Sarissa Capital Management from 2013 to 2016, along with Alex Denner, with whom he had worked under Carl Icahn; he then joined Icahn Capital as a portfolio manager in 2017. He serves as a director of Enzon Pharmaceuticals and Biogen Idec, Inc.
Awards
1981 MacArthur Fellows Program
1983 Searle Scholars Program
1993 ASBMB-Amgen Award
Works
Lindemann, D., Patriquin, E., Feng, S. and Mulligan, R.C. 1997 "Versatile retrovirus vector systems for regulated gene expression in vitro and in vivo". Molecular Medicine 3:466-476.
Goodell, M.A., Rosenzweig, H-.K., Marks, D.G., DeMaria, M., Paradis, G., Grupp, S.A., Sieff, C.A., Mulligan, R.C. and Johnson, R.P. 1997. "Dye efflux studies suggest the existence of CD34-negative/low hematopoietic stem cells in multiple species". Nature Medicine 3:1337–1345.
Mach, N., Lantz, C.S., Galli, S.J., Reznikoff, G., Mihm, M., Small, C., Granstein, R., Beissert, S., Sadelain, M., Mulligan, R.C. and Dranoff, G. 1998. "Involvement of interleukin-3 in delayed-type hypersensitivity". Blood 92:778-783.
References
1954 births
MacArthur Fellows
Harvard Medical School faculty
Living people
Searle Scholars Program recipients
Gene therapy
Stanford University School of Medicine alumni
Massachusetts Institute of Technology alumni
Massachusetts Institute of Technology faculty
Howard Hughes Medical Investigators | Richard C. Mulligan | [
"Engineering",
"Biology"
] | 633 | [
"Gene therapy",
"Genetic engineering"
] |
26,326,433 | https://en.wikipedia.org/wiki/Packard%20Automotive%20Plant | The Packard Automotive Plant was an automobile-manufacturing factory in Detroit, Michigan, where luxury cars were made by the Packard Motor Car Company and later by the Studebaker-Packard Corporation. Demolition began on building 21 on October 27, 2022, and a second round of demolition began on building 28 on January 24, 2023, which was wrapped up by April 1, however all demolition efforts by the City of Detroit halted, which stopped finishing demolition work of building 21. The Packard Plant currently sits empty and partially demolished, with many parcels still remaining.
Design and operation
Under Packard
The 3,500,000-square-foot (325,000 m2) factory, designed by Albert Kahn Associates using Trussed Concrete Steel Company products, is located on a 40-acre tract of land on East Grand Boulevard on Detroit's east side. It included the first use of reinforced concrete in the United States for industrial construction in the automobile industry.
The Packard plant was opened in 1903 and contained 10,000 square feet of floor space; at the time it was considered the most advanced automobile manufacturing facility in the world: modern, efficient, and massive in scale. By 1908, when an enlargement for the construction of trucks was announced, the factory was already six times larger than when constructed and occupied over fourteen acres of space. At its peak the complex employed 40,000 people, including skilled craftsmen involved in over eighty trades. The plant turned out Packard automobiles from 1903 to 1956, except during World War II, when production was shifted to war material, particularly the Packard V-1650 Merlin, which powered the North American P-51 Mustang fighter plane.
After Packard
The factory complex closed in 1958, though other businesses operated on the premises or used it for storage until the late 1990s.
In the 1990s, the buildings were used to host infamous "underground" raves and techno parties, including the Spastik party hosted by Richie Hawtin. The majority of the property was claimed by the city of Detroit in 1994 after former owners failed to pay back taxes. Parts of the complex continued to operate under the name The Motor City Industrial Complex until 1999 when it was closed by the city.
A number of the outer buildings were in use by businesses up through the early 2000s. In 2010, Chemical Processing announced its intention to vacate the premises after 52 years. This left Kirlin Co., a lighting company located in Building 22, a 255,000-square-foot 1939 addition connected to the north side, as the complex's sole remaining tenant.
In 2010, a mural by the England-based graffiti artist, Banksy, was discovered in the ruins of the plant. In 2015, the mural, entitled I Remember When All This Was Trees, was sold at an art gallery in Beverly Hills, California, for $137,500.
In the 2010s, the site was used as a filming location for many movies and TV shows, including Only Lovers Left Alive, It Follows, and Transformers: The Last Knight.
Current status
Since its abandonment, the plant has been a haven for graffiti artists, urban explorers, paintballers and auto scrappers, and by the early 2010s, most of the wiring and other building materials had been illegally removed from the site. In one incident in 2009, a group of urban explorers pushed a dump truck through an opening on the fourth floor. Karen Nagher, the executive director of the nonprofit organization Preservation Wayne, stated that she was irked to see people come from "all over the world" to poke around Detroit. "Piece by piece, they're disassembling those buildings, making it harder and harder to restore them".
Despite many years of neglect and abuse, the reinforced concrete structures were able to remain mostly intact and structurally sound. Portions of the upper floors of several small sections in various buildings had collapsed or been partly demolished and lay in ruins in the wake of several aborted attempts at demolition over the years. The City of Detroit had pledged legal action to have the property demolished or secured. In early 2012, Dominic Cristini, whose claim of ownership was disputed at the time, was said to have been conducting construction surveys in advance of full-scale demolition.
On February 5, 2013, it was reported that aluminum letter placards spelling the Nazi slogan "Arbeit macht frei" (work makes one free) were placed in the windows of the E. Grand Boulevard bridge. Community volunteers promptly removed the letters.
In April 2013, it was announced that AMC's Low Winter Sun would be filming around the location. In June 2018, Amazon's The Grand Tour filmed their first episode of Season 3 in Detroit which prominently showed the Packard Plant; the episode debuted on January 18, 2019.
On January 23, 2019, the E. Grand Boulevard bridge collapsed with no injuries reported. In February 2019 a section of the plant owned by the city of Detroit was demolished.
Sale
Due to tax delinquency, the 43 parcels composing the plant were put up for auction in September 2013. The starting bid was $975,000 (the amount owed in taxes) and there were no takers.
Another auction in October 2013 posted a starting bid of $21,000, or about $500 per parcel. This auction closed with a top bid of $6,038,000 by Dr. Jill Van Horn, a Texas-based physician who announced in an email that she would team up with "partners and investors from Detroit, Wall Street and international firms," to turn the site into an "economic engine", refurbishing the plant grounds for a manufactured-house assembly facility. However, the deadline for full payment was missed, prompting Wayne County to initiate talks with the second-highest bidder, Bill Hults, a Chicago-area developer who placed a $2,003,000 bid in the October auction. In a separate email, Dr. Van Horn stated, "It seemed (David Szymanski, Deputy Wayne County Treasurer) had already made up his mind to talk to the second bidder". Hults then made several non-refundable down-payments on the plant, but he ultimately failed to raise the entire sum of his bid.
Around the same time in October 2013, a Spanish investor, Fernando Palazuelo, also expressed interest in securing the Packard Plant. It was purchased for $405,000 on December 12, 2013. Palazuelo, who has developed historic buildings in Spain and Peru, planned on moving into the plant by April 9, his 59th birthday. He planned on having six different uses for the Packard Plant Project (residential, retail, offices, light industry, recreation and art), estimated to cost about $350 million over the next 10 to 15 years. He hoped to bring a big-3 automotive-parts manufacturer to the plant in exchange for a few years of free rent. He also hoped to create a work space for local artists and an upscale go-kart track.
As of August 2016, no redevelopment had taken place at the historic 40-acre site on Detroit's east side. At the time, many remained skeptical that the enormous effort would ever succeed — or even get off the ground — given the nearly half-billion-dollar price tag of the project that Palazuelo had envisioned.
Renovation
In 2014, The Display Group, a Detroit-based event company, purchased Building 22, a newer addition connected to, yet separately owned from, the Kirlin Co. space, and spent $750,000 in renovation costs over approximately a year to turn the space into its new headquarters. Building 22 never fell into the same kind of decay as the rest of the factory; its more modern style and layout, as well as its historic significance, likely contributed, as did its smaller size and the ease with which it could be partitioned off as an addition. Production of the vaunted Rolls-Royce Merlin (the Packard-Merlin, as the American-built version of the P-51's engine is sometimes known) took place in Building 22.
The Display Group uses Building 22 for audiovisual production, custom prop and display fabrication, warehousing, and creative event furniture and decor rentals. The Display Group has since added a full-service digital broadcasting studio to adapt to the event industry's sudden reliance on streaming. In 2024, The Display Group helped produce and broadcast the reopening celebrations for Michigan Central Station, which included a live concert featuring stars such as Diana Ross, Jack White, Faustina, Eminem, Jelly Roll and Trick Trick. The Display Group continues to help put on America's Thanksgiving Day Parade in Detroit and has been involved in other major recent events such as the 2024 NFL Draft and the Movement Music Festival.
In May 2017, Arte Express, the holding company for Palazuelo, held a ground breaking ceremony for phase I of the project which will include the former 121,000-square-foot administrative building on the site. On August 12, 2017, the inaugural public tour of the property was conducted, which included access to the second floor of the administration building on the complex's western side.
Bust and demolition
The city demolished several structures on parcels it owned at the Packard Plant in 2017. In October 2020, it was announced that the original redevelopment vision for the site had been abandoned, and Palazuelo would be placing the property up for sale, with an eye toward large-scale demolition to repurpose the site for industrial use.
On April 7, 2022, Wayne County Circuit Court Judge Brian Sullivan ordered the demolition of the Packard auto plant in Detroit, finding that it had become a public nuisance. The city began a search for contractors in May 2022. In late July 2022, Detroit City Council approved a nearly $1.7 million contract for the demolition of a portion of the Packard Plant.
On October 27, 2022, demolition began on building 21 of the northern complex; building 21 had been noted as leaning against an occupied building and causing structural damage. Demolition finished by the end of December; however, some remnants remained.
On January 24, 2023, the city began demolishing a second portion of the plant, building 28, of the southern complex. By the end of March, demolition of building 28 was complete, and all rubble had been transported away from the site. However, by early April, it was revealed that the city had halted all demolition operations at the Packard Plant, including the then-ongoing demolition of building 21. The absentee owner had been able to pay their property taxes before the deadline, which allowed them to secure their ownership of the privately owned sections of the Packard Plant. As of early April, new "NO TRESSPASSING/PRIVATELY OWNED" signs had been posted at every privately owned parcel, and the city of Detroit may no longer have the right to proceed with demolition. The city stated that it would save some buildings of the Packard Plant in order to preserve history but would continue to demolish other portions of the plant throughout 2023.
On March 4, 2024, demolition began again, concluding the demolition of buildings 34 to 38 at the southernmost end of the complex before proceeding to the largest buildings, 1 through 19 (excluding 13). On October 10, the plant's iconic southern water tower was toppled by two guy-wires. The City of Detroit, using funds from the American Rescue Plan, expected to clear the site before the end of 2024. Two of the structure's facades, administrative building 13 and building 27, which face each other across E. Grand Boulevard, will be preserved for their historical significance.
By late December 2024, all structural components of the plant had been razed, except for two adjacent sections along E. Grand Boulevard which are slated for preservation.
See also
Ford Piquette Avenue Plant
References
Bibliography
External links
1921 photo with Alvan Macauley - Detroit Public Library
1920-1923 Packard photo - Detroit Public Library
1956 factory photo - Detroit Public Library
"Largest Abandoned Factory in the World: The Packard Factory, Detroit." Sometimes Interesting. 15 Aug 2011
Detroit News
The Abandoned Packard Plant at Detroiturbex.com
Packard Plant photos
blog.hemmings.com on Planned demolition mid-2012
Recent photos of the Packard Plant
Detroit Free Press photos - then and now
Packard images in IR
1911 establishments in Michigan
1958 disestablishments in Michigan
Industrial buildings and structures in Detroit
Albert Kahn (architect) buildings
Former motor vehicle assembly plants
Industrial buildings completed in 1911
Modern ruins in the United States
Motor vehicle assembly plants in Michigan
Packard
Unused buildings in Detroit
Mill architecture
Buildings and structures demolished in 2023
Buildings and structures demolished in 2024
Demolished buildings and structures in Detroit | Packard Automotive Plant | [
"Engineering"
] | 2,541 | [
"Mill architecture",
"Architecture"
] |
26,327,537 | https://en.wikipedia.org/wiki/Networked%20Robotics%20Corporation | Networked Robotics Corporation is an American scientific automation company that designs and manufactures electronic devices that monitor scientific instruments, scientific processes, and environmental conditions via the internet.
Networked Robotics technology is used in the biotechnology industry (including stem cell automation), the medical industry, academia, and the food industry, in efforts to enhance U.S. Food and Drug Administration (FDA) regulatory compliance, quality, and loss prevention in their operations.
History
Networked Robotics was founded in 2004 at the Northwestern University Technology Innovation Center by ex-Pfizer informatics researchers. The company's founders worked for almost 20 years in the automation of scientific processes for G.D. Searle & Company, Monsanto, Pharmacia, and Pfizer where they were responsible for the automation of experiments in inflammation. Businessman Charles W. Woodford was a founding board member.
In 2006, Networked Robotics announced the introduction of Tempurity™, a network-based, real-time temperature monitoring system, designed to collect temperatures over a wide area network. Tempurity includes an alarm system in which a user is notified by phone, text messaging, or e-mail when the area or device to be monitored falls outside of a set environmental range. The software was developed to meet FDA standards and works with rooms, ovens, incubators, refrigerators, freezers and commercial ultra low temperature freezers.
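The following is a generic sketch of the monitoring pattern described above (periodic sampling, range check, alarm notification). It is not the Tempurity implementation: the sensor read is simulated, and the alert is a print statement standing in for the phone, text-message, or e-mail notification.

```python
# Generic sketch of a networked temperature-monitoring alarm loop.
# The sensor read is simulated; notify() stands in for phone/SMS/e-mail delivery.
import random
import time

LOW_C, HIGH_C = -80.0, -70.0   # example acceptable range for an ultra-low freezer

def read_temperature_c():
    # placeholder for a reading collected from a networked sensor
    return random.gauss(-75.0, 4.0)

def notify(message):
    print(f"ALARM: {message}")  # stand-in for the real notification channel

def monitor(samples=10, interval_s=1.0):
    for _ in range(samples):
        t = read_temperature_c()
        if not (LOW_C <= t <= HIGH_C):
            notify(f"temperature {t:.1f} C outside {LOW_C}..{HIGH_C} C")
        time.sleep(interval_s)

if __name__ == "__main__":
    monitor()
```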
Information
In 2005, as the company was developing their automation technology, Networked Robotics hosted an international game server for an online competition of the video game Medal of Honor (video game series). More than 280 daily contests were held, with winners from 25 different nations. Contests for 2004 ended on February 8, 2005, the end of the lunar (Chinese) New Year. The countries with the most winners on each continent were declared Networked Robotics continental champions. Over 250,000 players from 66 countries and all US states have participated in Networked Robotics competition.
References
Computer networking | Networked Robotics Corporation | [
"Technology",
"Engineering"
] | 396 | [
"Computer networking",
"Computer science",
"Computer engineering"
] |
26,327,617 | https://en.wikipedia.org/wiki/Middle%20mile | In the broadband Internet industry, the "middle mile" is the segment of a telecommunications network linking a network operator's core network to the local network plant, typically situated in the incumbent telco's central office (British English: telephone exchange) that provides access to the local loop, or in the case of cable television operators, the local cable modem termination system. This includes both the backhaul network to the nearest aggregation point, and any other parts of the network needed to connect the aggregation point to the nearest point of presence on the operator's core network. The term middle mile arose to distinguish this part of the network from the last mile, which means the local links which provide service to the retail customer or end user, such as the local telephone lines from the telephone exchange or the coaxial cables from which connect to the customer's equipment.
Middle-mile provision is a major issue in reducing the price of broadband Internet provision by non-incumbent operators. Internet bandwidth is relatively inexpensive to purchase in bulk at the major Internet peering points, and access to end-customer ports in the incumbent operator's local distribution plant (typically where local loop unbundling is mandated by a telecom regulator) are also relatively inexpensive relative to typical broadband subscription costs.
However, middle-mile access, where bought from the incumbent operator, is often much more expensive than either, and typically forms the major expense of non-incumbent broadband ISPs. The alternative, building out their own fibre networks, is capital-intensive, and thus unavailable to most new operators. For this reason, many proposals for government broadband stimulus initiatives are directed at building out the middle mile. Two examples are the Network New Hampshire Now and Maine Fiber Company in the Northeast US, both funded largely by the National Broadband Plan (United States) to connect all community anchor institutions.
Open access initiatives such as duct sharing, utility pole sharing, and fiber unbundling are also being tried by regulators as mechanisms to ease the middle mile problem by reducing costs to non-incumbents. This sometimes leads to controversies, such as the NRECA opposition to pole attachment tariff changes motivated by the US plan.
In logistics, "middle mile" carries the same etymological sense as in telecommunications: it refers to the stage before the last leg, i.e. the "last mile", of a supply chain, wherein goods are hauled from a supplier's warehouse or a shipper's production facility to a retail store.
See also
Forced access regulation
Last mile
Local loop unbundling
Open access (infrastructure)
References
External links
Broadband stimulus fund applicants sharpen their proposals for the second round
Global Crossing: Stimulus Must Include Middle Mile
Fighting AT&T, Verizon's chokehold on "middle mile"
Big Broadband Stimulus Grants for Middle-Mile Networks in N.C. and Michigan
Network architecture
Telecommunications infrastructure
Network access
Internet architecture | Middle mile | [
"Technology",
"Engineering"
] | 587 | [
"Internet architecture",
"IT infrastructure",
"Network architecture",
"Network access",
"Computer networks engineering",
"Electronic engineering"
] |
26,327,982 | https://en.wikipedia.org/wiki/Annis%20Water%20Resources%20Institute | The Robert B. Annis Water Resources Institute (AWRI), is located in Muskegon, Michigan at the Lake Michigan Center on Muskegon Lake. The mission of the Institute is to integrate research, education, and outreach to enhance and preserve freshwater resources. AWRI is a multidisciplinary research organization within the College of Liberal Arts and Sciences at Grand Valley State University. The Institute conducts research on water resources, including: ecosystem structure and function, contaminants and toxicology, hydrology, land use, watershed, stream, and wetland ecology, water quality, and basic and applied limnology.
Facilities and equipment
Facilities within the Lake Michigan Center include classrooms, conference areas, analytical labs, research labs, mesocosms, dockage, and ship support and storage. The institute also owns and operates its own research vessels, the D.J. Angus and the W.G. Jackson. The D.J. Angus is a 45-foot vessel and weighs 22.5 tons; she is kept at Harbor Island in Grand Haven, Michigan. The W.G. Jackson is 64 feet and 10 inches long and weighs 68.5 tons; she is kept at the Lake Michigan Center in Muskegon.
AWRI received $500,000 in federal money to support a continuing $2 million expansion.
References
External links
AWRI website
Grand Valley State University
Muskegon, Michigan
Research institutes in Michigan
Water | Annis Water Resources Institute | [
"Environmental_science"
] | 291 | [
"Water",
"Hydrology"
] |
21,942,008 | https://en.wikipedia.org/wiki/Cell%20polarity | Cell polarity refers to spatial differences in shape, structure, and function within a cell. Almost all cell types exhibit some form of polarity, which enables them to carry out specialized functions. Classical examples of polarized cells are described below, including epithelial cells with apical-basal polarity, neurons in which signals propagate in one direction from dendrites to axons, and migrating cells. Furthermore, cell polarity is important during many types of asymmetric cell division to set up functional asymmetries between daughter cells.
Many of the key molecular players implicated in cell polarity are well conserved. For example, in metazoan cells, the PAR complex plays a fundamental role in cell polarity. While the biochemical details may vary, some of the core principles, such as negative and/or positive feedback between different molecules, are common and essential to many known polarity systems.
Examples of polarized cells
Epithelial cells
Epithelial cells adhere to one another through tight junctions, desmosomes and adherens junctions, forming sheets of cells that line the surface of the animal body and internal cavities (e.g., digestive tract and circulatory system). These cells have an apical-basal polarity defined by the apical membrane facing the outside surface of the body, or the lumen of internal cavities, and the basolateral membrane oriented away from the lumen. The basolateral membrane refers to both the lateral membrane where cell-cell junctions connect neighboring cells and to the basal membrane where cells are attached to the basement membrane, a thin sheet of extracellular matrix proteins that separates the epithelial sheet from underlying cells and connective tissue. Epithelial cells also exhibit planar cell polarity, in which specialized structures are orientated within the plane of the epithelial sheet. Some examples of planar cell polarity include the scales of fish being oriented in the same direction and similarly the feathers of birds, the fur of mammals, and the cuticular projections (sensory hairs, etc.) on the bodies and appendages of flies and other insects. Computational models have been suggested to simulate how a group of epithelial cells can form a variety of biological morphologies.
Neurons
A neuron receives signals from neighboring cells through branched, cellular extensions called dendrites. The neuron then propagates an electrical signal down a specialized axon extension from the basal pole to the synapse, where neurotransmitters are released to propagate the signal to another neuron or effector cell (e.g., muscle or gland). The polarity of the neuron thus facilitates the directional flow of information, which is required for communication between neurons and effector cells.
Migratory cells
Many cell types are capable of migration, such as leukocytes and fibroblasts, and in order for these cells to move in one direction, they must have a defined front and rear. At the front of the cell is the leading edge, which is often defined by a flat ruffling of the cell membrane called the lamellipodium or thin protrusions called filopodia. Here, actin polymerization in the direction of migration allows cells to extend the leading edge of the cell and to attach to the surface. At the rear of the cell, adhesions are disassembled and bundles of actin microfilaments, called stress fibers, contract and pull the trailing edge forward to keep up with the rest of the cell. Without this front-rear polarity, cells would be unable to coordinate directed migration.
Budding yeast
The budding yeast, Saccharomyces cerevisiae, is a model system for eukaryotic biology in which many of the fundamental elements of polarity development have been elucidated. Yeast cells share many features of cell polarity with other organisms, but feature fewer protein components. In yeast, polarity is biased to form at an inherited landmark, a patch of the protein Rsr1 in the case of budding, or a patch of Rax1 in mating projections. In the absence of polarity landmarks (i.e. in gene deletion mutants), cells can perform spontaneous symmetry breaking, in which the location of the polarity site is determined randomly. Spontaneous polarization still generates only a single bud site, which has been explained by positive feedback increasing polarity protein concentrations locally at the largest polarity patch while decreasing polarity proteins globally by depleting them. The master regulator of polarity in yeast is Cdc42, which is a member of the eukaryotic Ras-homologous Rho-family of GTPases, and a member of the super-family of small GTPases, which include Rop GTPases in plants and small GTPases in prokaryotes. For polarity sites to form, Cdc42 must be present and capable of cycling GTP, a process regulated by its guanine nucleotide exchange factor (GEF), Cdc24, and by its GTPase-activating proteins (GAPs). Cdc42 localization is further regulated by cell cycle cues, and a number of binding partners. A recent study to elucidate the connection between cell cycle timing and Cdc42 accumulation in the bud site uses optogenetics to control protein localization using light. During mating, these polarity sites can relocate. Mathematical modeling coupled with imaging experiments suggests the relocation is mediated by actin-driven vesicle delivery.
Vertebrate development
The bodies of vertebrate animals are asymmetric along three axes: anterior-posterior (head to tail), dorsal-ventral (spine to belly), and left-right (for example, our heart is on the left side of our body). These polarities arise within the developing embryo through a combination of several processes: 1) asymmetric cell division, in which two daughter cells receive different amounts of cellular material (e.g. mRNA, proteins), 2) asymmetric localization of specific proteins or RNAs within cells (which is often mediated by the cytoskeleton), 3) concentration gradients of secreted proteins across the embryo such as Wnt, Nodal, and Bone Morphogenetic Proteins (BMPs), and 4) differential expression of membrane receptors and ligands that cause lateral inhibition, in which the receptor-expressing cell adopts one fate and its neighbors another.
In addition to defining asymmetric axes in the adult organism, cell polarity also regulates both individual and collective cell movements during embryonic development such as apical constriction, invagination, and epiboly. These movements are critical for shaping the embryo and creating the complex structures of the adult body.
Molecular basis
Cell polarity arises primarily through the localization of specific proteins to specific areas of the cell membrane. This localization often requires both the recruitment of cytoplasmic proteins to the cell membrane and polarized vesicle transport along cytoskeletal filaments to deliver transmembrane proteins from the golgi apparatus. Many of the molecules responsible for regulating cell polarity are conserved across cell types and throughout metazoan species. Examples include the PAR complex (Cdc42, PAR3/ASIP, PAR6, atypical protein kinase C), Crumbs complex (Crb, PALS, PATJ, Lin7), and Scribble complex (Scrib, Dlg, Lgl). These polarity complexes are localized at the cytoplasmic side of the cell membrane, asymmetrically within cells. For example, in epithelial cells the PAR and Crumbs complexes are localized along the apical membrane and the Scribble complex along the lateral membrane. Together with a group of signaling molecules called Rho GTPases, these polarity complexes can regulate vesicle transport and also control the localization of cytoplasmic proteins primarily by regulating the phosphorylation of phospholipids called phosphoinositides. Phosphoinositides serve as docking sites for proteins at the cell membrane, and their state of phosphorylation determines which proteins can bind.
Polarity establishment
While many of the key polarity proteins are well conserved, different mechanisms exist to establish cell polarity in different cell types. Here, two main classes can be distinguished: (1) cells that are able to polarize spontaneously, and (2) cells that establish polarity based on intrinsic or environmental cues.
Spontaneous symmetry breaking can be explained by amplification of stochastic fluctuations of molecules due to non-linear chemical kinetics. The mathematical basis for this biological phenomenon was established by Alan Turing in his 1952 paper 'The Chemical Basis of Morphogenesis'. While Turing initially attempted to explain pattern formation in a multicellular system, similar mechanisms can also be applied to intracellular pattern formation. Briefly, if a network of at least two interacting chemicals (in this case, proteins) exhibits certain types of reaction kinetics, as well as differential diffusion, stochastic concentration fluctuations can give rise to the formation of large-scale stable patterns, thus bridging from a molecular length scale to a cellular or even tissue scale.
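The reaction-diffusion mechanism can be made concrete with a few lines of code. The sketch below simulates a standard two-species Gray-Scott system in one dimension; it is an illustrative toy model with common textbook parameter values, not a model of any specific polarity pathway discussed above:

```python
# Minimal 1-D Gray-Scott reaction-diffusion simulation: differential diffusion
# plus a nonlinear (autocatalytic) reaction amplifies a local perturbation and
# small stochastic fluctuations into stable spatial domains.
import numpy as np

n_cells, n_steps = 256, 20000
Du, Dv = 0.16, 0.08          # differential diffusion: u spreads faster than v
F, k = 0.04, 0.06            # feed and kill rates (standard pattern-forming regime)
dt = 1.0

u = np.ones(n_cells) + 0.01 * np.random.rand(n_cells)  # small stochastic fluctuations
v = np.zeros(n_cells)
v[n_cells // 2 - 5 : n_cells // 2 + 5] = 0.5            # local perturbation seeds the pattern

def laplacian(a):
    # discrete Laplacian with periodic boundaries (dx = 1)
    return np.roll(a, 1) + np.roll(a, -1) - 2 * a

for _ in range(n_steps):
    uvv = u * v * v                                     # autocatalytic reaction term
    u += dt * (Du * laplacian(u) - uvv + F * (1 - u))
    v += dt * (Dv * laplacian(v) + uvv - (F + k) * v)

# count distinct high-v domains, i.e. the "polarized" patches that emerged
peaks = (v > 0.2).astype(int)
print("stable high-v domains formed:", np.count_nonzero(np.diff(peaks) == 1))
```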
A prime example for the second type of polarity establishment, which relies on extracellular or intracellular cues, is the C. elegans zygote. Here, mutual inhibition between two sets of proteins guides polarity establishment and maintenance. On the one hand, PAR-3, PAR-6 and aPKC (called anterior PAR proteins) occupy both the plasma membrane and cytoplasm prior to symmetry breaking. PAR-1, the C. elegans-specific ring-finger-containing protein PAR-2, and LGL-1 (called posterior PAR proteins) are present mostly in the cytoplasm. The male centrosome provides a cue, which breaks an initially homogenous membrane distribution of anterior PARs by inducing cortical flows. These are thought to advect anterior PARs towards one side of the cell, allowing posterior PARs to bind to other pole (posterior). Anterior and posterior PAR proteins then maintain polarity until cytokinesis by mutually excluding each other from their respective cell membrane areas.
See also
Epithelial polarity
Cell migration
Embryogenesis
Embryonic development
Asymmetric cell division
3D cell culture
Cell culture assay
Madin-Darby canine kidney cells
References
Cell biology | Cell polarity | [
"Biology"
] | 2,158 | [
"Cell biology"
] |
21,944,919 | https://en.wikipedia.org/wiki/Dampier%20Salt | Dampier Salt is an Australian salt company located in Western Australia, with operations in Dampier, Port Hedland and Lake MacLeod, and headquarters in Perth. Since beginning operations at Dampier in 1972, the company has developed into one of the world's largest private salt producers, with production capacity of over four million tonnes per annum at Dampier and nine million tonnes per annum company-wide. Most of this salt is naturally sourced from the Punt Road region and is known for its high purity.
The company also produces gypsum, with a 1.5 million tonne per annum capacity, at its Lake MacLeod facility.
Dampier Salt is 68.4% owned by the Rio Tinto Group, 21.5% by Marubeni, and the remaining 10.1% by Sojitz.
Important Bird Areas
The 52 km2 solar evaporation pond complex at Dampier has been identified by BirdLife International as the Dampier Saltworks Important Bird Area, while the 78 km2 complex near Port Hedland has been identified as the Port Hedland Saltworks Important Bird Area.
References
Companies based in Perth, Western Australia
Rio Tinto (corporation) subsidiaries
Salt production
Pilbara
Sojitz
Marubeni | Dampier Salt | [
"Chemistry"
] | 250 | [
"Salt production",
"Salts"
] |
21,945,878 | https://en.wikipedia.org/wiki/Solarmer%20Energy%2C%20Inc. | Solarmer Energy, Inc. was a solar energy company that was developing polymer solar cells, a new type of solar cell; specifically, a subtype of organic photovoltaic cells (OPV). They claim their solar panels can be made flexible, transparent, and will cost less to manufacture than traditional cells.
Solarmer was initially founded in March 2006 to commercialize a portfolio of technology developed by Prof. Yang Yang at the University of California, Los Angeles and has since established its own facility in El Monte, California. In addition to this portfolio, Solarmer also licensed the patent rights to a new semi-conducting material invented at the University of Chicago.
Technology
Solarmer's technology is based on using a semiconducting plastic as the active material in the solar cell, which is what converts light into electricity. In contrast to other photovoltaic technologies, this technology is capable of producing electricity under any kind of lighting, although the lower the light, the less electricity is produced. However, a major drawback of this technology is its relatively low efficiency and, more importantly, its lack of stability. Solarmer achieved one of the highest efficiencies in the industry, 8.13% (July 2010), but this was still lower than that of other solar technologies.
The plastic active layer is extremely thin (only a few tenths of a micrometer thick), which is why these solar cells can be made both flexible and translucent. It is also part of the reason that the manufacturing process is likely to cost less, since only very small amounts of material are needed to make these solar cells. The other reason is that the materials can be printed, like inks, in a much less capital-intensive process than traditional silicon-based solar cells.
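Two quick order-of-magnitude checks of these claims (assuming a 0.2 µm active layer and standard AM1.5 illumination of 1000 W/m²; neither figure is given by the article):

```latex
% Active material per square metre of module at ~0.2 \mu m thickness:
V = 0.2\times10^{-6}~\mathrm{m} \times 1~\mathrm{m^2}
  = 2\times10^{-7}~\mathrm{m^3} = 0.2~\mathrm{cm^3}
% Power density at the reported 8.13\% efficiency under 1000~\mathrm{W/m^2}:
P = 0.0813 \times 1000~\mathrm{W/m^2} \approx 81~\mathrm{W/m^2}
```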
There are several different polymers that can be used as the active layer and these polymers come in a variety of colors. Solarmer is planning to use this feature of the technology to make solar cells in different colors.
Applications
Organic solar cells are also typically cheaper than conventional solar cells. Solarmer claims that their organic solar cells could be used on any portable device that requires power, as well as smart fabrics and building materials.
Production
Solarmer is developing a pilot line capable of manufacturing OPV samples and plans to initiate the process of component integration in 2009. They expect to complete their pilot line by the 2nd Quarter of 2010, delivery of samples by the end of 2010, and product launch by early 2011.
See also
Organic electronics
Printed electronics
Organic photovoltaic
Building-integrated photovoltaics
References
Notes
Solarmer Energy, Inc. Breaks Psychological Barrier with 8.13% OPV Efficiency
Solarmer Breaks World Records for Plastic Solar Technology
Organic photovoltaics promise more
Solarmer Energy, Inc. Picks Up Speed in Flexible Solar Panel Development
Plastic solar cells for portable electronic devices
Solarmer Energy Sees Excellent Potential for Plastic Solar Cells
Plastic Solar Cells For Portable Electronic Devices Coming Soon
External links
Energy companies of the United States
American companies established in 2006
Companies based in Los Angeles County, California
Organic solar cells
Photovoltaics manufacturers
El Monte, California
Energy companies established in 2006 | Solarmer Energy, Inc. | [
"Chemistry",
"Materials_science",
"Engineering"
] | 626 | [
"Organic solar cells",
"Photovoltaics manufacturers",
"Polymer chemistry",
"Engineering companies"
] |