The Strange–Rahman–Smith equation is used in the cryoporometry method of measuring porosity .
NMR cryoporometry (NMRC) [ 1 ] [ 2 ] [ 3 ] is a recent technique for measuring total porosity and pore size distributions. NMRC is based on two equations: the Gibbs–Thomson equation , which maps the melting point depression to pore size, and the Strange–Rahman–Smith equation, [ 1 ] which maps the melted signal amplitude at a particular temperature to pore volume.
If the pores of the porous material are filled with a liquid , then the incremental volume of the pores $\Delta v$ with pore diameter between $x$ and $x+\Delta x$ may be obtained from the increase in melted liquid volume for an increase of temperature between $T$ and $T+\Delta T$ by: [ 1 ]
$$\frac{dv}{dx}=\frac{dv}{dT}\,\frac{k_{GT}}{x^{2}}$$
where $k_{GT}$ is the Gibbs–Thomson coefficient for the liquid in the pores. | https://en.wikipedia.org/wiki/Strange–Rahman–Smith_equation
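As an illustration of how the two equations are combined in practice, the following sketch converts a melting curve into a pore-size distribution. It is a minimal example on synthetic data: the melted-volume curve, the bulk melting point, and the value of $k_{GT}$ are illustrative assumptions, not values taken from the source.

```python
import numpy as np

# Illustrative, assumed inputs (not from the source): a synthetic melted-volume
# curve v(T) standing in for the NMR signal amplitude, and a Gibbs-Thomson
# coefficient k_GT for the probe liquid.
k_GT = 50.0                                # K*nm, assumed for illustration
T_bulk = 273.15                            # K, bulk melting point of the probe liquid
T = np.linspace(263.15, 272.9, 200)        # measurement temperatures (K)
v = 1.0 / (1.0 + np.exp(-(T - 268.0)))     # synthetic melted volume (arbitrary units)

# Gibbs-Thomson equation: the melting-point depression T_bulk - T maps each
# measurement temperature to a pore diameter x.
x = k_GT / (T_bulk - T)                    # pore diameter (nm)

# Strange-Rahman-Smith equation: dv/dx = (dv/dT) * k_GT / x^2
dv_dT = np.gradient(v, T)
dv_dx = dv_dT * k_GT / x**2                # pore-volume distribution (a.u. per nm)
```

Plotting `dv_dx` against `x` then gives the pore-size distribution of the sample.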
Stranski–Krastanov growth ( SK growth , also Stransky–Krastanov or 'Stranski–Krastanow') is one of the three primary modes by which thin films grow epitaxially at a crystal surface or interface. Also known as 'layer-plus-island growth', the SK mode follows a two-step process: initially, complete films of adsorbates , up to several monolayers thick, grow in a layer-by-layer fashion on a crystal substrate. Beyond a critical layer thickness, which depends on strain and the chemical potential of the deposited film, growth continues through the nucleation and coalescence of adsorbate 'islands'. [ 1 ] [ 2 ] [ 3 ] [ 4 ] This growth mechanism was first noted by Ivan Stranski and Lyubomir Krastanov in 1938. [ 5 ] It was not until 1958, however, in a seminal work by Ernst Bauer published in Zeitschrift für Kristallographie , that the SK, Volmer–Weber, and Frank–van der Merwe mechanisms were systematically classified as the primary thin-film growth processes. [ 6 ] Since then, SK growth has been the subject of intense investigation, not only to better understand the complex thermodynamics and kinetics at the core of thin-film formation, but also as a route to fabricating novel nanostructures for application in the microelectronics industry.
The growth of epitaxial (homogeneous or heterogeneous) thin films on a single crystal surface depends critically on the interaction strength between adatoms and the surface. While it is possible to grow epilayers from a liquid solution, most epitaxial growth occurs via a vapor phase technique such as molecular beam epitaxy (MBE). In Volmer–Weber (VW) growth, adatom–adatom interactions are stronger than those of the adatom with the surface, leading to the formation of three-dimensional adatom clusters or islands. [ 3 ] Growth of these clusters, along with coarsening , will cause rough multi-layer films to grow on the substrate surface. By contrast, during Frank–van der Merwe (FM) growth , adatoms attach preferentially to surface sites, resulting in atomically smooth, fully formed layers. This layer-by-layer growth is two-dimensional, indicating that complete films form prior to growth of subsequent layers. [ 2 ] [ 3 ] Stranski–Krastanov growth is an intermediary process characterized by both 2D layer and 3D island growth. Transition from layer-by-layer to island-based growth occurs at a critical layer thickness which is highly dependent on the chemical and physical properties, such as surface energies and lattice parameters, of the substrate and film. [ 1 ] [ 2 ] [ 3 ] Figure 1 is a schematic representation of the three main growth modes for various surface coverages.
Determining the mechanism by which a thin film grows requires consideration of the chemical potentials of the first few deposited layers. [ 2 ] [ 7 ] A model for the layer chemical potential per atom has been proposed by Markov as: [ 7 ]

$$\mu(n)=\mu_{\infty}+\left[\varphi_{a}-\varphi_{a}'(n)+\varepsilon_{d}(n)+\varepsilon_{e}(n)\right]$$

where $\mu_{\infty}$ is the bulk chemical potential of the adsorbate material, $\varphi_{a}$ is the desorption energy of an adsorbate atom from a wetting layer of the same material, $\varphi_{a}'(n)$ the desorption energy of an adsorbate atom from the substrate, $\varepsilon_{d}(n)$ is the per-atom misfit dislocation energy, and $\varepsilon_{e}(n)$ the per-atom homogeneous strain energy. In general, the values of $\varphi_{a}$, $\varphi_{a}'(n)$, $\varepsilon_{d}(n)$, and $\varepsilon_{e}(n)$ depend in a complex way on the thickness of the growing layers and the lattice misfit between the substrate and adsorbate film. In the limit of small strains, $\varepsilon_{d,e}(n)\ll \mu_{\infty}$, the criterion for the film growth mode depends on the sign of $\frac{d\mu}{dn}$: layer-by-layer (FM-type) growth is favored when $\frac{d\mu}{dn}>0$, while island (VW-type) growth is favored when $\frac{d\mu}{dn}<0$.
SK growth can be described by both of these inequalities. While initial film growth follows an FM mechanism, i.e. positive differential μ, nontrivial amounts of strain energy accumulate in the deposited layers. At a critical thickness, this strain induces a sign reversal in the chemical potential, i.e. negative differential μ, leading to a switch in the growth mode. At this point it is energetically favorable to nucleate islands and further growth occurs by a VW type mechanism. [ 7 ] A thermodynamic criterion for layer growth similar to the one presented above can be obtained using a force balance of surface tensions and contact angle . [ 8 ]
Since the formation of wetting layers occurs in a commensurate fashion at a crystal surface, there is often an associated misfit between the film and the substrate due to the different lattice parameters of each material. Attachment of the thinner film to the thicker substrate induces a misfit strain at the interface given by $\frac{a_{f}-a_{s}}{a_{s}}$. Here $a_{f}$ and $a_{s}$ are the film and substrate lattice constants, respectively. As the wetting layer thickens, the associated strain energy increases rapidly. In order to relieve the strain, island formation can occur in either a dislocated or coherent fashion. In dislocated islands, strain relief arises by forming interfacial misfit dislocations . The reduction in strain energy accommodated by introducing a dislocation is generally greater than the concomitant cost of increased surface energy associated with creating the clusters. The thickness of the wetting layer at which island nucleation initiates, called the critical thickness $h_{C}$, is strongly dependent on the lattice mismatch between the film and substrate, with a greater mismatch leading to smaller critical thicknesses. [ 9 ] Values of $h_{C}$ can range from submonolayer coverage up to several monolayers thick. [ 1 ] [ 10 ] Figure 2 illustrates a dislocated island during SK growth after reaching a critical layer height. A pure edge dislocation is shown at the island interface to illustrate the relieved structure of the cluster.
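As a rough numerical illustration of the misfit strain $(a_{f}-a_{s})/a_{s}$ defined above, the sketch below uses approximate room-temperature lattice constants (illustrative values, not taken from the source) for two film/substrate pairs commonly grown in the SK mode; the larger mismatch of InAs on GaAs is consistent with a smaller critical thickness than for Ge on Si.

```python
# Misfit strain f = (a_f - a_s) / a_s, with approximate lattice constants
# (assumed illustrative values, in angstroms).
def misfit_strain(a_film, a_substrate):
    return (a_film - a_substrate) / a_substrate

a_Si, a_Ge = 5.431, 5.658
a_GaAs, a_InAs = 5.653, 6.058

print(f"Ge on Si:     {misfit_strain(a_Ge, a_Si):+.1%}")      # roughly +4.2%
print(f"InAs on GaAs: {misfit_strain(a_InAs, a_GaAs):+.1%}")  # roughly +7.2%
```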
In some cases, most notably the Si / Ge system, nanoscale dislocation-free islands can be formed during SK growth by introducing undulations into the near-surface layers of the substrate. [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 10 ] These regions of local curvature serve to elastically deform both the substrate and island, relieving accumulated strain and bringing the wetting layer and island lattice constant closer to its bulk value. This elastic instability at $h_{C}$ is known as the Grinfeld instability (formerly Asaro–Tiller–Grinfeld; ATG). [ 7 ] The resulting islands are coherent and defect-free, garnering them significant interest for use in nanoscale electronic and optoelectronic devices. Such applications are discussed briefly later. A schematic of the resulting epitaxial structure is shown in figure 3, which highlights the induced radius of curvature at the substrate surface and in the island. Finally, strain stabilization indicative of coherent SK growth decreases with decreasing inter-island separation. At large areal island densities (smaller spacing), curvature effects from neighboring clusters will cause dislocation loops to form, leading to defected island creation. [ 11 ]
Analytical techniques such as Auger electron spectroscopy (AES), low-energy electron diffraction (LEED), and reflection high-energy electron diffraction (RHEED) have been extensively used to monitor SK growth. AES data obtained in situ during film growth in a number of model systems, such as Pd / W (100), Pb / Cu (110), Ag /W(110), and Ag/ Fe (110), show characteristic segmented curves like those presented in figure 4. [ 1 ] [ 2 ] [ 11 ] The height of the film Auger peaks, plotted as a function of surface coverage Θ, initially exhibits a straight line, which is indicative of AES data for FM growth. There is a clear break point at a critical adsorbate surface coverage followed by another linear segment at a reduced slope. The paired break point and shallow line slope are characteristic of island nucleation; a similar plot for FM growth would exhibit many such line and break pairs, while a plot of the VW mode would be a single line of low slope. In some systems, reorganization of the 2D wetting layer results in decreasing AES peaks with increasing adsorbate coverage. [ 11 ] Such situations arise when many adatoms are required to reach a critical nucleus size on the surface and, at nucleation, the resulting adsorbed layer constitutes a significant fraction of a monolayer. After nucleation, metastable adatoms on the surface are incorporated into the nuclei, causing the Auger signal to fall. This phenomenon is particularly evident for deposits on a molybdenum substrate.
The evolution of island formation during the SK transition has also been successfully measured using LEED and RHEED techniques. Diffraction data obtained via various LEED experiments have been used effectively in conjunction with AES to measure the critical layer thickness at the onset of island formation. [ 2 ] [ 11 ] In addition, RHEED oscillations have proven very sensitive to the layer-to-island transition during SK growth, with the diffraction data providing detailed crystallographic information about the nucleated islands. By following the time dependence of LEED, RHEED, and AES signals, extensive information on surface kinetics and thermodynamics has been gathered for a number of technologically relevant systems.
Unlike the techniques presented in the last section, in which the probe size can be relatively large compared to island size, surface microscopies such as scanning electron microscopy (SEM), transmission electron microscopy (TEM), scanning tunneling microscopy (STM), and atomic force microscopy (AFM) offer the opportunity to view deposit/substrate combination events directly. [ 1 ] [ 3 ] [ 11 ] The extreme magnifications afforded by these techniques, often down to the nanometer length scale, make them particularly applicable for visualizing the strongly 3D islands. UHV-SEM and TEM are routinely used to image island formation during SK growth, enabling a wide range of information to be gathered, ranging from island densities to equilibrium shapes. [ 1 ] [ 2 ] [ 3 ] AFM and STM have become increasingly utilized to correlate island geometry to the surface morphology of the surrounding substrate and wetting layer. [ 14 ] These visualization tools are often used to complement quantitative information gathered during wide-beam analyses.
As mentioned previously, coherent island formation during SK growth has attracted increased interest as a means for fabricating epitaxial nanoscale structures, particularly quantum dots (QDs). [ 12 ] [ 13 ] [ 14 ] [ 15 ] [ 16 ] Widely used quantum dots grown in the SK-growth-mode are based on the material combinations Si / Ge or InAs / GaAs . [ 17 ] Significant effort has been spent developing methods to control island organization, density, and size on a substrate. Techniques such as surface dimpling with a pulsed laser and control over growth rate have been successfully applied to alter the onset of the SK transition or even suppress it altogether. [ 14 ] [ 18 ] The ability to control this transition either spatially or temporally enables manipulation of physical parameters of the nanostructures, like geometry and size, which, in turn, can alter their electronic or optoelectronic properties (i.e. band gap). For example, Schwarz–Selinger, et al. have used surface dimpling to create surface miscuts on Si that provide preferential Ge island nucleation sites surrounded by a denuded zone. [ 14 ] In a similar fashion, lithographically patterned substrates have been used as nucleation templates for SiGe clusters. [ 13 ] [ 15 ] Several studies have also shown that island geometries can be altered during SK growth by controlling substrate relief and growth rate. [ 14 ] [ 16 ] Bimodal size distributions of Ge islands on Si are a striking example of this phenomenon in which pyramidal and dome-shaped islands coexist after Ge growth on a textured Si substrate. [ 14 ] Such ability to control the size, location, and shape of these structures could provide invaluable techniques for 'bottom-up' fabrication schemes of next-generation devices in the microelectronics industry. | https://en.wikipedia.org/wiki/Stranski–Krastanov_growth |
A strap footing is a component of a building's foundation . It is a type of combined footing , [ 1 ] consisting of two or more column footings connected by a concrete beam. This type of beam is called a strap beam . It is used to help distribute the weight of either heavily or eccentrically loaded column footings to adjacent footings. [ 2 ]
A strap footing is often used in conjunction with columns that are located along a building's property or lot line. Typically, columns are centered on column footings, but in conditions where columns are located directly adjacent to the property line, the column footings may be offset so that they do not encroach onto the adjacent property. [ 3 ] This results in an eccentric load on a portion of the footing, causing it to tilt to one side. The strap beam restrains the tendency of the footing to overturn by connecting it to nearby footings. [ 1 ]
| https://en.wikipedia.org/wiki/Strap_footing
The Strasbourg Agreement of 27 August 1675 is the first international agreement banning the use of chemical weapons . The treaty was signed between France and the Holy Roman Empire , and was created in response to the use of poisoned bullets . [ 1 ] The use of this weaponry was preceded by Leonardo da Vinci 's invention of arsenic- and sulfur-packed shells that could be fired against ships. [ 2 ] These weapons had been used by Christoph Bernhard von Galen , Bishop of Münster , in the Siege of Groningen (1672) – thus provoking the Strasbourg Agreement between the belligerents of the Franco-Dutch War .
The Hague Convention of 1899 also contained a provision that rejected the use of projectiles capable of diffusing asphyxiating or deleterious gases. [ 3 ] The next major agreement on chemical weapons did not occur until the 1925 Geneva Protocol . Today, the prohibition on the use of chemical weapons is treated as distinct from the prohibition on the use of poison as a method of warfare, and is noted by the International Committee of the Red Cross as existing independently. [ 4 ]
| https://en.wikipedia.org/wiki/Strasbourg_Agreement_(1675)
The Strasbourg Institute of Material Physics and Chemistry ( IPCMS — French : Institut de Physique et Chimie des Matériaux de Strasbourg ) [ 1 ] is a joint research unit (UMR 7504) between the French National Center for Scientific Research (CNRS) and the University of Strasbourg . It was founded in 1987 and is located in the district of Cronenbourg in Strasbourg , France .
The IPCMS grew out of discussions initiated in the early 1980s on the need to refocus and coordinate research in the physics and chemistry of condensed matter and materials . In the context of the then emerging Materials Center in Strasbourg, a first reorganization project for condensed matter physics was formalized in 1983. In the same years, the strategic importance of materials for innovation was recognized, justifying the extension of the initial project to chemists, bringing together physicists and chemists to constitute the backbone of the future institute, with the objective of designing and studying new materials (metals, ceramics, ...) for their electronic properties (magnetic, optical, dielectric, etc.). A joint CNRS-ULP [ a ] -EHICS [ b ] unit, the IPCMS was officially created in 1987 with François Gautier as director and Jean-Claude Bernier as deputy director. [ 2 ] Originally spread over five different sites of the University of Strasbourg, the IPCMS was brought together in 1994 in the current building on the Campus of Cronenbourg. The IPCMS was then organized into five research groups around three types of materials - polymers and organic materials, metallic materials, ceramics and inorganic materials - and two topics of study: nonlinear optics and optoelectronics on the one hand, and surfaces and interfaces on the other.
The multi-disciplinary nature of the IPCMS is reflected in leading activities in spin electronics , magnetism , ultra-fast optics , electron microscopy and local probes, and biomaterials, as well as in the synthesis and characterization of functional organic, inorganic or hybrid materials. All scales are considered, from the isolated molecule to organized nanostructures on surfaces and one- or two-dimensional objects, up to nano-devices . To carry out these studies, the institute has an extensive instrumental park for the fabrication and characterization of materials at all scales. These developments are also supported by recognized theoretical expertise. The LabEx NIE and EquipEx UNION and UTEM projects, which the IPCMS directs, reflect the laboratory's recognized position. Located on the Campus of Cronenbourg, the IPCMS is affiliated with the institutes of physics and chemistry of the CNRS as well as the Faculty of Physics and Engineering; it is also affiliated with the European School of Chemistry, Polymers and Materials (ECPM), Télécom Physique Strasbourg , and the Faculty of Chemistry of the University of Strasbourg. The IPCMS is keen to maintain strong links with industrial laboratories carrying out research in its fields of competence. [ 3 ]
The IPCMS now employs a staff of 240, including about 80 researchers and teacher-researchers and 60 technical and administrative engineers, whose activities are divided into five departments.
| https://en.wikipedia.org/wiki/Strasbourg_Institute_of_Material_Physics_and_Chemistry
The Strata Diocletiana ( Latin for "Road of Diocletian ") was a fortified road that ran along the eastern desert border, the limes Arabicus , of the Roman Empire . [ 1 ] [ 2 ] As its name suggests and as it appears on milestones , [ 3 ] it was constructed under Emperor Diocletian (r. 284–305 AD) as part of a wide-ranging fortification drive in the later Roman Empire. [ 4 ] The strata was lined with a series of similarly-built rectangular forts ( quadriburgia ) situated at one day's march (ca. 20 Roman miles ) from each other. It began at the southern bank of the river Euphrates and stretched south and west, passing east of Palmyra and Damascus down to northeast Arabia.
| https://en.wikipedia.org/wiki/Strata_Diocletiana
The Strata Exam is a CompTIA certification. It covers the fundamentals of various other areas of IT study. The exam is divided into several sections, each making up a specified portion of the test.
| https://en.wikipedia.org/wiki/Strata_Exam
The Strategic Advisory Group of Experts ( SAGE ) is the principal advisory group to the World Health Organization (WHO) for vaccines and immunization . It was established in 1999 by WHO Director-General Gro Harlem Brundtland through the merger of two previous committees, the Scientific Advisory Group of Experts (which served the Program for Vaccine Development) and the Global Advisory Group (which served the EPI program). It is charged with advising WHO on overall global policies and strategies, ranging from vaccines and biotechnology , research and development, to delivery of immunization and its linkages with other health interventions. [ 1 ] SAGE is concerned not just with childhood vaccines and immunization, but with all vaccine-preventable diseases. [ 2 ] SAGE provides global recommendations on immunization policy, which are then translated into national recommendations by advisory committees at the country level. [ 3 ]
The SAGE has 15 members, who are recruited and selected as acknowledged experts from around the world in the fields of epidemiology , public health , vaccinology , paediatrics , internal medicine , infectious diseases , immunology , drug regulation , programme management, immunization delivery, health-care administration, health economics , and vaccine safety. [ 4 ] Members are appointed by the Director-General of the WHO to serve an initial term of three years, which can be renewed only once. [ 1 ]
SAGE meets at least twice annually in April and November, with working groups established for detailed review of specific topics prior to discussion by the full group. Priorities of work and meeting agendas are developed by the Group in consultation with WHO. [ 5 ]
UNICEF , the Secretariat of the GAVI Alliance , and WHO Regional Offices participate as observers in SAGE meetings and deliberations. WHO also invites other observers to SAGE meetings, including representatives from WHO regional technical advisory groups, non-governmental organizations, international professional organizations, technical agencies, donor organizations and associations of manufacturers of vaccines and immunization technologies. Additional experts may be invited, as appropriate, to further contribute to specific agenda items. [ 1 ] [ 5 ]
As of February 2024, working groups were established for a number of vaccines. [ 5 ]
| https://en.wikipedia.org/wiki/Strategic_Advisory_Group_of_Experts
The Strategic Approach to International Chemicals Management (SAICM) is a global policy framework to foster the sound management of chemicals. The SAICM Secretariat [ 1 ] is hosted by the United Nations Environment Programme .
"The sound management of chemicals is essential if we are to achieve sustainable development, including the eradication of poverty and disease, the improvement of human health and the environment and the elevation and maintenance of the standard of living in countries at all levels of development." - Dubai, 2006 [ 2 ]
It was adopted by the International Conference on Chemicals Management in Dubai, United Arab Emirates, on 6 February 2006. The first session of the Conference and the process to develop the Strategic Approach to International Chemicals Management were co-convened by the United Nations Environment Programme (UN Environment), the Inter-Organization Programme for the Sound Management of Chemicals (IOMC [ 3 ] ) and the Intergovernmental Forum on Chemical Safety (IFCS [ 4 ] ).
The Strategic Approach supports the achievement of the goal agreed at the 2002 Johannesburg World Summit on Sustainable Development of ensuring that, by the year 2020, chemicals will be produced and used in ways that minimize significant adverse impacts on the environment and human health. It acknowledges the essential contributions of chemicals in the current societies and economies, while recognizing the potential threat to sustainable development if chemicals are not managed soundly. [ 5 ]
As of 12 June 2015, the Strategic Approach focal points [ 6 ] included 175 governments and 85 NGOs, including a broad range of representatives from industry and civil society.
SAICM commitments are expressed through the Dubai Declaration, Overarching Policy Strategy and the Global Plan of Action.
The Strategic Approach has a scope [ 7 ] that includes:
a. Environmental, economic, social, health and labour aspects of chemical safety ,
b. Agricultural and industrial chemicals, with a view to promoting sustainable development and covering chemicals at all stages of their life-cycle, including in products.
The main objectives [ 8 ] of the Strategic Approach are:
A. Risk reduction
B. Knowledge and information
C. Governance
D. Capacity-building and technical cooperation
E. Illegal international traffic
The Quick Start Programme (QSP) is a programme under SAICM to support initial enabling capacity building and implementation activities in developing countries, least developed countries, small island developing States and countries with economies in transition. The QSP includes a voluntary, time-limited trust fund, administered by the United Nations Environment Programme , and multilateral, bilateral and other forms of cooperation. The QSP Trust Fund portfolio includes 184 approved projects in 108 countries, of which 54 are Least Developed Countries or Small Island Developing States, for approximately $37 million in funding. [ 9 ]
The International Conference on Chemicals Management (ICCM) undertakes periodic reviews of SAICM.
The first session (ICCM 1) was held in Dubai, United Arab Emirates, from 4 to 6 February 2006, and finalized and adopted SAICM.
The second session (ICCM 2) was held in Geneva, Switzerland, on 11–15 May 2009 and undertook the first periodic review of SAICM's implementation.
The third session (ICCM 3) was held in Nairobi, Kenya, 17–21 September 2012 and reviewed progress in the implementation of SAICM with tangible data on 20 indicators of progress adopted at ICCM2, addressed emerging policy issues and adopted the Health Sector Strategy. [ 10 ]
The fourth session (ICCM 4) was held in Geneva, Switzerland, from 28 September to 2 October 2015. The overall orientation and guidance towards the achievement of the 2020 goal is the strategic policy outcome of ICCM4, setting the stage for action in 2020. ICCM4 also reviewed implementation aspects of emerging policy issues and other issues of concern, considered the Sustainable Development Goals , discussed sound management of chemicals and waste beyond 2020, and reviewed the proposed activities.
ICCM4, through resolution IV/4, initiated an inter-sessional process to prepare recommendations regarding the Strategic Approach and the sound management of chemicals and waste beyond 2020. [ 11 ] The First meeting of the intersessional process considering the Strategic Approach and the sound management of chemicals and waste beyond 2020 was held in Brasilia, Brazil, from 7 to 9 February 2017. [ 12 ]
The fifth session of the International Conference on Chemicals Management is scheduled to be held in Bonn, Germany, from 25 to 29 September 2023.
The ICCM provides a platform to call for appropriate action on emerging policy issues (EPI) as they arise and to forge consensus on priorities for cooperative action. So far, resolutions have been adopted on the following issues:
A. Lead in paint [ 13 ] [ 14 ]
B. Chemicals in products [ 15 ]
C. Hazardous substance within the life cycle of electrical and electronic products [ 16 ]
D. Nanotechnology and manufactured nanomaterials [ 17 ]
E. Endocrine-disrupting chemicals [ 18 ]
F. Environmentally Persistent Pharmaceutical Pollutants [ 19 ]
Other issues of concern have been acknowledged:
G. Perfluorinated chemicals [ 20 ]
H. Highly Hazardous Pesticides [ 21 ] | https://en.wikipedia.org/wiki/Strategic_Approach_to_International_Chemicals_Management |
The United States government's Strategic Computing Initiative funded research into advanced computer hardware and artificial intelligence from 1983 to 1993. The initiative was designed to support various projects that were required to develop machine intelligence in a prescribed ten-year time frame, from chip design and manufacture and computer architecture to artificial intelligence software. The Department of Defense spent a total of $1 billion on the project. [ 1 ]
The inspiration for the program was Japan's fifth generation computer project, an enormous initiative that set aside billions for research into computing and artificial intelligence. As with Sputnik in 1957, the American government saw the Japanese project as a challenge to its technological dominance. [ 2 ] The British government also funded a program of their own around the same time, known as Alvey , and a consortium of U.S. companies funded another similar project, the Microelectronics and Computer Technology Corporation . [ 3 ] [ 4 ]
The goal of SCI, and other contemporary projects, was nothing less than full machine intelligence. "The machine envisioned by SC", according to Alex Roland and Philip Shiman, "would run ten billion instructions per second to see, hear, speak, and think like a human. The degree of integration required would rival that achieved by the human brain, the most complex instrument known to man." [ 5 ]
The initiative was conceived as an integrated program, similar to the Apollo moon program , [ 5 ] where different subsystems would be created by various companies and academic projects and eventually brought together into a single integrated system. Roland and Shiman wrote that "While most research programs entail tactics or strategy, SC boasted grand strategy, a master plan for an entire campaign." [ 1 ]
The project was funded by the Defense Advanced Research Projects Agency and directed by the Information Processing Technology Office (IPTO). By 1985 it had spent $100 million, and 92 projects were underway at 60 institutions: half in industry, half in universities and government labs. [ 2 ] Robert Kahn , who directed IPTO in those years, provided the project with its early leadership and inspiration. [ 6 ] Clint Kelly managed the SC Initiative for three years and developed many of the specific application programs for DARPA, such as the Autonomous Land Vehicle. [ 7 ]
By the late 1980s, it was clear that the project would fall short of realizing the hoped-for levels of machine intelligence. Program insiders pointed to issues with integration, organization, and communication. [ 8 ] When Jack Schwartz ascended to the leadership of IPTO in 1987, he cut funding to artificial intelligence research (the software component) "deeply and brutally", "eviscerating" the program (wrote Pamela McCorduck ). [ 8 ] Schwartz felt that DARPA should focus its funding only on those technologies which showed the most promise. In his words, DARPA should "surf", rather than "dog paddle", and he felt strongly that AI was not "the next wave". [ 8 ]
The project was superseded in the 1990s by the Accelerated Strategic Computing Initiative and then by the Advanced Simulation and Computing Program . These later programs did not include artificial general intelligence as a goal, but instead focused on supercomputing for large scale simulation, such as atomic bomb simulations. The Strategic Computing Initiative of the 1980s is distinct from the 2015 National Strategic Computing Initiative —the two are unrelated.
Although the program failed to meet its goal of high-level machine intelligence, [ 1 ] it did meet some of its specific technical objectives, for example those of autonomous land navigation. [ 9 ] The Autonomous Land Vehicle program and its sister Navlab project at Carnegie Mellon University, in particular, laid the scientific and technical foundation for many of the driverless vehicle programs that came after it, such as the Demo II and III programs (ALV being Demo I), Perceptor, and the DARPA Grand Challenge . [ 10 ] The use of video cameras plus laser scanners and inertial navigation units pioneered by the SCI ALV program form the basis of almost all commercial driverless car developments today. It also helped to advance the state of the art of computer hardware to a considerable degree.
On the software side, the initiative funded development of the Dynamic Analysis and Replanning Tool (DART), a program that handled logistics using artificial intelligence techniques. This was a huge success, saving the Department of Defense billions during Desert Storm . [ 4 ] Introduced in 1991, DART had by 1995 offset the monetary equivalent of all funds DARPA had channeled into AI research for the previous 30 years combined. [ 11 ] [ 12 ] | https://en.wikipedia.org/wiki/Strategic_Computing_Initiative |
The Strategic National Stockpile ( SNS ), originally called the National Pharmaceutical Stockpile ( NPS ), is the United States' national repository of antibiotics , vaccines , chemical antidotes , antitoxins , and other critical medical supplies . Its website states:
"The Strategic National Stockpile's role is to supplement state and local supplies during public health emergencies. Many states have products stockpiled, as well. The supplies, medicines, and devices for life-saving care contained in the stockpile can be used as a short-term stopgap buffer when the immediate supply of adequate amounts of these materials may not be immediately available." [ 1 ] [ 2 ] [ 3 ]
The actual supply of drugs and supplies that make up the SNS are located in 12 [ 4 ] secret locations strategically placed throughout the US. The locations appear to look like ordinary commercial warehouses. Inside the warehouses, supplies are stacked on shelves that can measure five stories high. [ 5 ] Armed personnel guard the warehouse contents and, according to NPR in 2020, during the COVID-19 global pandemic, "rows of ventilators, which can support people who are having trouble breathing, are kept charged up and ready to roll at a moment's notice." [ 6 ]
The SNS holds a variety of items that would be helpful to the general population in the event of a widespread disease outbreak. [ citation needed ]
Each push pack weighs about 50 short tons (100,000 lb; 45 t; 45,000 kg). [ 5 ] Its contents include broad-spectrum oral and intravenous antibiotics, emergency medicines, IV fluids and kits, airway equipment, bandages, vaccines, antitoxins, and ventilators. [ 7 ] The material deploys by unmarked trucks and airplanes within 12 hours of the receipt of a request by the CDC. The U.S. Marshals Service provides armed security from these federal sites to local destinations. The SNS has adequate vaccines and countermeasures in its stockpile, including 300 million smallpox treatment courses and enough anthrax vaccines to handle a three-city incident. [ 8 ]
CHEMPACKs contain nerve agent antidotes to help in the event of a nerve agent attack or industrial accident. [ 9 ] As of 2015, 1,960 CHEMPACKs were forward-deployed in more than 1,340 locations across each state and territory of the United States. [ 10 ]
During the first decade of the Cold War , the United States accumulated a civil defense medical stockpile at 32 storage facilities. The supplies began to degrade in the 1960s, and were disposed of and the stockpile program closed in 1974. [ 11 ]
In April 1998, President Bill Clinton read The Cobra Event , a Richard Preston novel about a mad scientist spreading a virus throughout New York City. As a result, Clinton held a meeting with scientists and cabinet officials to discuss the threat of bioterrorism. He was so impressed that he asked the experts to meet with senior-level aides at the Department of Defense and in the Department of Health and Human Services. [ 12 ] At that time, the government had stockpiles of medications for military personnel, but did not have them for civilians. Shortly after, The Washington Post wrote that Clinton surprised many in Washington with how fast he and his National Security Council had moved to change that. By October, Clinton signed into law [ 13 ] a new budget of $51 million for pharmaceutical and vaccine stockpiling to be carried out by the CDC. [ 14 ]
The US Congress appropriated funds for the CDC to create a pharmaceutical and vaccine stockpile to handle biological and chemical threats from disease that could affect large numbers of the US civilian population, in Public Law 105–277 dated October 21, 1998. [ 15 ] The original name was the National Pharmaceutical Stockpile (NPS) program, but additional materials have been added to the stockpile since the original authorization . [ citation needed ]
The federal government implemented a pandemic blueprint for distribution of Personal Protective Equipment (P.P.E.) from the Strategic National Stockpile, in coordination with public and private efforts. [ 12 ]
On March 1, 2003, the NPS was renamed the Strategic National Stockpile (SNS) program with joint management by Department of Homeland Security and Department of Health and Human Services . [ 16 ]
In 2005 and in preparation for a predictable pandemic influenza, the Bush administration called for the coordination of domestic production and stockpiling of protective personal equipment. [ 17 ] In 2006, the US Congress funded the integration of protective equipment to a Strategic National Stockpile: 52 million surgical masks and 104 million N95 air-filtration masks were acquired and added. [ 17 ]
Public Health Emergency lists large-scale deployments from the SNS in response to emergencies. [ 16 ]
The SNS successfully deployed 12-hour "push packages" to New York City and Washington, D.C. , in response to the September 11 attacks , and managed inventory (MI) to numerous locations in response to the anthrax terrorist attacks of 2001 .
Following the landfall of Hurricanes Katrina and Rita on the Gulf coast of Mississippi and Louisiana in September 2005, the CDC deployed SNS assets, technical assistance and response units, plus the newly developed and rapidly deployable "federal medical contingency stations" to state-approved locations near or in the disaster areas. The contingency stations, later renamed Federal Medical Stations (FMS), are caches of equipment and supplies provided by the SNS, set up in local "buildings of opportunity" and staffed by local or federal medical personnel to provide triage , low acuity care, and temporary holding of displaced patients for whom local acute care systems are damaged or destroyed.
Since the original deployment following Hurricane Katrina , FMSs have been deployed to support other major disaster responses including Superstorm Sandy . The FMS program is a collaboration between CDC and the Office of Emergency Management under the HHS Assistant Secretary for Preparedness and Response. In 2014, responding to stakeholder feedback, a 50-bed FMS cache was developed and made available in addition to the original 250-bed FMS. [ 18 ]
The SNS released one-quarter of its antiviral drug inventory ( Tamiflu and Relenza ), personal protective equipment ( PPE ), and respiratory protection devices, to help every US state respond to the H1N1 Influenza 2009 swine influenza outbreak in the United States. [ 19 ]
After the 2009 flu pandemic in which tens of millions of masks were distributed, fiscal constraints imposed by the agency's $600 million annual budget led officials to decide that replenishing a large inventory of N95 face masks was of less priority than stockpiling other equipment and drugs for diseases and disasters. [ 20 ]
During the first Trump administration , Trump falsely claimed his administration inherited an ‘empty’ stockpile from the previous administration. [ 21 ]
From 2017 to 2019, the Trump administration failed to replace masks and other supplies used in earlier disasters. In May 2020, in a House subcommittee meeting, whistle-blower Dr. Rick Bright, former director of the Department of Health and Human Services' Biomedical Advanced Research and Development Authority, explained that the Trump administration had ignored his early warnings to stock up on masks and other supplies to combat the coronavirus. [ 22 ]
The Office of the Assistant Secretary for Preparedness and Response (ASPR) of the Department of Health and Human Services managed the Strategic National Stockpile from October 1, 2018. Prior to that, the stockpile was managed by the Centers for Disease Control and Prevention (CDC).
At the beginning of the COVID-19 pandemic , SNS was involved in providing supplies to the repatriation efforts of State Department employees from China and Japan . It also shipped thousands of N95 masks to the states of Washington , Massachusetts , and New York to respond to their surging infection rates. [ 23 ] In March 2020, SNS director Steven Adams said it had stockpiled 13 million masks and had placed an order for 500 million more by September 2021. [ 4 ] The SNS was criticized for containing over 5 million expired masks. [ 4 ]
In the early stages of the pandemic the availability of mechanical ventilators to sustain patients became a serious concern. The stockpile had added ventilators to its inventory during the 2000s and 2010s and established plans to distribute them, although a series of studies and reports expressed doubt that the ventilators would be sufficient in an influenza pandemic. [ 24 ] Although ventilators were ordered for the stockpile under the Defense Production Act , medical practices shifted toward other respiratory treatments as understanding of the disease improved, resulting in a surplus of unused machines. [ 25 ]
On March 29, 2020, HHS accepted a donation of 30 million doses of hydroxychloroquine sulfate from Sandoz and one million doses of Resochin ( chloroquine phosphate ) from Bayer Pharmaceuticals for use in treating hospitalized COVID-19 patients or in clinical trials. The SNS will work with the Federal Emergency Management Agency (FEMA) to deliver the doses to states. [ 26 ] The U.S. Food and Drug Administration later withdrew Emergency Use Authorization for these drugs after studies found they had no benefit for treating COVID-19.
On April 1, 2020, Department of Homeland Security officials told reporters that the cache of personal protective equipment stored by the SNS was almost depleted due to the COVID-19 pandemic in the United States . This was later confirmed by President Donald Trump . PPE from the SNS was sent directly to health facilities across the country. [ 27 ]
During the COVID-19 pandemic , states criticized the lack of availability of medical supplies from the federal stockpile. At a White House press conference on April 2, 2020, senior advisor Jared Kushner commented "The notion of the federal stockpile was it's supposed to be our stockpile. It's not supposed to be states' stockpiles that they then use." [ 28 ] [ 29 ] [ 30 ] The idea that the stockpile was not a backup for states that run out of supplies was disputed by Governor Laura Kelly of Kansas, [ 31 ] among others.
The description of the stockpile, as listed on its website, was changed the day after Kushner's remarks to better align with them, from:
"Strategic National Stockpile is the nation’s largest supply of life-saving pharmaceuticals and medical supplies for use in a public health emergency severe enough to cause local supplies to run out. When state, local, tribal, and territorial responders request federal assistance to support their response efforts, the stockpile ensures that the right medicines and supplies get to those who need them most during an emergency. Organized for scalable response to a variety of public health threats, this repository contains enough supplies to respond to multiple large-scale emergencies simultaneously." [ 32 ] [ 1 ] [ 2 ] [ 3 ]
to:
"The Strategic National Stockpile's role is to supplement state and local supplies during public health emergencies. Many states have products stockpiled, as well. The supplies, medicines, and devices for life-saving care contained in the stockpile can be used as a short-term stopgap buffer when the immediate supply of adequate amounts of these materials may not be immediately available." [ 1 ] [ 2 ] [ 3 ]
Washington State announced on April 5, 2020, that it would return more than 400 ventilators it had received from the Stockpile "... to help states facing higher numbers of COVID-19 cases." [ 33 ]
On April 8, 2020, HHS contracted with DuPont for 2.25 million Tyvek suits to be delivered to the SNS to be used as PPE for frontline healthcare workers. [ 34 ] By April 13, 2020, HHS used the Defense Production Act (DPA) to contract for ventilator production with General Electric , Hill-Rom , Medtronic , ResMed , and Vyaire. Additionally, they contracted with Hamilton and Zoll for ventilator production without using the DPA. The seven contracts were expected to produce 137,431 ventilators by the end of 2020 at a total cost of $1.435 billion. [ 35 ]
In January 2022, amidst a surge in cases caused by the more contagious Omicron variant , the CDC updated its guidance [ 36 ] to emphasize the greater protection from wearing N95 masks in indoor public spaces. [ 37 ] The Biden administration announced it would distribute 400 million N95 masks from the SNS [ 38 ] which started arriving in late January and early February. [ 39 ]
The stockpile was again used during the 2022 monkeypox outbreak . In May 2022, the Centers for Disease Control and Prevention (CDC) confirmed that the United States had released some of its Jynneos monkeypox vaccine supply from the Strategic National Stockpile for people who are "high-risk". [ 40 ] On May 27, 2022, the CDC specified the indications for the Jynneos vaccine: research laboratory personnel, clinical laboratory personnel performing diagnostic testing for orthopoxviruses, designated response team members, and health care personnel who administer live smallpox vaccine or care for patients infected with orthopoxviruses. [ 41 ]
In March of 2021, the New York Times alleged mismanagement involving the Strategic National Stockpile, stating, "In one telling example, The Times found, the government approved a plan in 2015 to buy tens of millions of N95 respirators — lifesaving equipment for medical workers that has been in short supply because of Covid-19 — but the masks repeatedly lost out in the competition for funding over the years leading up to the pandemic, according to five former federal health officials involved in the effort. During the same period, Emergent sold the government nearly $1 billion in anthrax vaccines, financial disclosures show." [ 42 ]
Section 403 of the Pandemic and All-Hazards Preparedness Reauthorization Act of 2013 (H.R. 307; 113th Congress) reauthorized the Strategic National Stockpile for FY2014-FY2018. It required the Secretary of Health and Human Services to: | https://en.wikipedia.org/wiki/Strategic_National_Stockpile |
In economics and game theory , the decisions of two or more players are called strategic complements if they mutually reinforce one another, and they are called strategic substitutes if they mutually offset one another. These terms were originally coined by Bulow, Geanakoplos, and Klemperer (1985). [ 1 ]
To see what is meant by 'reinforce' or 'offset', consider a situation in which the players all have similar choices to make, as in the paper of Bulow et al., where the players are all imperfectly competitive firms that must each decide how much to produce. Then the production decisions are strategic complements if an increase in the production of one firm increases the marginal revenues of the others, because that gives the others an incentive to produce more too. This tends to be the case if there are sufficiently strong aggregate increasing returns to scale and/or the demand curves for the firms' products have a sufficiently low own-price elasticity . On the other hand, the production decisions are strategic substitutes if an increase in one firm's output decreases the marginal revenues of the others, giving them an incentive to produce less.
According to Russell Cooper and Andrew John, strategic complementarity is the basic property underlying examples of multiple equilibria in coordination games . [ 2 ]
Mathematically, consider a symmetric game with two players that each have payoff function $\Pi(x_{i},x_{j})$, where $x_{i}$ represents the player's own decision, and $x_{j}$ represents the decision of the other player. Assume $\Pi$ is increasing and concave in the player's own strategy $x_{i}$. Under these assumptions, the two decisions are strategic complements if an increase in each player's own decision $x_{i}$ raises the marginal payoff $\frac{\partial \Pi_{j}}{\partial x_{j}}$ of the other player. In other words, the decisions are strategic complements if the second derivative $\frac{\partial^{2}\Pi_{j}}{\partial x_{j}\partial x_{i}}$ is positive for $i\neq j$. Equivalently, this means that the function $\Pi$ is supermodular .
On the other hand, the decisions are strategic substitutes if $\frac{\partial^{2}\Pi_{j}}{\partial x_{j}\partial x_{i}}$ is negative, that is, if $\Pi$ is submodular .
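As a concrete check of these definitions, the sketch below (a hypothetical example, not the model used by Bulow et al.) numerically estimates the cross-partial $\frac{\partial^{2}\Pi}{\partial x_{j}\partial x_{i}}$ for a simple differentiated-goods pricing payoff; a positive value indicates strategic complements and a negative value strategic substitutes.

```python
# Hypothetical payoff Pi_i(x_i, x_j) = (x_i - c) * (a - b*x_i + d*x_j).
# With d > 0 the cross-partial equals d > 0, so the decisions are strategic
# complements; setting d < 0 would make them strategic substitutes.
a, b, c, d = 10.0, 1.0, 1.0, 0.5

def payoff(x_own, x_other):
    return (x_own - c) * (a - b * x_own + d * x_other)

def cross_partial(f, x_own, x_other, h=1e-4):
    # central finite-difference estimate of d^2 f / (dx_own dx_other)
    return (f(x_own + h, x_other + h) - f(x_own + h, x_other - h)
            - f(x_own - h, x_other + h) + f(x_own - h, x_other - h)) / (4 * h * h)

print(cross_partial(payoff, 3.0, 3.0))  # ~0.5 > 0: strategic complements
```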
In their original paper, Bulow et al. use a simple model of competition between two firms to illustrate their ideas.
The revenue for firm x with production rates $(x_{1},x_{2})$ is given by
while the revenue for firm y with production rate $y_{2}$ in market 2 is given by
At any interior equilibrium, $(x_{1}^{*},x_{2}^{*},y_{2}^{*})$, we must have
Using vector calculus, geometric algebra, or differential geometry, Bulow et al. showed that the sensitivity of the Cournot equilibrium to changes in $p_{1}$ can be calculated in terms of second partial derivatives of the payoff functions:
When $1/4\leq p_{1}\leq 2/3$,
Thus, as price is increased in market 1, firm x sells more in market 1 and less in market 2, while firm y sells more in market 2. If the Cournot equilibrium of this model is calculated explicitly, we find
A game with strategic complements is also called a supermodular game . This was first formalized by Topkis, [ 3 ] and studied by Vives. [ 4 ] There are efficient algorithms for finding pure-strategy Nash equilibria in such games. [ 5 ] [ 6 ] | https://en.wikipedia.org/wiki/Strategic_complements |
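A sketch of why supermodularity is computationally convenient, using the same hypothetical pricing game as above (an assumed example, not an algorithm from the cited literature): with strategic complements, best responses are increasing in the rival's choice, and simply iterating them converges to a pure-strategy Nash equilibrium.

```python
# Best-response iteration in the hypothetical pricing game
#   Pi_i(x_i, x_j) = (x_i - c) * (a - b*x_i + d*x_j),  with d > 0 (strategic complements).
a, b, c, d = 10.0, 1.0, 1.0, 0.5

def best_response(x_other):
    # argmax over x_i of the quadratic payoff: the first-order condition
    # a - 2*b*x_i + d*x_other + b*c = 0 gives the maximizer directly.
    return (a + b * c + d * x_other) / (2 * b)

x1, x2 = 0.0, 0.0
for _ in range(100):
    x1, x2 = best_response(x2), best_response(x1)

# Converges to the symmetric equilibrium (a + b*c) / (2*b - d), about 7.33 here.
print(round(x1, 4), round(x2, 4))
```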
Strategic design is the application of future-oriented design principles in order to increase an organization 's innovative and competitive qualities. Its foundations lie in the analysis of external and internal trends and data, which enables design decisions to be made on the basis of facts rather than aesthetics or intuition. The discipline is mostly practiced by design agencies or by internal development departments.
"Traditional definitions of design often focus on creating discrete solutions—be it a product, a building, or a service. Strategic design is about applying some of the principles of traditional design to "big picture" systemic challenges like business growth, health care , education , and climate change . It redefines how problems are approached, identifies opportunities for action, and helps deliver more complete and resilient solutions." [ 1 ] The traditional concept of design is mainly associated with artistic work. The addition of the term strategic expands such conception so that creativity is linked with innovation , allowing ideas to become practical and profitable applications "that can be managed effectively, acquired, used and/or consumed by target audiences." [ 2 ] Strategic design draws from the body of literature that emerged in recent years, which outline strategic design principles that provide insights and new methods in the areas of merchandising , consuming , and ownership. [ 3 ] There are at least four factors that demonstrate the value of strategic design and these are:
Businesses are the main consumers of strategic design, but the public, political and not-for-profit sectors are also making increasing use of the discipline. Its applications are varied, yet often aim to strengthen one of the following: product branding , product development , corporate identity , corporate branding, operating and business models , and service delivery .
Strategic design has become increasingly crucial in recent years, as businesses and organisations compete for a share of today's global and fast-paced marketplace.
"To survive in today’s rapidly changing world, products and services must not only anticipate change, but drive it. Businesses that won’t lose market share to those that do. There have been many examples of strategic design breakthroughs over the years and in an increasingly competitive global market with rapid product cycles, strategic design is becoming more important". [ 5 ]
Strategic design can play a role in helping to resolve the following common problems: | https://en.wikipedia.org/wiki/Strategic_design |
In game theory , a strategy A dominates another strategy B if A will always produce a better result than B , regardless of how any other player plays. Some very simple games (called straightforward games) can be solved using dominance.
A player can compare two strategies, A and B, to determine which one is better. The result of the comparison is one of:
This notion can be generalized beyond the comparison of two strategies.
Strategy: A complete contingent plan for a player in the game. A complete contingent plan is a full specification of a player's behavior, describing each action a player would take at every possible decision point. Because information sets represent points in a game where a player must make a decision, a player's strategy describes what that player will do at each information set. [ 2 ]
Rationality: The assumption that each player acts in a way that is designed to bring about what he or she most prefers given probabilities of various outcomes; von Neumann and Morgenstern showed that if these preferences satisfy certain conditions, this is mathematically equivalent to maximizing a payoff. A straightforward example of maximizing payoff is that of monetary gain, but for the purpose of a game theory analysis, this payoff can take any desired outcome—cash reward, minimization of exertion or discomfort, or promoting justice can all be modeled as amassing an overall “utility” for the player. The assumption of rationality states that players will always act in the way that best satisfies their ordering from best to worst of various possible outcomes. [ 2 ]
Common Knowledge : The assumption that each player has knowledge of the game, knows the rules and payoffs associated with each course of action, and realizes that every other player has this same level of understanding. This is the premise that allows a player to make a value judgment on the actions of another player, backed by the assumption of rationality, into consideration when selecting an action. [ 2 ]
If a strictly dominant strategy exists for one player in a game, that player will play that strategy in each of the game's Nash equilibria . If both players have a strictly dominant strategy, the game has only one unique Nash equilibrium, referred to as a "dominant strategy equilibrium". However, that Nash equilibrium is not necessarily "efficient", meaning that there may be non-equilibrium outcomes of the game that would be better for both players. The classic game used to illustrate this is the Prisoner's Dilemma .
Strictly dominated strategies cannot be a part of a Nash equilibrium, and as such, it is irrational for any player to play them. On the other hand, weakly dominated strategies may be part of Nash equilibria. For instance, consider a symmetric game in which each player chooses strategy C or D: both players receive a payoff of 1 if both play C, and 0 in every other case.
Strategy C weakly dominates strategy D. Consider playing C : If one's opponent plays C, one gets 1; if one's opponent plays D, one gets 0. Compare this to D, where one gets 0 regardless. Since in one case, one does better by playing C instead of D and never does worse, C weakly dominates D . Despite this, ( D , D ) {\displaystyle (D,D)} is a Nash equilibrium. Suppose both players choose D . Neither player will do any better by unilaterally deviating—if a player switches to playing C, they will still get 0. This satisfies the requirements of a Nash equilibrium. Suppose both players choose C. Neither player will do better by unilaterally deviating—if a player switches to playing D, they will get 0. This also satisfies the requirements of a Nash equilibrium.
The iterated elimination (or deletion, or removal) of dominated strategies (also denoted IESDS, IDSDS, or IRSDS) is one common technique for solving games that involves iteratively removing dominated strategies. In the first step, all dominated strategies are removed from the strategy space of each of the players, since no rational player would ever play these strategies. This results in a new, smaller game. Some strategies—that were not dominated before—may be dominated in the smaller game. The first step is repeated, creating a new even smaller game, and so on.
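A minimal sketch of the iterated elimination procedure is given below, demonstrated on Prisoner's Dilemma payoffs; the specific numbers are illustrative assumptions, not taken from this text.

```python
# Iterated elimination of strictly dominated strategies for a two-player game
# given as a payoff dictionary mapping (strategy1, strategy2) -> (payoff1, payoff2).
def iesds(strats1, strats2, payoff):
    def value(own, other, player):
        profile = (own, other) if player == 0 else (other, own)
        return payoff[profile][player]

    changed = True
    while changed:
        changed = False
        for player, mine, theirs in ((0, strats1, strats2), (1, strats2, strats1)):
            for s in list(mine):
                # s is strictly dominated if some other strategy t does strictly better
                # against every remaining opponent strategy.
                if any(all(value(t, o, player) > value(s, o, player) for o in theirs)
                       for t in mine if t != s):
                    mine.remove(s)
                    changed = True
    return strats1, strats2

# Prisoner's Dilemma with standard illustrative payoffs: Defect strictly dominates Cooperate.
pd = {("C", "C"): (3, 3), ("C", "D"): (0, 5), ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
print(iesds(["C", "D"], ["C", "D"], pd))  # (['D'], ['D'])
```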
This process is valid since it is assumed that rationality among players is common knowledge , that is, each player knows that the rest of the players are rational, and each player knows that the rest of the players know that he knows that the rest of the players are rational, and so on ad infinitum (see Aumann, 1976). | https://en.wikipedia.org/wiki/Strategic_dominance |
Strategic environmental assessment ( SEA ) is a systematic decision support process aiming to ensure that environmental and possibly other sustainability aspects are considered effectively in policy, plan, and program making. In this context, following Fischer (2007) [ 1 ] SEA may be seen as:
Effective SEA works within a structured and tiered decision framework, aiming to support more effective and efficient decision-making for sustainable development and improved governance by providing for a substantive focus regarding questions, issues and alternatives to be considered in policy, plan and program (PPP) making.
SEA is an evidence-based instrument aiming to add scientific rigor to PPP making by using suitable assessment methods and techniques. Ahmed and Sánchez-Triana (2008) developed an approach to the design and implementation of public policies that follows a continuous process rather than a discrete intervention. [ 2 ]
The European Union Directive on Environmental Impact Assessments (85/337/EEC, also known as the EIA Directive ) only applied to certain projects. [ 3 ] This was seen as deficient, as it only dealt with specific effects at the local level, whereas many environmentally damaging decisions had already been made at a more strategic level (for example, the fact that new infrastructure may generate an increased demand for travel).
The concept of strategic assessments originated from regional development / land use planning in the developed world. In 1981 the U.S. Housing and Urban Development Department published the Area-wide Impact Assessment Guidebook . In Europe, the Convention on Environmental Impact Assessment in a Transboundary Context (the so-called Espoo Convention ) laid the foundations for the introduction of SEA in 1991. In 2003, the Espoo Convention was supplemented by a Protocol on Strategic Environmental Assessment .
The European SEA Directive 2001/42/EC required that all member states of the European Union should have ratified the Directive into their own country's law by 21 July 2004. [ 4 ]
Countries of the EU started implementing the land use aspects of SEA first; some took longer to adopt the directive than others, but implementation of the directive can now be seen as complete. Many EU nations have a longer history of strong Environmental Appraisal, including Denmark , the Netherlands , Finland and Sweden . The newer member states of the EU moved quickly to implement the directive.
For the most part, an SEA is conducted before a corresponding EIA is undertaken. This means that information on the environmental impact of a plan can cascade down through the tiers of decision making and can be used in an EIA at a later stage. This should reduce the amount of work that needs to be undertaken. A handover procedure is foreseen.
The SEA Directive only applies to plans and programmes, not policies, although policies within plans are likely to be assessed, and SEA can be applied to policies if needed; in the UK it very often is.
The structure of SEA (under the Directive) is based on the following phases:
The EU directive also includes impacts other than the environmental, such as material assets and archaeological sites. In most Western European states, this has been broadened further to include economic and social aspects of sustainability .
SEA should ensure that plans and programs consider the environmental effects they cause. If those environmental effects are part of the overall decision taking, it is called Strategic Impact Assessment .
SEA is a legally enforced assessment procedure required by Directive 2001/42/EC (known as the SEA Directive). [ 4 ] The SEA Directive aims at introducing systematic assessment of the environmental effects of strategic land use related plans and programs. It typically applies to regional and local development, waste and transport plans within the European Union. Some plans, such as finance and budget plans or civil defence plans, are exempt from the SEA Directive. It also only applies to plans that are required by law, which excludes national governments' plans and programs, as their plans are 'voluntary', whereas local and regional governments are usually required to prepare theirs.
SEA within the UK is complicated by different regulations, guidance and practice between England, Scotland, Wales and Northern Ireland. In particular, the SEA legislation in Scotland (and in Northern Ireland, which specifically refers to the Regional Development Strategy) contains an expectation that SEA will apply to strategies as well as plans and programmes. In the UK, SEA is inseparable from the term ' sustainability ', and an SEA is expected to be carried out as part of a wider Sustainability Appraisal (SA), which was already a requirement for many types of plan before the SEA Directive and includes social and economic factors in addition to environmental ones. Essentially, an SA is intended to better inform decision makers on the sustainability aspects of the plan and ensure the full impact of the plan on sustainability is understood.
The United Kingdom in its strategy for sustainable development, A Better Quality of Life (May 1999), explained sustainable development in terms of four objectives. These are:
These headline objectives are usually used and applied to local situations in order to assess the impact of the plan or program.
The Protocol on Strategic Environmental Assessment was negotiated by the member States of the UNECE (in this instance Europe, Caucasus and Central Asia). It required ratification by 16 States to come into force, which it did in July 2010. It is now open to all UN Member States. Besides its potentially broader geographical application (global), the Protocol differs from the corresponding European Union Directive in its non-mandatory application to policies and legislation – not just plans and programmes. The Protocol also places a strong emphasis on the consideration of health, and there are other more subtle differences between the two instruments.
SEA in New Zealand is part of an integrated planning and assessment process and unlike the US is not used in the manner of Environmental impact assessment . The Resource Management Act 1991 has, as a principal objective, the aim of sustainable management. SEA is increasingly being considered for transportation projects. [ 5 ]
Development assistance is increasingly being provided through strategic-level interventions, aimed to make aid more effective. SEA meets the need to ensure environmental considerations are taken into account in this new aid context. Applying SEA to development co-operation provides the environmental evidence to support more informed decision making, and to identify new opportunities by encouraging a systematic and thorough examination of development options.
The OECD Development Assistance Committee (DAC) Task Team on SEA has developed guidance on how to apply SEA to development co-operation. The document explains the benefits of using SEA in development co-operation and sets out key steps for its application, based on recent experiences. | https://en.wikipedia.org/wiki/Strategic_environmental_assessment |
Strategic fair division studies problems of fair division , in which participants cooperate to subdivide goods or resources fairly, from a point of view in which the participants are assumed to hide their preferences and act strategically in order to maximize their own utility, rather than playing sincerely according to their true preferences.
To illustrate the difference between strategic fair division and classic fair division, consider the divide and choose procedure for dividing a cake among two agents. In classic fair division, it is assumed that the cutter cuts the cake into two pieces that are equal in his eyes, and thus he always gets a piece that he values at exactly 1/2 of the total cake value. However, if the cutter knows the chooser's preferences, he can get much more than 1/2 by acting strategically. [ 1 ] For example, suppose the cutter values a piece by its size while the chooser values a piece by the amount of chocolate in it. So the cutter can cut the cake into two pieces with almost the same amount of chocolate, such that the smaller piece has slightly more chocolate. Then, the chooser will take the smaller piece and the cutter will win the larger piece, which may be worth much more than 1/2 (depending on how the chocolate is distributed).
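A toy sketch of this strategic behaviour follows, with hypothetical slice sizes and chocolate amounts: the cutter, knowing the chooser cares only about chocolate, searches for the split that leaves the chooser the chocolate-rich but small piece.

```python
# Hypothetical cake: each slice is a (size, chocolate) pair. The cutter values size,
# the chooser values chocolate, and the cutter knows the chooser's preference.
slices = [(3.0, 0.5), (1.0, 2.0), (4.0, 0.4), (2.0, 1.9)]

def cut_strategically(slices):
    """Return (cutter_value, cutter_piece, chooser_piece) for the cutter's best split."""
    best = None
    for mask in range(1, 2 ** len(slices) - 1):
        piece_a = [s for i, s in enumerate(slices) if mask >> i & 1]
        piece_b = [s for i, s in enumerate(slices) if not mask >> i & 1]
        # The chooser picks the piece with more chocolate; the cutter keeps the other.
        chooser_piece, cutter_piece = sorted(
            (piece_a, piece_b), key=lambda p: sum(c for _, c in p), reverse=True)
        cutter_value = sum(size for size, _ in cutter_piece)
        if best is None or cutter_value > best[0]:
            best = (cutter_value, cutter_piece, chooser_piece)
    return best

value, cutter_piece, chooser_piece = cut_strategically(slices)
print("cutter keeps size", value, "out of", sum(s for s, _ in slices))  # 7.0 out of 10.0
```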
The research in strategic fair division has two main branches.
One branch is related to game theory and studies the equilibria in games created by fair division algorithms:
The other branch is related to mechanism design and aims to find truthful mechanisms for fair division, in particular: | https://en.wikipedia.org/wiki/Strategic_fair_division |
A strategic information system (SIS) is a computer system used by organizations to analyze market and competitor information, helping them plan and make their business more successful. It shapes the corporate strategy of an organization by providing a connection between the organization's demands and the latest information technology. This connection helps the organization adapt to continuous changes in the corporate environment, thereby gaining a competitive advantage. [ 1 ]
SIS supports decision-making by providing valuable insights to executives and managers . By integrating data from multiple internal and external sources, SIS provides a comprehensive view of an organization's performance and market trends . [ 2 ]
SIS can give a business an advantage over its competitors by offering insightful data. It also helps in identifying opportunities and risks. [ 3 ]
SIS aids in achieving a company's long-term goals and objectives. [ 4 ] | https://en.wikipedia.org/wiki/Strategic_information_system |
A strategic move in game theory is an action taken by a player outside the defined actions of the game in order to gain a strategic advantage and increase one's payoff. Strategic moves can either be unconditional moves or response rules . The key characteristics of a strategic move are that it involves a commitment from the player, meaning the player can only restrict their own choices and that the commitment has to be credible , meaning that once employed it must be in the interest of the player to follow through with the move. Credible moves should also be observable to the other players. [ 1 ] [ 2 ]
Strategic moves are not warnings or assurances. Warnings and assurances are merely statements of a player's interest, rather than an actual commitment from the player.
The term was coined by Thomas Schelling in his 1960 book, The Strategy of Conflict , and has gained wide currency in political science and industrial organization . [ 3 ]
| https://en.wikipedia.org/wiki/Strategic_move |
Strategic pluralism (also known as the dual-mating strategy ) is a theory in evolutionary psychology regarding human mating strategies that suggests women have evolved to evaluate men in two categories: whether they are reliable long term providers, and whether they contain high quality genes. [ 1 ] The theory of strategic pluralism was proposed by Steven Gangestad and Jeffry Simpson, two professors of psychology at the University of New Mexico and Texas A&M University, respectively.
Although strategic pluralism is believed to occur for both animals and humans, the majority of experiments have been performed with humans. One experiment concluded that between short term and long-term relationships, males and females prioritized different things. It was shown that both preferred physical attractiveness for short term mates. However, for long term, females preferred males with traits that indicated that they could be better caretakers, whereas the males did not change their priorities. [ 2 ]
The experimenters used the following setup: subjects were given an overall 'budget' and asked to assign points to different traits. [ 3 ] For long-term mates, women gave more points to social and kindness traits, agreeing with results found in other studies suggesting that females prefer long-term mates who would provide resources and emotional security for them, as opposed to physically attractive mates. [ 4 ] [ 5 ] The females also prefer males who can offer them more financial security as this would help them raise their offspring. [ 6 ]
Females have also been found to choose males with more feminine appearances because of a (hypothesized) inverse relationship between a male's facial attractiveness and the effort he is willing to spend in raising offspring. That is, in theory, a more attractive male would put in less work as a caretaker while a less attractive male would put in more work. [ 7 ] On average, there is greater variability in male characteristics than in female characteristics. This suggests there are enough males suited for short-term relationships as well as males suited for longer relationships. [ 8 ]
Bellis and Baker calculated that if the dual-mating strategy does occur, the rate of paternal discrepancy would be between 6.9 and 13.8%. [ 9 ] Taking kin selection into account, Gaulin, McBurney, and Brakeman-Wartell hypothesised that the mother's side of the family is more certain that the child is their kin and therefore invests more. Based on this matrilateral bias they calculated the rate of cuckoldry to be roughly 13% to 20%. [ 10 ] These estimates were refuted by Y-chromosome tracking [ 11 ] and HLA tracking [ 12 ] [ 13 ] , which put the estimates at 1–2%. David Buss , a prominent evolutionary psychologist, cited this evidence as a reason to be sceptical of the dual-mating strategy hypothesis. [ 14 ]
Strategies for engineered negligible senescence ( SENS ) is a range of proposed regenerative medical therapies, either planned or currently in development, for the periodic repair of all age-related damage to human tissue. These therapies have the ultimate aim of maintaining a state of negligible senescence in patients and postponing age-associated disease . [ 1 ] SENS was first defined by British biogerontologist Aubrey de Grey . Many mainstream scientists believe that it is a fringe theory . [ 2 ] De Grey later highlighted similarities and differences of SENS to subsequent categorization systems of the biology of aging, such as the highly influential Hallmarks of Aging published in 2013. [ 3 ] [ 4 ]
While some biogerontologists support the SENS program, others contend that the ultimate goals of de Grey's programme are too speculative given the current state of technology. [ 5 ] [ 6 ] The 31-member Research Advisory Board of de Grey's SENS Research Foundation have signed an endorsement of the plausibility of the SENS approach. [ 7 ]
The term " negligible senescence " was first used in the early 1990s by professor Caleb Finch to describe organisms such as lobsters and hydras , which do not show symptoms of aging. The term "engineered negligible senescence" first appeared in print in Aubrey de Grey 's 1999 book The Mitochondrial Free Radical Theory of Aging . [ 8 ] De Grey defined SENS as a "goal-directed rather than curiosity-driven" [ 9 ] approach to the science of aging, and "an effort to expand regenerative medicine into the territory of aging". [ 10 ]
The ultimate objective of SENS is the eventual elimination of age-related diseases and infirmity by repeatedly reducing the state of senescence in the organism. The SENS project consists in implementing a series of periodic medical interventions designed to repair, prevent or render irrelevant all the types of molecular and cellular damage that cause age-related pathology and degeneration, in order to avoid debilitation and death from age-related causes. [ 1 ]
As described by SENS, the following table details major ailments and the program's proposed preventative strategies: [ 11 ]
While some fields mentioned as branches of SENS are supported by the medical research community, e.g., stem cell research , anti-Alzheimers research and oncogenomics , the SENS programme as a whole has been a highly controversial proposal. Many of its critics argued in 2005 that the SENS agenda was fanciful and that the complicated biomedical phenomena involved in aging contain too many unknowns for SENS to be fully implementable in the foreseeable future. [ 12 ]
Cancer may deserve special attention as an aging-associated disease , but the SENS claim that nuclear DNA damage only matters for aging because of cancer has been challenged in other literature, [ 13 ] as well as by material studying the DNA damage theory of aging . More recently, biogerontologist Marios Kyriazis has criticised the clinical applicability of SENS [ 14 ] [ 15 ] by claiming that such therapies, even if developed in the laboratory, would be practically unusable by the general public. [ 16 ] De Grey responded to one such criticism. [ further explanation needed ] [ 17 ]
In November 2005, 28 biogerontologists published a statement of criticism in EMBO Reports , "Science fact and the SENS agenda: what can we reasonably expect from ageing research?," [ 12 ] arguing "each one of the specific proposals that comprise the SENS agenda is, at our present stage of ignorance, exceptionally optimistic," [ 12 ] and that some of the specific proposals "will take decades of hard work [to be medically integrated], if [they] ever prove to be useful." [ 12 ] The researchers argue that while there is "a rationale for thinking that we might eventually learn how to postpone human illnesses to an important degree," [ 12 ] increased basic research, rather than the goal-directed approach of SENS, is currently the scientifically appropriate goal.
In February 2005, the MIT Technology Review published an article by Sherwin Nuland , a Clinical Professor of Surgery at Yale University and the author of How We Die , [ 18 ] that drew a skeptical portrait of SENS; at the time, de Grey was a computer associate in the Flybase Facility of the Department of Genetics at the University of Cambridge . [ 19 ] While Nuland praised de Grey's intellect and rhetoric, he criticized the SENS framework both for oversimplifying "enormously complex biological problems" and for promising relatively near-at-hand solutions to those unsolved problems. [ 19 ]
During June 2005, David Gobel , CEO and co-founder of the Methuselah Foundation with de Grey, offered Technology Review $20,000 to fund a prize competition to publicly clarify the viability of the SENS approach. In July 2005, Jason Pontin announced a $20,000 prize, funded 50/50 by Methuselah Foundation and MIT Technology Review . The contest was open to any molecular biologist, with a record of publication in biogerontology, who could prove that the alleged benefits of SENS were "so wrong that it is unworthy of learned debate." [ 20 ] Technology Review received five submissions to its challenge. In March 2006, Technology Review announced that it had chosen a panel of judges for the Challenge: Rodney Brooks , Anita Goel , Nathan Myhrvold , Vikram Sheel Kumar , and Craig Venter . [ 21 ] Three of the five submissions met the terms of the prize competition. They were published by Technology Review on June 9, 2006. On July 11, 2006, Technology Review published the results of the SENS Challenge. [ 22 ]
In the end, no one won the $20,000 prize. The judges felt that no submission met the criterion of the challenge and discredited SENS, although they unanimously agreed that one submission, by Preston Estep and his colleagues, was the most eloquent. Craig Venter succinctly expressed the prevailing opinion: "Estep et al. ... have not demonstrated that SENS is unworthy of discussion, but the proponents of SENS have not made a compelling case for it." [ 22 ] Summarizing the judges' deliberations, Pontin wrote in 2006 that SENS is "highly speculative" and that many of its proposals could not be reproduced with current scientific technology. Myhrvold described SENS as belonging to a kind of "antechamber of science" where they wait until technology and scientific knowledge advance to the point where it can be tested. [ 22 ] [ 23 ] Estep and his coauthors challenged the result of the contest by saying both that the judges had ruled "outside their area of expertise" and had failed to consider de Grey's frequent misrepresentations of the scientific literature. [ 24 ]
The SENS Research Foundation is a non-profit organization co-founded by Michael Kope, Aubrey de Grey , Jeff Hall, Sarah Marr and Kevin Perrott, which is based in California , United States. Its activities include SENS-based research programs and public relations work for the acceptance of and interest in related research. [ citation needed ] | https://en.wikipedia.org/wiki/Strategies_for_engineered_negligible_senescence |
In combinatorial game theory , the strategy-stealing argument is a general argument that shows, for many two-player games , that the second player cannot have a guaranteed winning strategy . The strategy-stealing argument applies to any symmetric game (one in which either player has the same set of available moves with the same results, so that the first player can "use" the second player's strategy) in which an extra move can never be a disadvantage. [ 1 ] A key property of a strategy-stealing argument is that it proves that the first player can win (or possibly draw) the game without actually constructing such a strategy. So, although it might prove the existence of a winning strategy, the proof gives no information about what that strategy is.
The argument works by obtaining a contradiction . A winning strategy is assumed to exist for the second player, who is using it. But then, roughly speaking, after making an arbitrary first move – which by the conditions above is not a disadvantage – the first player may then also play according to this winning strategy. The result is that both players are guaranteed to win – which is absurd, thus contradicting the assumption that such a strategy exists.
Strategy-stealing was invented by John Nash in the 1940s to show that the game of hex is always a first-player win, as ties are not possible in this game. [ 2 ] However, Nash did not publish this method, and József Beck credits its first publication to Alfred W. Hales and Robert I. Jewett, in the 1963 paper on tic-tac-toe in which they also proved the Hales–Jewett theorem . [ 2 ] [ 3 ] Other examples of games to which the argument applies include the m , n , k -games such as gomoku . In the game of Chomp strategy stealing shows that the first player has a winning strategy in any rectangular board (other than 1x1). In the game of Sylver coinage , strategy stealing has been used to show that the first player can win in certain positions called "enders". [ 4 ] In all of these examples the proof reveals nothing about the actual strategy.
A strategy-stealing argument can be used on the example of the game of tic-tac-toe , for a board and winning rows of any size. [ 2 ] [ 3 ] Suppose that the second player (P2) is using a strategy S which guarantees a win. The first player (P1) places an X in an arbitrary position. P2 responds by placing an O according to S . But if P1 ignores the first random X , P1 is now in the same situation as P2 on P2's first move: a single enemy piece on the board. P1 may therefore make a move according to S – that is, unless S calls for another X to be placed where the ignored X is already placed. But in this case, P1 may simply place an X in some other random position on the board, the net effect of which will be that one X is in the position demanded by S , while another is in a random position, and becomes the new ignored piece, leaving the situation as before. Continuing in this way, S is, by hypothesis, guaranteed to produce a winning position (with an additional ignored X of no consequence). But then P2 has lost – contradicting the supposition that P2 had a guaranteed winning strategy. Such a winning strategy for P2, therefore, does not exist, and tic-tac-toe is either a forced win for P1 or a tie. (Further analysis shows it is in fact a tie.)
The same proof holds for any strong positional game .
There is a class of chess positions called Zugzwang in which the player obligated to move would prefer to "pass" if this were allowed. Because of this, the strategy-stealing argument cannot be applied to chess. [ 5 ] It is not currently known whether White or Black can force a win with optimal play, or if both players can force a draw. However, virtually all students of chess consider White's first move to be an advantage, and White wins more often than Black in high-level games.
In Go passing is allowed. When the starting position is symmetrical (empty board, neither player has any points), this means that the first player could steal the second player's winning strategy simply by giving up the first move. Since the 1930s, however, [ 6 ] the second player is typically awarded some compensation points , which makes the starting position asymmetrical, and the strategy-stealing argument will no longer work.
An elementary strategy in the game is " mirror go ", where the second player performs moves which are diagonally opposite those of their opponent. This approach may be defeated using ladder tactics , ko fights , or by successfully competing for control of the board's central point.
The strategy-stealing argument shows that the second player cannot win, by means of deriving a contradiction from any hypothetical winning strategy for the second player. The argument is commonly employed in games where there can be no draw, by means of the law of the excluded middle . However, it does not provide an explicit strategy for the first player, and because of this it has been called non-constructive. [ 5 ] This raises the question of how to actually compute a winning strategy.
For games with a finite number of reachable positions, such as chomp , a winning strategy can be found by exhaustive search. [ 7 ] However, this might be impractical if the number of positions is large.
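For instance, a memoized exhaustive search over Chomp positions (represented here as tuples of column heights, an assumed encoding) confirms the first-player win on small boards without revealing a closed-form strategy.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def first_player_wins(heights):
    """True if the player to move wins from this Chomp position.

    `heights` is a tuple of remaining column heights (a staircase shape);
    cell (0, 0) is the poisoned square, and the player forced to take it loses.
    """
    # Only the poisoned square is left: the player to move must eat it and loses.
    if heights == (1,):
        return False
    # Try every legal bite (i, j), i.e. every remaining cell except the poisoned one.
    for i, h in enumerate(heights):
        for j in range(h):
            if (i, j) == (0, 0):
                continue
            # Biting at (i, j) truncates column i and every column to its right at row j.
            new = [min(hc, j) if c >= i else hc for c, hc in enumerate(heights)]
            new = tuple(hc for hc in new if hc > 0)
            if not first_player_wins(new):
                return True  # found a move that leaves the opponent in a losing position
    return False

# Exhaustive search confirms the (non-constructive) strategy-stealing result on small boards.
for rows, cols in [(2, 2), (3, 4), (4, 5)]:
    print(rows, "x", cols, "first player wins:", first_player_wins(tuple([rows] * cols)))
```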
In 2019, Greg Bodwin and Ofer Grossman proved that the problem of finding a winning strategy is PSPACE-hard in two kinds of games in which strategy-stealing arguments were used: the minimum poset game and the symmetric Maker-Maker game . [ 8 ] | https://en.wikipedia.org/wiki/Strategy-stealing_argument |
In mechanism design , a strategyproof (SP) mechanism is a game form in which each player has a weakly- dominant strategy , so that no player can gain by "spying" over the other players to know what they are going to play. When the players have private information (e.g. their type or their value to some item), and the strategy space of each player consists of the possible information values (e.g. possible types or values), a truthful mechanism is a game in which revealing the true information is a weakly-dominant strategy for each player. [ 1 ] : 244 An SP mechanism is also called dominant-strategy-incentive-compatible (DSIC) , [ 1 ] : 415 to distinguish it from other kinds of incentive compatibility .
A SP mechanism is immune to manipulations by individual players (but not by coalitions). In contrast, in a group strategyproof mechanism , no group of people can collude to misreport their preferences in a way that makes every member better off. In a strong group strategyproof mechanism, no group of people can collude to misreport their preferences in a way that makes at least one member of the group better off without making any of the remaining members worse off. [ 2 ]
Typical examples of SP mechanisms are majority voting between two alternatives, second-price ( Vickrey ) auctions, and the VCG mechanism.
Typical examples of mechanisms that are not SP are plurality voting between three or more alternatives and first-price auctions.
SP is also applicable in network routing . [ citation needed ] Consider a network as a graph where each edge (i.e. link) has an associated cost of transmission , privately known to the owner of the link. The owner of a link wishes to be compensated for relaying messages. As the sender of a message on the network, one wants to find the least cost path. There are efficient methods for doing so, even in large networks. However, there is one problem: the costs for each link are unknown. A naive approach would be to ask the owner of each link the cost, use these declared costs to find the least cost path, and pay all links on the path their declared costs. However, it can be shown that this payment scheme is not SP, that is, the owners of some links can benefit by lying about the cost. We may end up paying far more than the actual cost. It can be shown that given certain assumptions about the network and the players (owners of links), a variant of the VCG mechanism is SP. [ citation needed ]
There is a set X {\displaystyle X} of possible outcomes.
There are n {\displaystyle n} agents which have different valuations for each outcome. The valuation of agent i {\displaystyle i} is represented as a function: v i : X ⟶ R + {\displaystyle v_{i}:X\longrightarrow \mathbb {R} _{+}}
which expresses the value it has for each alternative, in monetary terms.
It is assumed that the agents have Quasilinear utility functions; this means that, if the outcome is x {\displaystyle x} and in addition the agent receives a payment p i {\displaystyle p_{i}} (positive or negative), then the total utility of agent i {\displaystyle i} is: u i := v i ( x ) + p i {\displaystyle u_{i}:=v_{i}(x)+p_{i}}
The vector of all value-functions is denoted by v {\displaystyle v} .
For every agent i {\displaystyle i} , the vector of all value-functions of the other agents is denoted by v − i {\displaystyle v_{-i}} . So v ≡ ( v i , v − i ) {\displaystyle v\equiv (v_{i},v_{-i})} .
A mechanism is a pair of functions: an O u t c o m e {\displaystyle Outcome} function, which takes as input the value-vector v {\displaystyle v} and returns an outcome x ∈ X {\displaystyle x\in X} ; and a P a y m e n t {\displaystyle Payment} function, which takes as input the value-vector v {\displaystyle v} and returns a vector of payments ( P a y m e n t 1 , … , P a y m e n t n ) {\displaystyle (Payment_{1},\dots ,Payment_{n})} , determining how much each player should receive (a negative payment means that the player should pay a positive amount).
A mechanism is called strategyproof if, for every player i {\displaystyle i} , for every value-vector of the other players v − i {\displaystyle v_{-i}} , and for every alternative report v i ′ {\displaystyle v_{i}'} :
v i ( O u t c o m e ( v i , v − i ) ) + P a y m e n t i ( v i , v − i ) ≥ v i ( O u t c o m e ( v i ′ , v − i ) ) + P a y m e n t i ( v i ′ , v − i ) {\displaystyle v_{i}(Outcome(v_{i},v_{-i}))+Payment_{i}(v_{i},v_{-i})\geq v_{i}(Outcome(v_{i}',v_{-i}))+Payment_{i}(v_{i}',v_{-i})}
It is helpful to have simple conditions for checking whether a given mechanism is SP or not. This subsection shows two simple conditions that are both necessary and sufficient.
If a mechanism with monetary transfers is SP, then it must satisfy the following two conditions, for every agent i {\displaystyle i} : [ 1 ] : 226
1. The payment to agent i {\displaystyle i} is a function of the chosen outcome and of the valuations of the other agents v − i {\displaystyle v_{-i}} - but not a direct function of the agent's own valuation v i {\displaystyle v_{i}} . Formally, there exists a price function P r i c e i {\displaystyle Price_{i}} , that takes as input an outcome x ∈ X {\displaystyle x\in X} and a valuation vector for the other agents v − i {\displaystyle v_{-i}} , and returns the payment for agent i {\displaystyle i} , such that for every v i , v i ′ , v − i {\displaystyle v_{i},v_{i}',v_{-i}} , if:
O u t c o m e ( v i , v − i ) = O u t c o m e ( v i ′ , v − i ) {\displaystyle Outcome(v_{i},v_{-i})=Outcome(v_{i}',v_{-i})}
then:
P a y m e n t i ( v i , v − i ) = P a y m e n t i ( v i ′ , v − i ) {\displaystyle Payment_{i}(v_{i},v_{-i})=Payment_{i}(v_{i}',v_{-i})}
PROOF: If P a y m e n t i ( v i , v − i ) > P a y m e n t i ( v i ′ , v − i ) {\displaystyle Payment_{i}(v_{i},v_{-i})>Payment_{i}(v_{i}',v_{-i})} then an agent with valuation v i ′ {\displaystyle v_{i}'} prefers to report v i {\displaystyle v_{i}} , since it gives him the same outcome and a larger payment; similarly, if P a y m e n t i ( v i , v − i ) < P a y m e n t i ( v i ′ , v − i ) {\displaystyle Payment_{i}(v_{i},v_{-i})<Payment_{i}(v_{i}',v_{-i})} then an agent with valuation v i {\displaystyle v_{i}} prefers to report v i ′ {\displaystyle v_{i}'} .
As a corollary, there exists a "price-tag" function, P r i c e i {\displaystyle Price_{i}} , that takes as input an outcome x ∈ X {\displaystyle x\in X} and a valuation vector for the other agents v − i {\displaystyle v_{-i}} , and returns the payment for agent i {\displaystyle i} . For every v i , v − i {\displaystyle v_{i},v_{-i}} , if:
O u t c o m e ( v i , v − i ) = x {\displaystyle Outcome(v_{i},v_{-i})=x}
then:
P a y m e n t i ( v i , v − i ) = P r i c e i ( x , v − i ) {\displaystyle Payment_{i}(v_{i},v_{-i})=Price_{i}(x,v_{-i})}
2. The selected outcome is optimal for agent i {\displaystyle i} , given the other agents' valuations. Formally:
O u t c o m e ( v i , v − i ) ∈ arg ⁡ max x [ v i ( x ) + P r i c e i ( x , v − i ) ] {\displaystyle Outcome(v_{i},v_{-i})\in \arg \max _{x}[v_{i}(x)+Price_{i}(x,v_{-i})]}
where the maximization is over all outcomes in the range of O u t c o m e ( ⋅ , v − i ) {\displaystyle Outcome(\cdot ,v_{-i})} .
PROOF: If there is another outcome x ′ = O u t c o m e ( v i ′ , v − i ) {\displaystyle x'=Outcome(v_{i}',v_{-i})} such that v i ( x ′ ) + P r i c e i ( x ′ , v − i ) > v i ( x ) + P r i c e i ( x , v − i ) {\displaystyle v_{i}(x')+Price_{i}(x',v_{-i})>v_{i}(x)+Price_{i}(x,v_{-i})} , then an agent with valuation v i {\displaystyle v_{i}} prefers to report v i ′ {\displaystyle v_{i}'} , since it gives him a larger total utility.
Conditions 1 and 2 are not only necessary but also sufficient: any mechanism that satisfies conditions 1 and 2 is SP.
PROOF: Fix an agent i {\displaystyle i} and valuations v i , v i ′ , v − i {\displaystyle v_{i},v_{i}',v_{-i}} . Denote:
x := O u t c o m e ( v i , v − i ) {\displaystyle x:=Outcome(v_{i},v_{-i})} , the outcome when the agent reports truthfully;
x ′ := O u t c o m e ( v i ′ , v − i ) {\displaystyle x':=Outcome(v_{i}',v_{-i})} , the outcome when the agent reports untruthfully.
By property 1, the utility of the agent when playing truthfully is:
v i ( x ) + P r i c e i ( x , v − i ) {\displaystyle v_{i}(x)+Price_{i}(x,v_{-i})}
and the utility of the agent when playing untruthfully is:
v i ( x ′ ) + P r i c e i ( x ′ , v − i ) {\displaystyle v_{i}(x')+Price_{i}(x',v_{-i})}
By property 2:
v i ( x ) + P r i c e i ( x , v − i ) ≥ v i ( x ′ ) + P r i c e i ( x ′ , v − i ) {\displaystyle v_{i}(x)+Price_{i}(x,v_{-i})\geq v_{i}(x')+Price_{i}(x',v_{-i})}
so it is a dominant strategy for the agent to act truthfully.
The actual goal of a mechanism is its O u t c o m e {\displaystyle Outcome} function; the payment function is just a tool to induce the players to be truthful. Hence, it is useful to know, given a certain outcome function, whether it can be implemented using a SP mechanism or not (this property is also called implementability ). [ citation needed ]
The monotonicity property is necessary for strategyproofness. [ citation needed ]
A single-parameter domain is a game in which each player i {\displaystyle i} gets a certain positive value v i {\displaystyle v_{i}} for "winning" and a value 0 for "losing". A simple example is a single-item auction, in which v i {\displaystyle v_{i}} is the value that player i {\displaystyle i} assigns to the item.
For this setting, it is easy to characterize truthful mechanisms. Begin with some definitions.
A mechanism is called normalized if every losing bid pays 0.
A mechanism is called monotone if, when a player raises his bid, his chances of winning (weakly) increase.
For a monotone mechanism, for every player i and every combination of bids of the other players, there is a critical value in which the player switches from losing to winning.
A normalized mechanism on a single-parameter domain is truthful if the following two conditions hold: [ 1 ] : 229–230 (1) the assignment function is monotone in each of the bids, and (2) every winning bid pays the critical value.
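As an illustration of this characterization (an example assumed here, not given in this text), a sealed-bid single-item auction that awards the item to the highest bidder and charges the critical value, i.e. the highest competing bid, satisfies both conditions; this is the second-price (Vickrey) auction.

```python
# Second-price (Vickrey) auction as a normalized, monotone single-parameter mechanism:
# the allocation is monotone (bidding more can only help you win), losers pay 0,
# and the winner pays the critical value at which they would switch from losing to winning.
def second_price_auction(bids):
    # Assumes at least two bidders; bid values are illustrative.
    winner = max(range(len(bids)), key=lambda i: bids[i])
    critical_value = max(b for i, b in enumerate(bids) if i != winner)
    payments = [0.0] * len(bids)
    payments[winner] = critical_value
    return winner, payments

bids = [12.0, 7.5, 9.0]
print(second_price_auction(bids))  # (0, [9.0, 0.0, 0.0])
```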
There are various ways to extend the notion of truthfulness to randomized mechanisms. They are, from strongest to weakest: universal truthfulness, strong stochastic-dominance truthfulness (strong-SD), lexicographic truthfulness (Lex), and weak stochastic-dominance truthfulness (weak-SD). [ 3 ] : 6–8
Universal implies strong-SD implies Lex implies weak-SD, and all implications are strict. [ 3 ] : Thm.3.4
For every constant ϵ > 0 {\displaystyle \epsilon >0} , a randomized mechanism is called truthful with probability 1 − ϵ {\displaystyle 1-\epsilon } if for every agent and for every vector of bids, the probability that the agent benefits by bidding non-truthfully is at most ϵ {\displaystyle \epsilon } , where the probability is taken over the randomness of the mechanism. [ 1 ] : 349
If the constant ϵ {\displaystyle \epsilon } goes to 0 when the number of bidders grows, then the mechanism is called truthful with high probability . This notion is weaker than full truthfulness, but it is still useful in some cases; see e.g. consensus estimate .
A new type of fraud that has become common with the abundance of internet-based auctions is false-name bids – bids submitted by a single bidder using multiple identifiers such as multiple e-mail addresses.
False-name-proofness means that there is no incentive for any of the players to issue false-name-bids. This is a stronger notion than strategyproofness. In particular, the Vickrey–Clarke–Groves (VCG) auction is not false-name-proof. [ 4 ]
False-name-proofness is importantly different from group strategyproofness because it assumes that an individual alone can simulate certain behaviors that normally require the collusive coordination of multiple individuals. [ citation needed ] [ further explanation needed ] | https://en.wikipedia.org/wiki/Strategyproofness |
Stratification has several usages in mathematics.
In mathematical logic , stratification is any consistent assignment of numbers to predicate symbols guaranteeing that a unique formal interpretation of a logical theory exists. Specifically, we say that a set of clauses of the form Q 1 ∧ ⋯ ∧ Q n ∧ ¬ Q n + 1 ∧ ⋯ ∧ ¬ Q n + m → P {\displaystyle Q_{1}\wedge \dots \wedge Q_{n}\wedge \neg Q_{n+1}\wedge \dots \wedge \neg Q_{n+m}\rightarrow P} is stratified if and only if
there is a stratification assignment S that fulfills the following conditions: if a predicate P is positively derived from a predicate Q (that is, Q is one of the positive literals Q 1 , … , Q n {\displaystyle Q_{1},\dots ,Q_{n}} in the body of a clause with head P), then the stratification number of P must be greater than or equal to the stratification number of Q; if a predicate P is derived from a negated predicate Q (one of Q n + 1 , … , Q n + m {\displaystyle Q_{n+1},\dots ,Q_{n+m}} ), then the stratification number of P must be strictly greater than the stratification number of Q.
The notion of stratified negation leads to a very effective operational semantics for stratified programs in terms of the stratified least fixpoint, that is obtained by iteratively applying the fixpoint operator to each stratum of the program, from the lowest one up.
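A minimal sketch of computing such a stratification assignment for a small Datalog-style program follows; the clause encoding and predicate names are illustrative assumptions.

```python
# Clauses are written as (head, positive_body_predicates, negated_body_predicates).
# We look for an assignment S with S(P) >= S(Q) for positive dependencies and
# S(P) > S(Q) for negated dependencies; if none exists the program is not stratifiable.
def stratify(clauses):
    preds = {p for head, pos, neg in clauses for p in [head, *pos, *neg]}
    strata = {p: 0 for p in preds}
    # Relax the constraints repeatedly; without a negative cycle the values stabilize
    # within |preds| rounds, so one extra unchanged round certifies a valid assignment.
    for _ in range(len(preds) + 1):
        changed = False
        for head, pos, neg in clauses:
            need = max([strata[q] for q in pos] + [strata[q] + 1 for q in neg] + [0])
            if strata[head] < need:
                strata[head] = need
                changed = True
        if not changed:
            return strata
    return None  # no consistent stratification exists

# reach is defined from edge; unreachable uses the negation of reach, so it sits one stratum higher.
program = [
    ("reach", ["edge"], []),
    ("reach", ["edge", "reach"], []),
    ("unreachable", ["node", "node"], ["reach"]),
]
print(stratify(program))  # e.g. {'edge': 0, 'node': 0, 'reach': 0, 'unreachable': 1}
```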
Stratification is not only useful for guaranteeing unique interpretation of Horn clause theories.
In New Foundations (NF) and related set theories, a formula ϕ {\displaystyle \phi } in the language of first-order logic with equality and membership is said to be stratified if and only if there is a function σ {\displaystyle \sigma } which sends each variable appearing in ϕ {\displaystyle \phi } (considered as an item of syntax) to a natural number (this works equally well if all integers are used) in such a way that any atomic formula x ∈ y {\displaystyle x\in y} appearing in ϕ {\displaystyle \phi } satisfies σ ( x ) + 1 = σ ( y ) {\displaystyle \sigma (x)+1=\sigma (y)} and any atomic formula x = y {\displaystyle x=y} appearing in ϕ {\displaystyle \phi } satisfies σ ( x ) = σ ( y ) {\displaystyle \sigma (x)=\sigma (y)} .
It turns out that it is sufficient to require that these conditions be satisfied only when both variables in an atomic formula are bound in the set abstract { x ∣ ϕ } {\displaystyle \{x\mid \phi \}} under consideration. A set abstract satisfying this weaker condition is said to be weakly stratified .
The stratification of New Foundations generalizes readily to languages with more predicates and with term constructions. Each primitive predicate needs to have specified required displacements between values of σ {\displaystyle \sigma } at its (bound) arguments in a (weakly) stratified formula. In a language with term constructions, terms themselves need to be assigned values under σ {\displaystyle \sigma } , with fixed displacements from the values of each of their (bound) arguments in a (weakly) stratified formula. Defined term constructions are neatly handled by (possibly merely implicitly) using the theory of descriptions: a term ( ι x . ϕ ) {\displaystyle (\iota x.\phi )} (the x such that ϕ {\displaystyle \phi } ) must be assigned the same value under σ {\displaystyle \sigma } as the variable x.
A formula is stratified if and only if it is possible to assign types to all variables appearing in the formula in such a way that it will make sense in a version TST of the theory of types described in the New Foundations article, and this is probably the best way to understand the stratification of New Foundations in practice.
The notion of stratification can be extended to the lambda calculus ; this is found in papers of Randall Holmes.
A motivation for the use of stratification is to address Russell's paradox , the antinomy considered to have undermined Frege 's central work Grundgesetze der Arithmetik (1902). Quine, Willard Van Orman (1963) [1961]. From a Logical Point of View (2nd ed.). New York: Harper & Row . p. 90. LCCN 61-15277 .
In singularity theory , there is a different meaning, of a decomposition of a topological space X into disjoint subsets each of which is a topological manifold (so that in particular a stratification defines a partition of the topological space). This is not a useful notion when unrestricted; but when the various strata are defined by some recognisable set of conditions (for example being locally closed ), and fit together manageably, this idea is often applied in geometry. Hassler Whitney and René Thom first defined formal conditions for stratification. See Whitney stratification and topologically stratified space .
See stratified sampling . | https://en.wikipedia.org/wiki/Stratification_(mathematics) |
Stratification in water is the formation in a body of water of relatively distinct and stable layers by density . It occurs in all water bodies where there is stable density variation with depth. Stratification is a barrier to the vertical mixing of water, which affects the exchange of heat, carbon, oxygen and nutrients. [ 1 ] Wind-driven upwelling and downwelling of open water can induce mixing of different layers through the stratification, and force the rise of denser cold, nutrient-rich, or saline water and the sinking of lighter warm or fresher water, respectively. Layers are based on water density: denser water remains below less dense water in stable stratification in the absence of forced mixing.
Stratification occurs in several kinds of water bodies, such as oceans , lakes , estuaries , flooded caves, aquifers and some rivers.
The driving force in stratification is gravity , which sorts adjacent arbitrary volumes of water by local density, operating on them by buoyancy and weight . A volume of water of lower density than the surroundings will have a resultant buoyant force lifting it upwards, and a volume with higher density will be pulled down by the weight which will be greater than the resultant buoyant forces, following Archimedes' principle . Each volume will rise or sink until it has either mixed with its surroundings through turbulence and diffusion to match the density of the surroundings, reaches a depth where it has the same density as the surroundings, or reaches the top or bottom boundary of the body of water, and spreads out until the forces are balanced and the body of water reaches its lowest potential energy.
The density of water, which is defined as mass per unit of volume, is a function of temperature ( T {\displaystyle T} ), salinity ( S {\displaystyle S} ) and pressure ( p {\displaystyle p} ), which is a function of depth and the density distribution of the overlaying water column, and is denoted as ρ ( S , T , p ) {\displaystyle \rho (S,T,p)} .
The dependence on pressure is not significant, since water is almost perfectly incompressible. [ 2 ] An increase in the temperature of the water above 4 °C causes expansion and the density will decrease. Water expands when it freezes, and a decrease in temperature below 4 °C also causes expansion and a decrease in density. An increase in salinity, the mass of dissolved solids, will increase the density.
Density is the decisive factor in stratification. It is possible for a combination of temperature and salinity to result in a density that is less or more than the effect of either one in isolation, so it can happen that a layer of warmer saline water is layered between a colder fresher surface layer and a colder more saline deeper layer.
A pycnocline is a layer in a body of water where the change in density is relatively large compared to that of other layers. The thickness of the pycnocline is not constant everywhere and depends on a variety of variables. [ 3 ]
Just as a pycnocline is a layer with a large change in density with depth, similar layers can be defined for a large change in temperature (a thermocline ) and in salinity (a halocline ). Since the density depends on both the temperature and the salinity, the pycno-, thermo-, and haloclines have a similar shape. [ 4 ]
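As a simple illustration, the pycnocline can be located numerically as the depth of the largest vertical density gradient; the sketch below uses a synthetic profile and a simplified linear equation of state rather than the full seawater equation of state.

```python
import numpy as np

# Synthetic profile: a warm mixed layer over cold deep water, constant salinity.
depth = np.linspace(0, 200, 201)                       # m
temperature = 8 + 14 / (1 + np.exp((depth - 50) / 8))  # degrees C
salinity = np.full_like(depth, 35.0)                   # PSU

# Simplified linear equation of state (illustrative coefficients, not TEOS-10).
rho0, alpha, beta = 1027.0, 2.0e-4, 7.6e-4
density = rho0 * (1 - alpha * (temperature - 10.0) + beta * (salinity - 35.0))

# The pycnocline is taken as the depth where the vertical density gradient is largest.
drho_dz = np.gradient(density, depth)
print("pycnocline depth ~", depth[np.argmax(drho_dz)], "m")
```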
Mixing is the breakdown of stratification. Once a body of water has reached a stable state of stratification, and no external forces or energy are applied, it will slowly mix by diffusion until homogeneous in density, temperature and composition, varying only due to minor effects of compressibility. This does not usually occur in nature, where there are a variety of external influences to maintain or disturb the equilibrium. Among these are heat input from the sun, which warms the upper volume, making it expand slightly and decreasing the density, so this tends to increase or stabilise stratification. Heat input from below, as occurs from tectonic plate spreading and vulcanism is a disturbing influence, causing heated water to rise, but these are usually local effects and small compared to the effects of wind, heat loss and evaporation from the free surface, and changes of direction of currents.
Wind has the effects of generating wind waves and wind currents , and increasing evaporation at the surface, which has a cooling effect and a concentrating effect on solutes, increasing salinity, both of which increase density. The movement of waves creates some shear in the water, which increases mixing in the surface water, as does the development of currents. Mass movement of water between latitudes is affected by coriolis forces , which impart motion across the current direction, and movement towards or away from a land mass or other topographic obstruction may leave a deficit or excess which lowers or raises the sea level locally, driving upwelling and downwelling to compensate. The major upwellings in the ocean are associated with the divergence of currents that bring deeper waters to the surface. There are at least five types of upwelling: coastal upwelling, large-scale wind-driven upwelling in the ocean interior, upwelling associated with eddies, topographically associated upwelling, and broad-diffusive upwelling in the ocean interior. Downwelling also occurs in anti-cyclonic regions of the ocean where warm rings spin clockwise, causing surface convergence. When these surface waters converge, the surface water is pushed downwards. [ 5 ] These mixing effects destabilise and reduce stratification.
Ocean stratification is the natural separation of an ocean's water into horizontal layers by density , and occurs in all ocean basins. Denser water is below lighter water, representing a stable stratification . The pycnocline is the layer where the rate of change in density is largest.
Ocean stratification is generally stable because warmer water is less dense than colder water, and most heating is from the sun, which directly affects only the surface layer. Stratification is reduced by mechanical mixing induced by wind, but reinforced by convection (warm water rising, cold water sinking). Stratified layers act as a barrier to the mixing of water, which impacts the exchange of heat, carbon, oxygen and other nutrients. [ 1 ] The surface mixed layer is the uppermost layer in the ocean and is well mixed by mechanical (wind) and thermal (convection) effects.
Due to wind driven movement of surface water away from and towards land masses, upwelling and downwelling can occur, breaking through the stratification in those areas, where cold nutrient-rich water rises and warm water sinks, respectively, mixing surface and bottom waters.
The thickness of the thermocline is not constant everywhere and depends on a variety of variables.
Between 1960 and 2018, upper ocean stratification increased between 0.7 and 1.2% per decade due to climate change. [ 1 ] This means that the differences in density of the layers in the oceans increase, leading to larger mixing barriers and other effects. [ clarification needed ] Global upper-ocean stratification has continued its increasing trend in 2022. [ 7 ] The southern oceans (south of 30°S) experienced the strongest rate of stratification since 1960, followed by the Pacific, Atlantic, and the Indian Oceans. [ 1 ] Increasing stratification is predominantly affected by changes in ocean temperature ; salinity only plays a role locally. [ 1 ]
An estuary is a partially enclosed coastal body of brackish water with one or more rivers or streams flowing into it, and with a free connection to the open sea . [ 8 ]
The residence time of water in an estuary is dependent on the circulation within the estuary that is driven by density differences due to changes in salinity and temperature. Less dense freshwater floats over saline water and warmer water floats above colder water for temperatures greater than 4 °C. As a result, near-surface and near-bottom waters can have different trajectories, resulting in different residence times.
Vertical mixing determines how much the salinity and temperature will change from the top to the bottom, profoundly affecting water circulation. Vertical mixing occurs at three levels: from the surface downward by wind forces, the bottom upward by turbulence generated at the interface between the estuarine and oceanic water masses, and internally by turbulent mixing caused by the water currents which are driven by the tides, wind, and river inflow. [ 9 ]
Different types of estuarine circulation result from vertical mixing:
Salt wedge estuaries are characterized by a sharp density interface between the upper layer of freshwater and the bottom layer of saline water . River water dominates in this system, and tidal effects have a small role in the circulation patterns. The freshwater floats on top of the seawater and gradually thins as it moves seaward. The denser seawater moves along the bottom up the estuary forming a wedge shaped layer and becoming thinner as it moves landward. As a velocity difference develops between the two layers, shear forces generate internal waves at the interface, mixing the seawater upward with the freshwater. [ 10 ] An example is the Mississippi estuary. [ citation needed ]
As tidal forcing increases, the control of river flow on the pattern of circulation in the estuary becomes less dominating. Turbulent mixing induced by the current creates a moderately stratified condition. Turbulent eddies mix the water column, creating a mass transfer of freshwater and seawater in both directions across the density boundary. Therefore, the interface separating the upper and lower water masses is replaced with a water column with a gradual increase in salinity from surface to bottom. A two layered flow still exists however, with the maximum salinity gradient at mid depth. Partially stratified estuaries are typically shallow and wide, with a greater width to depth ratio than salt wedge estuaries. [ 10 ] An example is the Thames . [ citation needed ]
In vertically homogeneous estuaries, tidal flow is greater relative to river discharge, resulting in a well mixed water column and the disappearance of the vertical salinity gradient. The freshwater-seawater boundary is eliminated due to the intense turbulent mixing and eddy effects. The width to depth ratio of vertically homogeneous estuaries is large, with the limited depth creating enough vertical shearing on the seafloor to mix the water column completely. If tidal currents at the mouth of an estuary are strong enough to create turbulent mixing, vertically homogeneous conditions often develop. [ 10 ]
Fjords are usually examples of highly stratified estuaries; they are basins with sills and have freshwater inflow that greatly exceeds evaporation. Oceanic water is imported in an intermediate layer and mixes with the freshwater. The resulting brackish water is then exported into the surface layer. A slow import of seawater may flow over the sill and sink to the bottom of the fjord (deep layer), where the water remains stagnant until flushed by an occasional storm. [ 9 ]
Inverse estuaries occur in dry climates where evaporation greatly exceeds the inflow of freshwater. A salinity maximum zone is formed, and both riverine and oceanic water flow close to the surface towards this zone. [ 11 ] This water is pushed downward and spreads along the bottom in both the seaward and landward direction. The maximum salinity can reach extremely high values and the residence time can be several months. In these systems, the salinity maximum zone acts like a plug, inhibiting the mixing of estuarine and oceanic waters so that freshwater does not reach the ocean. The high salinity water sinks seaward and exits the estuary. [ 12 ] [ 13 ]
Lake stratification, generally a form of thermal stratification caused by density variations due to water temperature, is the formation of separate and distinct layers of water during warm weather, and sometimes when frozen over. Typically, stratified lakes show three distinct layers: the epilimnion , comprising the top warm layer; the thermocline (or metalimnion ), the middle layer, which may change depth throughout the day; and the colder hypolimnion , extending to the floor of the lake. [ citation needed ]
The thermal stratification of lakes is a vertical isolation of parts of the water body from mixing caused by variation in the temperature at different depths in the lake, and is due to the density of water varying with temperature. [ 14 ] Cold water is denser than warm water of the same salinity, and the epilimnion generally consists of water that is not as dense as the water in the hypolimnion. [ 15 ] However, the temperature of maximum density for freshwater is 4 °C. In temperate regions where lake water warms up and cools through the seasons, a cyclical pattern of overturn occurs that is repeated from year to year as the water at the top of the lake cools and sinks (see stable and unstable stratification ). For example, in dimictic lakes the lake water turns over during the spring and the fall. This process occurs more slowly in deeper water and as a result, a thermal bar may form. [ 14 ] If the stratification of water lasts for extended periods, the lake is meromictic .
In shallow lakes, stratification into epilimnion, metalimnion, and hypolimnion often does not occur, as wind or cooling causes regular mixing throughout the year. These lakes are called polymictic . There is not a fixed depth that separates polymictic and stratifying lakes, as apart from depth, this is also influenced by turbidity, lake surface area, and climate. [ 16 ] The lake mixing regime (e.g. polymictic, dimictic, meromictic) [ 17 ] describes the yearly patterns of lake stratification that occur in most years. However, short-term events can influence lake stratification as well. Heat waves can cause periods of stratification in otherwise mixed, shallow lakes, [ 18 ] while mixing events, such as storms or large river discharge, can break down stratification. [ 19 ] Recent research suggests that seasonally ice-covered dimictic lakes may be described as "cryostratified" or "cryomictic" according to their wintertime stratification regimes. [ 19 ] Cryostratified lakes exhibit inverse stratification near the ice surface and have depth-averaged temperatures near 4 °C, while cryomictic lakes have no under-ice thermocline and have depth-averaged winter temperatures closer to 0 °C. [ 19 ]
An anchialine system is a landlocked body of water with a subterranean connection to the ocean . Depending on its formation, these systems can exist in one of two primary forms: pools or caves. The primary differentiating characteristics between pools and caves is the availability of light; cave systems are generally aphotic while pools are euphotic . The difference in light availability has a large influence on the biology of a given system. Anchialine systems are a feature of coastal aquifers which are density stratified, with water near the surface being fresh or brackish , and saline water intruding from the coast at depth. Depending on the site, it is sometimes possible to access the deeper saline water directly in the anchialine pool, or sometimes it may be accessible by cave diving . [ 20 ]
Anchialine systems are extremely common worldwide especially along neotropical coastlines where the geology and aquifer systems are relatively young, and there is minimal soil development. Such conditions occur notably where the bedrock is limestone or recently formed volcanic lava . Many anchialine systems are found on the coastlines of the island of Hawaii , the Yucatán Peninsula , South Australia , the Canary Islands , Christmas Island , and other karst and volcanic systems. [ 20 ]
Karst caves which drain into the sea may have a halocline separating the fresh water from the seawater underneath which can be visible even when both layers are clear due to the difference in refractive indices. | https://en.wikipedia.org/wiki/Stratification_(water) |
In many fluids, the flow varies with density and depends upon gravity. In stable stratification , the fluid with lower density lies above the fluid with higher density. Stratified flows are very common; the Earth's ocean and its atmosphere are prominent examples. [ 1 ]
A stratified fluid may be defined as a fluid with density variations in the vertical direction. For example, air and water are both fluids, and if they are considered together they can be seen as a stratified fluid system. Density variations in the atmosphere profoundly affect the motion of water and air. Wave phenomena in airflow over mountains and the occurrence of smog are examples of stratification effects in the atmosphere.
If a fluid system in which density decreases with height is disturbed, gravity and friction act to restore the undisturbed conditions; a fluid therefore tends to be stable if its density decreases with height. [ 2 ]
It is known that the subcritical flow of a stratified fluid past a barrier produces motions upstream of the barrier. Subcritical flow may be defined as a flow for which the Froude number based on channel height is less than 1/π, so that one or more stationary lee waves would be present. Some of the upstream motions do not decay with distance upstream. These 'columnar' modes have zero frequency and a sinusoidal structure in the direction of the density gradient; they effectively lead to a continuous change in upstream conditions. If the barrier is two-dimensional (i.e. of infinite extent in the direction perpendicular to the upstream flow and the direction of the density gradient), inviscid theories show that the length of the upstream region affected by the columnar modes increases without bound as t → ∞. Non-zero viscosity (and/or diffusivity) will, however, limit the region affected, since the wave amplitudes will then slowly decay. [ 3 ]
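In the standard notation for a uniformly stratified channel flow, this subcritical condition can be written compactly; the symbols U (upstream flow speed), N (buoyancy frequency) and H (channel height) are assumed standard usage here rather than definitions given in the text above:

```latex
% Sketch in assumed standard notation: U = upstream flow speed,
% N = buoyancy (Brunt-Vaisala) frequency, H = channel height.
F = \frac{U}{NH} < \frac{1}{\pi}
\quad\Longleftrightarrow\quad
U < c_1 = \frac{NH}{\pi}
```

so that at least the gravest internal wave mode can remain stationary against the oncoming flow as a lee wave.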
Turbulent mixing in stratified flows is described by the mixing efficiency. This mixing efficiency compares the energy used in irreversible mixing, which enlarges the minimum gravitational potential energy that can be held in the density field, to the entire change in mechanical energy during the mixing process. It can be defined either as an integral quantity, calculated between inert initial and final conditions, or as a fraction of the energy flux to mixing and the power into the system. These two definitions can give different values if the system is not in steady state. Mixing efficiency is especially important in oceanography, as mixing is required to maintain the overall stratification in a steady-state ocean. The entire amount of mixing in the oceans is equal to the product of the power input to the ocean and the mean mixing efficiency. [ 4 ]
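A common flux-based form of this definition is sketched below; the symbols are assumptions of standard oceanographic usage (B for the rate of irreversible increase of background potential energy, ε for the rate of viscous dissipation of kinetic energy) rather than notation taken from the text:

```latex
% Flux-based mixing efficiency (sketch): energy going into irreversible
% mixing divided by the total mechanical energy expended.
\eta \;=\; \frac{B}{B + \epsilon}
```

The integral form of the definition replaces these fluxes by the corresponding changes in energy between the initial and final states.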
Wallis and Dobson (1973) compare their criterion with transition observations that they call “slugging” and note that, empirically, the stability limit is described by j ∗ = 0.5 α 3 / 2 {\displaystyle j^{*}=0.5\alpha ^{3/2}} Here α = ( h G H ) {\displaystyle \alpha ={\left({\frac {h_{G}}{H}}\right)}} and j ∗ = [ U G α ρ G g H ( ρ L − ρ G ) ] {\displaystyle j^{*}=\left[{\frac {U_{G}\alpha {\sqrt {\rho _{G}}}}{\sqrt {gH(\rho _{L}-\rho _{G})}}}\right]\quad } where H is the channel height and U, h and ρ denote the mean velocity, holdup and density respectively. The subscripts G and L stand for gas and liquid, and g denotes gravitational acceleration.
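As an illustration only, the sketch below evaluates these dimensionless groups and the Wallis–Dobson stability limit for a hypothetical air–water channel; the numerical values are illustrative and not taken from the text:

```python
import math

def wallis_dobson_stable(u_g, h_g, H, rho_g, rho_l, g=9.81):
    """Evaluate the Wallis-Dobson (1973) stratified-flow stability limit.

    u_g   : mean gas velocity [m/s]
    h_g   : height of the gas layer [m]
    H     : channel height [m]
    rho_g : gas density [kg/m^3]
    rho_l : liquid density [kg/m^3]
    Returns (j_star, limit, stable).
    """
    alpha = h_g / H                                   # gas holdup fraction
    j_star = u_g * alpha * math.sqrt(rho_g) / math.sqrt(g * H * (rho_l - rho_g))
    limit = 0.5 * alpha ** 1.5                        # j* = 0.5 * alpha^(3/2)
    return j_star, limit, j_star < limit

# Hypothetical air-water case in a 0.1 m high channel, half filled with liquid.
j_star, limit, stable = wallis_dobson_stable(u_g=5.0, h_g=0.05, H=0.1,
                                             rho_g=1.2, rho_l=1000.0)
print(f"j* = {j_star:.3f}, limit = {limit:.3f}, stratified flow stable: {stable}")
```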
Taitel and Dukler (1976) [TD] extended the Kelvin–Helmholtz (KH) analysis, first to the case of a finite wave on a flat liquid sheet in horizontal channel flow and then to finite waves on a stratified liquid in an inclined pipe. In order to apply this criterion they need to provide the equilibrium liquid level hL (or liquid holdup). They calculate h L {\displaystyle h_{L}} through momentum balances in the gas and liquid phases (two-fluid models) in which shear stresses are evaluated using conventional friction factor definitions. In two-fluid models, the pipe geometry is taken into consideration through the wetted perimeters of the gas and liquid phases, including the gas–liquid interface. This approach assumes that the wall resistance of the liquid is similar to that for open-channel flow and that of the gas to closed-duct flow. This geometric analysis is general and can be applied not only to round pipes, but to any other possible shape. In this method, each pair of superficial gas and liquid velocities corresponds to a distinctive value of h L {\displaystyle h_{L}} .
According to [TD], a finite wave will grow in a horizontal rectangular channel of height H, when j ∗ > ( 1 − h L H ) α 3 / 2 {\displaystyle j^{*}>{\left(1-{\frac {h_{L}}{H}}\right)\alpha ^{3/2}}} or U G > ( 1 − h L H ) ( ( ρ L − ρ G ) g A G ρ G d A L / d h L ) 1 / 2 {\displaystyle U_{G}>{\left(1-{\frac {h_{L}}{H}}\right)}{\left({\frac {(\rho _{L}-\rho _{G})gA_{G}}{\rho _{G}dA_{L}/dh_{L}}}\right)^{1/2}}} for an inclined pipe, where D is the pipe diameter and A is the cross-sectional area. Note that ( 1 − h L H ) = α {\displaystyle {\left(1-{\frac {h_{L}}{H}}\right)}=\alpha } . If ( h L H ) = 0.5 {\displaystyle {\left({\frac {h_{L}}{H}}\right)}=0.5} , then ( 1 − h L H ) = 0.5 {\displaystyle {\left(1-{\frac {h_{L}}{H}}\right)}=0.5} , which is compatible with the result of Wallis and Dobson (1973). The overall [TD] procedure results in only a weak dependence on viscosity, through the calculation of h L {\displaystyle h_{L}} .
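A minimal sketch of the horizontal-channel form of this growth condition is shown below; it takes h_L/H as a given input rather than computing it from the full [TD] two-fluid momentum balance, and the numbers are hypothetical:

```python
def taitel_dukler_wave_growth(j_star, h_l_over_H):
    """Check the Taitel-Dukler finite-wave growth condition for a horizontal
    rectangular channel: a wave grows when j* > (1 - h_L/H) * alpha^(3/2),
    where alpha = 1 - h_L/H is the gas holdup fraction."""
    alpha = 1.0 - h_l_over_H
    threshold = (1.0 - h_l_over_H) * alpha ** 1.5
    return j_star > threshold, threshold

# At h_L/H = 0.5 the threshold reduces to 0.5 * alpha^(3/2),
# i.e. the Wallis-Dobson (1973) stability limit.
grows, threshold = taitel_dukler_wave_growth(j_star=0.2, h_l_over_H=0.5)
print(f"threshold = {threshold:.3f}, wave grows: {grows}")
```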
[TD] also identify two kinds of stratified flow: stratified smooth (SS) and stratified wavy (SW). These waves, as they say, “are produced by the gas flow under conditions where the velocity of gas is enough to cause waves to form, but slower than that needed for the quick wave growth which leads transition to intermittent or annular flow.” [TD] suggest a criterion to predict the transition from stratified smooth to stratified wavy flow, based on Jeffreys’ (1925, 1926) ideas. [ 5 ]
Density stratification has a significant effect on diffusion in fluids. For example, smoke coming from a chimney diffuses turbulently if the Earth's atmosphere is not stably stratified. When the lower air is in a stable condition, as in the morning or early evening, the emitted smoke flattens into a long, thin layer. Strong stratification, sometimes called an inversion, restricts contaminants to the lower regions of the Earth's atmosphere and causes many current air-pollution problems. [ 6 ] | https://en.wikipedia.org/wiki/Stratified_flows |
In mathematics, especially in topology, a stratified space is a topological space that admits or is equipped with a stratification , a decomposition into subspaces, which are nice in some sense (e.g., smooth or flat [ 1 ] ).
A basic example is a subset of a smooth manifold that admits a Whitney stratification . But there is also an abstract stratified space such as a Thom–Mather stratified space .
On a stratified space, a constructible sheaf can be defined as a sheaf that is locally constant on each stratum.
Among several other ideas, Grothendieck's Esquisse d’un programme considers (or proposes) a stratified space with what he calls the tame topology .
Mather gives the following definition of a stratified space. A prestratification on a topological space X is a partition of X into subsets (called strata) such that (a) each stratum is locally closed , (b) it is locally finite and (c) (axiom of frontier) if two strata A , B are such that the closure of A intersects B , then B lies in the closure of A . A stratification on X is a rule that assigns to a point x in X a set germ S x {\displaystyle S_{x}} at x of a closed subset of X that satisfies the following axiom: for each point x in X , there exists a neighborhood U of x and a prestratification of U such that for each y in U , S y {\displaystyle S_{y}} is the set germ at y of the stratum of the prestratification on U containing y . [ citation needed ]
A stratified space is then a topological space equipped with a stratification. [ citation needed ]
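The axiom of frontier in the definition above can be stated compactly; here A and B denote two strata of the prestratification:

```latex
% Axiom of frontier (sketch): if the closure of a stratum A meets another
% stratum B, then B is entirely contained in the closure of A.
\overline{A} \cap B \neq \varnothing
\;\Longrightarrow\;
B \subseteq \overline{A}
```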
In MacPherson's stratified pseudomanifolds , the strata are the differences X i+1 − X i between sets in the filtration. There is also a local conical condition: there must be an almost smooth atlas where locally each little open set looks like the product of two factors R n × c(L) , a Euclidean factor and the topological cone of a space L . Classically, this is the point where the definition becomes obscure, since L is required to be a stratified pseudomanifold. The logical problem is avoided by an inductive trick which distinguishes the objects L and X . [ citation needed ]
The changes of charts, or cocycles, have no conditions in MacPherson's original context. Pflaum asks them to be smooth, while in the Thom–Mather context they must preserve the above decomposition; they have to be smooth in the Euclidean factor and preserve the conical radius. [ citation needed ] | https://en.wikipedia.org/wiki/Stratified_space |
Stratigraphic paleobiology is a branch of geology that is closely related to paleobiology , sequence stratigraphy and sedimentology . Stratigraphic paleobiology studies how the fossil record is altered by sedimentological processes and how this affects biostratigraphy and paleobiological interpretations of the fossil record. [ 1 ]
Patzkowsky and Holland (2012) define stratigraphic paleobiology as follows: [ 2 ]
"[Stratigraphic paleobiology] is built on the premise that the distribution of fossil taxa in time and space is controlled not only by processes of evolution, ecology, and environmental change, but also by the stratigraphic processes that govern where and when sediment that might contain fossils is deposited and preserved. Teasing apart the effects of these two suites of processes to understand the history of life on Earth is the essence of stratigraphic paleobiology."
Large parts of stratigraphic paleobiology rely on sequence stratigraphy . This is because, within a sequence, many parameters such as depositional conditions, (non)preservation, and facies change deterministically. This sequence stratigraphic background alone, without any changes in ecology or any evolutionary processes, creates a baseline of constant change in the number of fossils and taxa that are preserved. [ 3 ] One example of this is maximum flooding surfaces , which commonly display large accumulations of shells and an increased number of first fossil occurrences and last fossil occurrences. This is, however, not necessarily linked to any change in ecology or an extinction event, but can be generated by the low deposition rates during the maximum flooding surface alone. | https://en.wikipedia.org/wiki/Stratigraphic_paleobiology |
The Stratingh Institute for Chemistry is a research institute of the Faculty of Science and Engineering of the University of Groningen ( The Netherlands ). It is named after Sibrandus Stratingh, who is known for being the inventor of the first battery powered electric car . [ 1 ] As of 2020, about 150 people (from over 30 nationalities) are employed within the Stratingh Institute for Chemistry. The staff members include Ben Feringa , who won the 2016 Nobel Prize in Chemistry "for the design and synthesis of molecular machines ", [ 2 ] Nathalie Katsonis and Sijbren Otto .
The institute is currently located on the Zernike Campus in Groningen, in the Feringa Building and Linnaeusborg. [ 3 ]
The research carried out within the institute falls within the following research areas: | https://en.wikipedia.org/wiki/Stratingh_Institute_for_Chemistry |
Stratocladistics is a technique in phylogenetics of making phylogenetic inferences using both geological and morphobiological data. It follows many of the same rules as cladistics , using Bayesian logic to quantify how good a phylogenetic hypothesis is in terms of debt and parsimony . However, in addition to the morphological debt that is used to determine phylogenetic dissimilarities in cladistics, there is also stratigraphic debt which adds the dimension of time to the equation.
Although stratocladistics has been viewed with suspicion by some workers, it represents a total evidence approach that has some advantages over traditional cladistic approaches. For example, stratocladistics has been shown to outperform simple parsimony in tests based on simulated data and stratocladistics has better resolution than simple cladistics, with fewer equally parsimonious trees than in a basic cladistic analysis. [ 1 ]
"StrataPhy" . — software for stratocladistic reconstructions | https://en.wikipedia.org/wiki/Stratocladistics |
Stratos Global Corporation was a Canada -based telecommunications company founded in 1985, mainly serving maritime, government and oil and gas markets around the world. It was acquired by Inmarsat in 2009.
Stratos offers mobile and fixed satellite , microwave and wireless services, including Inmarsat, Iridium satellite constellation , Globalstar , HughesNet , MSAT , and VSAT . It caters to government agencies, military forces, NGOs, first responders, and diverse markets such as aeronautical, energy and natural resources, media, maritime, construction/engineering, and recreational users. [ 1 ]
Its corporate headquarters were located in Bethesda, Maryland and had a registered office in St John's, Newfoundland , Canada. It provided products and services through offices worldwide, as well as through a global network of authorized partners.
AmosConnect is Stratos' PC -based shipboard computer software platform that provides narrowband satellite communications, email, fax, telex, GSM text and interoffice communication for those at sea. [ 2 ] [ 3 ]
AmosConnect 7.4.27, released in December 2008, [ 4 ] is the latest supported version. [ 5 ] AmosConnect 8.4.0.1, released in November 2013, [ 6 ] was discontinued on 30 June 2017 [ 3 ] [ 5 ] with the company recommending its customers downgrade to version 7. [ 7 ] [ 8 ]
In October 2017 it was reported in the news media that all versions of AmosConnect 8 [ 6 ] suffered from a severe security vulnerability and had an intentional backdoor that could potentially expose a ship's data whilst at sea. [ 7 ] [ 8 ] Security consulting firm IOActive had informed Inmarsat of the issues in October 2016. [ 7 ] [ 8 ]
In 1997, Stratos acquired IDB Mobile Communications. In 1998, Stratos acquired NovaNet Communications, American Mobile Satellite Corp., and Teleglobe Canada Inc. In 2000, Stratos acquired Shell Offshore Services, Seven Seas Communications, and Rig Telephones Inc (Datacomm). [ 9 ]
On April 15, 2009, Stratos was acquired by Inmarsat but continued to operate as a separate entity under the Stratos name until 2012. [ 2 ]
| https://en.wikipedia.org/wiki/Stratos_Global_Corporation |
Stratospheric aerosol injection (SAI) is a proposed method of solar geoengineering (or solar radiation modification) to reduce global warming . This would introduce aerosols into the stratosphere to create a cooling effect via global dimming and increased albedo , which occurs naturally from volcanic winter . [ 1 ] It appears that stratospheric aerosol injection, at a moderate intensity, could counter most changes to temperature and precipitation, take effect rapidly, have low direct implementation costs, and be reversible in its direct climatic effects. [ 2 ] The Intergovernmental Panel on Climate Change concludes that it "is the most-researched [solar geoengineering] method" and that it could limit warming to below 1.5 °C (2.7 °F). [ 3 ] However, like other solar geoengineering approaches, stratospheric aerosol injection would do so imperfectly and other effects are possible, [ 4 ] particularly if used in a suboptimal manner. [ 5 ]
Various forms of sulfur have been shown to cool the planet after large volcanic eruptions. [ 6 ] Re-entering satellites are polluting the stratosphere. [ 7 ] However, as of 2021, there has been little research and existing aerosols in the stratosphere are not well understood. [ 8 ] So there is no leading candidate material. Alumina , calcite and salt are also under consideration. [ 9 ] [ 10 ] The leading proposed method of delivery is custom aircraft. [ 11 ]
An aerosol is a suspension of fine solid particles or liquid droplets in air or another gas . [ 12 ] Aerosols can be generated from natural or human causes . The term aerosol commonly refers to the mixture of particulates in air, and not to the particulate matter alone. [ 13 ] Examples of natural aerosols are fog , mist or dust . Examples of human caused aerosols include particulate air pollutants , mist from the discharge at hydroelectric dams , irrigation mist, perfume from atomizers , smoke , dust , sprayed pesticides , and medical treatments for respiratory illnesses. [ 14 ]
Sources of natural aerosols include oceans, volcanoes, deserts, and living organisms. [ 17 ] [ 18 ] The ocean produces aerosols in two main ways. First, when wind blows over waves, it creates spray made up mostly of sea salt . Second, tiny ocean organisms, such as plankton , release dimethyl sulfide and other gases into the air which, in turn, react with other substances in the atmosphere, including water vapor , to form sulfate ( sulfuric acid ) aerosols. Both sea salt and sulfate aerosols help to form clouds by acting as “ seeds ” for water droplets, affecting cloud formation and Earth's energy balance. While these ocean aerosols are widespread, there is still uncertainty about exactly how much they affect the atmosphere.
Volcanic eruptions release ash and gases into the air. Although the ash falls out of the atmosphere relatively quickly, sulfur dioxide can rise into the stratosphere, where it reacts with water vapor to form long-lived sulfate aerosols in the upper atmosphere. These reflect sunlight and temporarily cool the planet. After a large eruption, these particles can stay in the air for a year or more.
Natural aerosols cool the Earth. [ 19 ] When large volcanic eruptions occur, they can cause short-term global cooling of around half a degree or more, depending on the size of the eruption. For example, the eruption of Mount Pinatubo in 1991 caused global temperatures to drop by about 0.5 degrees Celsius for up to three years. [ 20 ] These events have played an important role in past climate variability .
Human activities, especially fossil fuel combustion and biomass burning, emit aerosols directly and indirectly via gases that react in the atmosphere. [ 21 ] Common anthropogenic aerosols include sulfates, nitrates, black carbon (soot), and organic carbon. Among these, sulfates are the dominant cooling agent. Organic carbon aerosols also reflect light, while black carbon absorbs it, warming the air and darkening snow and ice.
The net effect of anthropogenic aerosols has been to mask global warming. From 1850 to 2014, they reduced global average surface temperature by about 0.66°C. This cooling is stronger in the more populous Northern Hemisphere. This uneven effect has altered rainfall patterns, including a weakening of tropical monsoons.
Air pollution regulations have reduced sulfate emissions in Europe and North America since the 1980s, and more recently in China. These reductions have improved air quality but diminish the cooling influence of aerosols, contributing to accelerated warming.
Mikhail Budyko is believed to have been the first, in 1974, to put forth the concept of artificial solar radiation management with stratospheric sulfate aerosols if global warming ever became a pressing issue. [ 22 ] Such controversial climate engineering proposals for global dimming have sometimes been called a "Budyko Blanket". [ 23 ] [ 24 ] [ 25 ]
In 2009, a Russian team tested aerosol formation in the lower troposphere using helicopters. [ 26 ] In 2015, David Keith and Gernot Wagner described a potential field experiment, the Stratospheric Controlled Perturbation Experiment (SCoPEx), using stratospheric calcium carbonate [ 27 ] injection, [ 28 ] but as of October 2020 the time and place had not yet been determined. [ 29 ] [ 30 ] SCoPEx is in part funded by Bill Gates . [ 31 ] [ 32 ] Sir David King , a former chief scientific adviser to the government of the United Kingdom, stated that SCoPEX and Gates' plans to dim the sun with calcium carbonate could have disastrous effects. [ 33 ]
In 2012, the Bristol University -led Stratospheric Particle Injection for Climate Engineering (SPICE) project planned a limited field test to evaluate a potential delivery system. The group received support from the EPSRC , NERC and STFC to the tune of £2.1 million [ 34 ] and was one of the first UK projects aimed at providing evidence-based knowledge about solar radiation management . [ 34 ] Although the field testing was cancelled, the project panel decided to continue the lab-based elements of the project. [ 35 ] Furthermore, a consultation exercise was undertaken with members of the public in a parallel project by Cardiff University , with specific exploration of attitudes to the SPICE test. [ 36 ] This research found that almost all of the participants in the poll were willing to allow the field trial to proceed, but very few were comfortable with the actual use of stratospheric aerosols. A campaign opposing geoengineering led by the ETC Group drafted an open letter calling for the project to be suspended until international agreement is reached, [ 37 ] specifically pointing to the upcoming convention of parties to the Convention on Biological Diversity in 2012. [ 38 ]
Various forms of sulfur were proposed as the injected substance, as this is in part how volcanic eruptions cool the planet. [ 6 ] Precursor gases such as sulfur dioxide and hydrogen sulfide have been considered. According to estimates, "one kilogram of well placed sulfur in the stratosphere would roughly offset the warming effect of several hundred thousand kilograms of carbon dioxide." [ 39 ] One study calculated the impact of injecting sulfate particles, or aerosols , every one to four years into the stratosphere in amounts equal to those lofted by the volcanic eruption of Mount Pinatubo in 1991 , [ 40 ] but did not address the many technical and political challenges involved in potential solar geoengineering efforts. [ 41 ] Use of gaseous sulfuric acid appears to reduce the problem of aerosol growth. [ 11 ] Materials such as photophoretic particles, metal oxides (as in Welsbach seeding , and titanium dioxide ), and diamond are also under consideration. [ 42 ] [ 43 ] [ 44 ]
Various techniques have been proposed for delivering the aerosol or precursor gases. [ 1 ] The required altitude to enter the stratosphere is the height of the tropopause , which varies from 11 kilometres (6.8 mi; 36,000 ft) at the poles to 17 kilometres (11 mi; 56,000 ft) at the equator.
The latitude and distribution of injection locations has been discussed by various authors. While a near-equatorial injection regime will allow particles to enter the rising leg of the Brewer-Dobson circulation , several studies have concluded that a broader, and higher-latitude, injection regime will reduce injection mass flow rates and/or yield climatic benefits. [ 49 ] [ 50 ] Concentration of precursor injection in a single longitude appears to be beneficial, with condensation onto existing particles reduced, giving better control of the size distribution of aerosols resulting. [ 51 ] The long residence time of carbon dioxide in the atmosphere may require a millennium-timescale commitment to aerosol injection [ 52 ] if aggressive emissions abatement is not pursued simultaneously.
Welsbach seeding is a patented solar radiation modification method, involving seeding the stratosphere with small (10 to 100 micron ) metal oxide particles ( thorium dioxide , aluminium oxide ). The purpose of the Welsbach seeding would be to "(reduce) atmospheric warming due to the greenhouse effect resulting from a greenhouse gases layer," by converting radiative energy at near- infrared wavelengths into radiation at far-infrared wavelengths, permitting some of the converted radiation to escape into space, thus cooling the atmosphere. The seeding as described would be performed by airplanes at altitudes between 7 and 13 kilometres.
The method was patented by Hughes Aircraft Company in 1991, US patent 5003186. [ 53 ] Quote from the patent: "This invention relates to a method for the reduction of global warming resulting from the greenhouse effect, and in particular to a method which involves the seeding of the earth's stratosphere with Welsbach-like materials."
This is not considered to be a viable option by current geoengineering experts. [ citation needed ]
A study in 2020 looked at the cost of SAI through to the year 2100. It found that relative to other climate interventions and solutions, SAI remains inexpensive. However, at about $18 billion per year per degree Celsius of warming avoided (in 2020 USD), a solar geoengineering program with substantial climate impact would lie well beyond the financial reach of individuals, small states, or other non-state potential rogue actors. [ 54 ] The annual cost of delivering a sufficient amount of sulfur to counteract expected greenhouse warming is estimated at $5–10 billion US dollars. [ 54 ]
SAI is expected to have low direct financial costs of implementation, [ 55 ] relative to the expected costs of both unabated climate change and aggressive mitigation.
Early studies suggest that stratospheric aerosol injection might have a relatively low direct cost. One analysis estimated the annual cost of delivering 5 million tons of an albedo enhancing aerosol to an altitude of 20 to 30 km is at US$2 billion to 8 billion, an amount which they suggest would be sufficient to offset the expected warming during the next century. [ 56 ] In comparison, the annual cost estimates for climate damage or emission mitigation range from US$200 billion to 2 trillion. [ 56 ]
A 2016 study found the cost per 1 W/m 2 of cooling to be between 5–50 billion USD/yr. [ 57 ] Because larger particles are less efficient at cooling and fall out of the sky faster, the unit-cooling cost is expected to increase over time, as an increased dose leads to larger, but less efficient, particles through mechanisms such as coalescence and Ostwald ripening . [ 58 ] Assuming RCP8.5, −5.5 W/m 2 of cooling would be required by 2100 to maintain the 2020 climate. At the dose level required to provide this cooling, the net efficiency per mass of injected aerosols would fall to below 50% compared to low-level deployment (below 1 W/m 2 ). [ 59 ] At a total dose of −5.5 W/m 2 , the cost would be between 55–550 billion USD/yr when this efficiency reduction is also taken into account, bringing annual expenditure to levels comparable to other mitigation alternatives.
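The arithmetic behind these figures can be sketched as follows; the unit costs and the roughly halved efficiency are the values quoted above, while the function itself is only an illustrative simplification:

```python
def sai_annual_cost(cooling_w_m2, unit_cost_low=5e9, unit_cost_high=50e9,
                    efficiency=0.5):
    """Rough annual cost range for stratospheric aerosol injection.

    cooling_w_m2 : required radiative forcing offset [W/m^2], e.g. 5.5
    unit_cost_*  : cost per 1 W/m^2 of cooling at low doses [USD/yr]
    efficiency   : net aerosol efficiency at the required dose relative to
                   low-level deployment (0.5 = "below 50%" as quoted above)
    """
    low = cooling_w_m2 * unit_cost_low / efficiency
    high = cooling_w_m2 * unit_cost_high / efficiency
    return low, high

low, high = sai_annual_cost(5.5)
print(f"Annual cost: {low / 1e9:.0f}-{high / 1e9:.0f} billion USD/yr")
# -> roughly 55-550 billion USD/yr, matching the range quoted in the text.
```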
The advantages of this approach in comparison to other solar geoengineering methods include:
It is uncertain how effective any solar geoengineering technique would be, due to the difficulties modeling their impacts and the complex nature of the global climate system . Certain efficacy issues are specific to stratospheric aerosols.
Solar geoengineering in general poses various problems and risks. However, certain problems are specific to or more pronounced with stratospheric sulfide injection. [ 91 ]
Most of the existing governance of stratospheric sulfate aerosols is from that which is applicable to solar radiation management more broadly. However, some existing legal instruments would be relevant to stratospheric sulfate aerosols specifically. At the international level, the Convention on Long-Range Transboundary Air Pollution (CLRTAP Convention) obligates those countries which have ratified it to reduce their emissions of particular transboundary air pollutants. Notably, both solar radiation management and climate change (as well as greenhouse gases) could satisfy the definition of "air pollution" which the signatories commit to reduce, depending on their actual negative effects. [ 111 ] Commitments to specific values of the pollutants, including sulfates, are made through protocols to the CLRTAP Convention. Full implementation or large-scale climate response field tests of stratospheric sulfate aerosols could cause countries to exceed their limits. However, because stratospheric injections would be spread across the globe instead of concentrated in a few nearby countries, and could lead to net reductions in the "air pollution" which the CLRTAP Convention is intended to reduce, they may be allowed.
The stratospheric injection of sulfate aerosols would cause the Vienna Convention for the Protection of the Ozone Layer to be applicable due to their possible deleterious effects on stratospheric ozone. That treaty generally obligates its Parties to enact policies to control activities which "have or are likely to have adverse effects resulting from modification or likely modification of the ozone layer." [ 112 ] The Montreal Protocol to the Vienna Convention prohibits the production of certain ozone depleting substances, via phase outs. Sulfates are presently not among the prohibited substances.
In the United States, the Clean Air Act might give the United States Environmental Protection Agency authority to regulate stratospheric sulfate aerosols. [ 113 ] | https://en.wikipedia.org/wiki/Stratospheric_aerosol_injection |
Straw-bale construction is a building method that uses bales of straw (usually wheat [ 2 ] straw) as structural elements, building insulation , or both. This construction method is commonly used in natural building or "brown" construction projects. Research has shown that straw-bale construction is a sustainable method for building, from the standpoint of both materials and energy needed for heating and cooling. [ 3 ]
Advantages of straw-bale construction over conventional building systems include the renewable nature of straw, cost, easy availability, natural fire retardance, and high insulation value. [ 4 ] [ 5 ] [ 6 ] Disadvantages include susceptibility to rot, difficulty of obtaining insurance coverage, and high space requirements for the straw itself. [ 7 ] Research using moisture probes placed within straw walls found that 7 of 8 locations had moisture contents of less than 20%, a moisture level that does not aid in the breakdown of the straw. [ 8 ] However, proper construction of the straw-bale wall is important in keeping moisture levels down, just as in the construction of any type of building.
Straw houses have been built on the African plains since the Paleolithic Era. Straw bales were used in construction 400 years ago in Germany, and straw-thatched roofs have long been used in northern Europe and Asia. When European settlers came to North America, teepees were insulated in winter with loose straw between the inner lining and outer cover. [ 9 ]
Straw-bale construction was greatly facilitated by the mechanical hay baler, which was invented in the 1850s and was widespread by the 1890s. [ 9 ] It proved particularly useful in the Nebraska Sandhills . Pioneers seeking land under the 1862 Homestead Act and the 1904 Kinkaid Act found a dearth of trees over much of Nebraska. In many parts of the state, the soil was suitable for dugouts and sod houses . [ 10 ] However, in the Sandhills, the soil generally made poor construction sod; [ 11 ] in the few places where suitable sod could be found, it was more valuable for agriculture than as a building material. [ 12 ]
The first documented use of hay bales in construction in Nebraska was a schoolhouse built in 1896 or 1897. Unfenced and unprotected by stucco or plaster, it was reported in 1902 as having been eaten by cows. To combat this, builders began plastering their bale structures; if cement or lime stucco was unavailable, locally obtained "gumbo mud" was employed. [ 12 ] Between 1896 and 1945, an estimated 70 straw-bale buildings, including houses, farm buildings, churches, schools, offices, and grocery stores had been built in the Sandhills. [ 9 ] In 1990, nine surviving bale buildings were reported in Arthur and Logan Counties, [ 13 ] including the 1928 Pilgrim Holiness Church in the village of Arthur , which is listed in the National Register of Historic Places . [ 11 ]
Since the 1990s straw-bale construction has been substantially revived, particularly in North America, Europe, and Australia. [ 14 ] Straw was one of the first materials to be used in green buildings. [ 2 ] This revival is likely attributed to greater environmental awareness and the material's natural, non-toxic qualities, low embodied energy , and relative affordability. Straw-bale construction has encountered issues regarding building codes depending on the location of the building. [ 15 ] [ 16 ] However, in the USA, the introduction of Appendices S and R in the 2015 International Residential Code has helped to legitimize and improve understanding of straw-bale construction. In France, the approval in 2012 of professional rules for straw-building recognized it as “common technology” and qualifies for standard-insurance programs. [ 17 ]
Straw bale building typically consists of stacking rows of bales (often in running-bond ) on a raised footing or foundation , with a moisture barrier or capillary break between the bales and their supporting platform. [ 18 ] There are two types of straw-bales commonly used, those bound together with two strings and those with three. The three string bale is the larger in all three dimensions. [ 19 ] Bale walls can be tied together with pins of bamboo or wood (internal to the bales or on their faces), or with surface wire meshes, and then stuccoed or plastered , either with a lime-based formulation or earth/clay render. The bales may actually provide the structural support for the building [ 20 ] (" load-bearing " or "Nebraska-style" technique), as was the case in the original examples from the late 19th century. The plastered bale assembly also can be designed to provide lateral and shear support for wind and seismic loads.
Alternatively, bale buildings can have a structural frame of other materials, usually lumber or timber-frame, with bales simply serving as insulation and plaster substrate, ("infill" or "non-loadbearing" technique), which is most often required in northern regions and/or in wet climates. In northern regions, the potential snow-loading can exceed the strength of the bale walls. In wet climates, the imperative for applying a vapor-permeable finish precludes the use of cement-based stucco. Additionally, the inclusion of a skeletal framework of wood or metal allows the erection of a roof prior to raising the bales, which can protect the bale wall during construction, when it is the most vulnerable to water damage in all but the most dependably arid climates. A combination of framing and load-bearing techniques may also be employed, referred to as "hybrid" straw bale construction. [ 21 ]
Straw bales can also be used as part of a Spar and Membrane Structure (SMS) wall system in which lightly reinforced 5–8 cm (2.0–3.1 in) sprayed concrete skins are interconnected with extended X-shaped light rebar in the head joints of the bales. [ 22 ] In this wall system the concrete skins provide structure, seismic reinforcing, and fireproofing, while the bales are used as leave-in formwork and insulation.
The University of Bath has completed a research programme which used ‘ModCell’ panels—prefabricated panels consisting of a wooden structural frame infilled with straw bales and rendered with a breathable lime-based system—to build 'BaleHaus', a straw bale construction on the university's campus. Monitoring work of the structure carried out by architectural researchers at the university has found that as well as reducing the environmental footprint, the construction offers other benefits, including healthier living through higher levels of thermal insulation and regulation of humidity levels. The group has published a number of research papers on its findings. [ 23 ]
High density pre-compressed bales ( straw blocks ) can bear higher loads than traditional field bales (bales created with baling machines on farms). While field bales support around 900 kilograms per metre (600 lb/ft) of wall length, high-density bales can bear at least 6,000 kg/m (4,000 lb/ft).
Bale buildings can also be constructed of non-straw bales—such as those made from recycled material such as tires, cardboard, paper, plastic, and carpeting—and even bags containing "bales" of wood chips or rice hulls . [ 5 ] [ 6 ]
Straw bales have also been used in very energy efficient high-performance buildings such as the S-House [ 24 ] in Austria which meets the Passivhaus energy standard. In South Africa, a five-star lodge made from 10,000 strawbales has housed world leaders Nelson Mandela and Tony Blair. [ 25 ] In the Swiss Alps, in the little village of Nax Mont-Noble , construction works have begun in October 2011 for the first hotel in Europe built entirely with straw bales. [ 26 ] The Harrison Vault, [ 27 ] in Joshua Tree, California, is engineered to withstand the high seismic loads in that area using only the assembly consisting of bales, lath and plaster. [ 28 ] The technique was used successfully for strawbale housing in rural China. [ 29 ] Straw bale domes along the Syrio-African rift at Kibbutz Lotan have an interior geodesic frame of steel pipes. [ 30 ] Another method to reap the benefits of straw is to incorporate straw-bale walls into a pre-existing structure. [ 31 ]
Straw bales are widely used to insulate walls, but they may also be used to insulate roofs and sub-floors. [ 32 ]
Compressed straw bales have a wide range of documented R-values. R-value is a measurement of a material's insulating quality; the higher the number, the more insulating. The reported R-value ranges from 17–55 (in American units) or 3–9.6 (in SI) depending on the study; differing wall designs could be responsible for the wide range in R-value. [ 33 ] [ 34 ] Given that the bales are over a foot thick, the R-value per inch is lower than that of most other commercial insulation types, including batts (3–4) and foamboard (~5). Bale walls are typically coated with a thick layer of plaster , which provides a well-distributed thermal mass , active on a short-term (diurnal) cycle. The combination of insulation and mass provides an excellent platform for passive solar building design for winter and summer.
In common with most building materials, there is a degree of uncertainty in the thermal conductivity due to the influences of temperature, moisture content and density. However, from evaluation of a range of literature and experimental data, a value of 0.064 W/m·K is regarded as a representative design value for straw bales at the densities typically used in building construction. [ 35 ]
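As an illustration of how thickness and conductivity translate into these R-values, the sketch below uses the 0.064 W/m·K design value quoted above together with a hypothetical 450 mm bale width (the bale width is an assumption, not a figure from the text):

```python
def r_value_si(thickness_m, conductivity_w_mk):
    """R-value in SI units (m^2*K/W) = thickness / thermal conductivity."""
    return thickness_m / conductivity_w_mk

def si_to_us(r_si):
    """Convert an SI R-value to US units: 1 m^2*K/W ~= 5.678 ft^2*F*h/BTU."""
    return r_si * 5.678

# Hypothetical 450 mm (about 18 in) bale at the design conductivity 0.064 W/(m*K).
r_si = r_value_si(0.45, 0.064)
print(f"R = {r_si:.2f} (SI units) ~ R-{si_to_us(r_si):.0f} (US units)")
# -> roughly R-7 in SI, or about R-40 in US units, within the reported ranges.
```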
Compressed and plastered straw bale walls are also resistant to fire. [ 36 ]
The hygrothermal properties of straw bales have been measured and reviewed in several technical papers. [ 37 ] [ 38 ] [ 39 ] [ 32 ] [ 40 ] [ 41 ] [ 42 ] [ 43 ] According to research, the thermal conductivity does not differ significantly depending on the type of straw. [ 44 ] Samples with densities between 63 and 350 kg/m 3 have been analysed. [ 39 ] [ 32 ] The best performing was characterised by a thermal conductivity of 0.038 W m −1 K −1 . [ 39 ] Marques et al., [ 41 ] Reif et al. [ 43 ] and Cascone et al. [ 32 ] indicate that the thermal conductivity of straw is relatively insensitive to bale density. The thermal conductivity of straw bales has been shown to differ with the direction of the straw's orientation within the bale, with straws with fibres oriented perpendicularly or randomly to the heat flow having lower thermal conductivity than those arranged in parallel. [ 42 ] [ 45 ] For different temperatures and densities, Vjelien [ 45 ] studied four variations of the same kind of straw: two variations concerned the direction of the fibres in relation to the heat flow (perpendicular and parallel), and the other two concerned the macrostructure (chopped straw and defibrated straw). The thermal conductivity of the defibrated straw was lower than that of the chopped straw.
The use of straw bales as thermal insulation in buildings has been studied by many authors. [ 37 ] [ 38 ] [ 39 ] [ 32 ] They mainly focus on the straw’s thermal and hygrothermal properties. The findings showed that using straw in construction improves energy, environmental, and economic efficiency:
Some studies have evaluated the advantages of using straw bales for building insulation. Measurements carried out in an innovative and sustainable house built in France have shown that this material helps to minimize heating demand and energy consumption. The simulated heating requirements in the winter are calculated to be 59 kW h/m 2 . In Italy, the energy-saving potential of a straw wall was assessed under various climatic conditions. [ 39 ] As compared to the Italian regulations’ reference of a Net Zero Energy Building (NZEB), the straw wall performed extremely well in terms of energy efficiency. The embodied energy of a straw wall structure is about half that of a conventional wall assembly, and the corresponding CO2 emissions are more than 40% lower. Furthermore, in the summer, straw bale walls provide significant thermal inertia. [ 42 ] [ 46 ]
Liuzzi et al. [ 38 ] compared expanded polystyrene (EPS), straw fibre, and olive fibre in a hygrothermal simulation of a flat in two different climatic zones (Bari and Bilbao), assuming a retrofit via interior panels. The simulation results show that the annual energy requirement when using straw fibre and olive fibre panels is close to the annual energy requirement for expanded polystyrene panels in both climates. During the cooling season, however, olive fibre and straw fibre insulation panels perform better, with a reduction of approximately 21% in Bilbao and 14% in Bari.
Straw has a thermal conductivity similar to that of common insulating materials. It has a thermal conductivity of 0.038–0.08 W m −1 K −1 , which is comparable to other wood-fibre insulation materials. To achieve the same thermal insulation efficiency as more insulating materials such as extruded and expanded polystyrene, the thickness of the straw insulation layer should be increased by 30–90%. [ 47 ]
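A quick check of that thickness figure is sketched below; the polystyrene conductivity and the two straw values are assumed illustrative numbers within the ranges discussed above, not figures from the cited studies:

```python
def equivalent_thickness_increase(k_straw, k_reference):
    """Fractional extra thickness needed for straw to match the R-value of a
    reference insulation, since R = d / k implies d_straw / d_ref = k_straw / k_ref."""
    return k_straw / k_reference - 1.0

k_eps = 0.034  # assumed typical conductivity of expanded polystyrene, W/(m*K)
for k_straw in (0.045, 0.064):  # a well-performing bale (assumed) and the design value above
    extra = equivalent_thickness_increase(k_straw, k_eps)
    print(f"k_straw = {k_straw} W/(m*K): straw layer must be ~{extra:.0%} thicker")
# -> roughly 30% and 90% thicker, consistent with the range quoted above.
```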
Two significant problems related to straw-bale construction are moisture and mold . During the construction phase, buildings need to be protected from rain and from water leakages into the body of the walls. [ 48 ] If exposed to water, compressed straw may expand due to absorption of moisture. In turn, this can cause more cracking through which more moisture can infiltrate. Further damage to the wall can be caused by mold releasing potentially toxic spores into the wall cavities [ 49 ] and into the air. [ 50 ] In hot climates, where walls may have become internally dampened, internal temperatures may rise (due to decomposition of affected straw). Rats and mice can infiltrate straw bale homes during construction, so care must be taken to keep such animals out of the material. Other problems relate to straw dust which may cause breathing difficulties among people with allergies to straw or hay. [ 51 ] [ 52 ]
Several companies have developed prefabricated straw bale walls. A passive ecological house can easily be assembled with those panels.
This article incorporates text by S. Bourbia1 · H. Kazeoui · R. Belarbi available under the CC BY 4.0 license. | https://en.wikipedia.org/wiki/Straw-bale_construction |
Streak seeding [ 1 ] is a method first described during ICCBM-3 by Enrico Stura to induce crystallization in a straight line into a sitting or hanging drop for protein crystallization by introducing microseeds. The purpose is to control nucleation and understand the parameters that make crystals grow. It is also used to test any particular set of conditions to check if crystals could grow under such conditions.
The technique is relatively simple. [ 2 ] A cat whisker is used to dislodge seeds from a crystal. The whisker is passed through the drop starting from one side of the drop and ending on the opposite side of the drop in one smooth motion. To allow for vapour diffusion equilibration, the well in which the drop has been placed is resealed. The same procedure is repeated for all the drops whose conditions need testing.
| https://en.wikipedia.org/wiki/Streak_seeding |
In microbiology , streaking is a mechanical technique used to isolate a pure strain from a single species of microorganism, often bacteria . [ 1 ] Samples from a colony derived from a single cell are taken from the streaked plate to create a genetically identical microbiological culture grown on a new plate so that the organism can be identified, studied, or tested. [ 2 ] Different patterns can be used to streak a plate. All involve the dilution of bacteria by systematically streaking them over the exterior of the agar in a Petri dish to obtain isolated colonies which contain gradually fewer numbers of cells. [ 1 ] If the agar surface grows microorganisms which are all genetically the same, the culture is then considered a pure microbiological culture .
The modern streak plate method was developed in the 1880s from the efforts of Robert Koch and other microbiologists to obtain microbiological cultures of bacteria in order to study them. [ 3 ] Prior to the adoption of streaking, pour plates were the common technique utilized by microbiologists to obtain pure strains . [ 4 ] The dilution or isolation by streaking method was first developed in Koch's laboratory by his two assistants Friedrich Loeffler and Georg Theodor August Gaffky . [ 4 ]
Streaking is rapid and ideally a simple process of isolation dilution. The technique is done by diluting a comparatively large concentration of bacteria to a smaller concentration. The decrease of bacteria should sufficiently spread apart colonies and allow for the separation of the different types of microbes in a sample. [ 1 ] Streaking is done using a sterile tool, such as a cotton swab or commonly an inoculation loop . If using a metal inoculation loop, it is first sterilized by passing it through a flame. When the loop is cool, it is dipped into an inoculum such as a broth or patient specimen containing many species of bacteria. [ 4 ] Aseptic techniques are used to maintain microbiological cultures and to prevent contamination of the growth medium . [ 1 ]
Early examples of streaking moved in a single direction across the plate, different from the back and forth "zig zag" motion seen nowadays. [ 4 ] Many different methods have been developed to streak a plate. Picking a technique is a matter of individual preference and can also depend on how large the number of microbes the sample contains. [ 5 ]
The most common pattern used is quadrant streaking , also called "four-sector streaking" or the "four-way streak method." [ 5 ] It involves splitting the agar plate into four sections, or quadrants. To begin, a sterile loop starts in the first quarter of a plate and moves in a back and forth motion multiple times across the agar surface, going from the outside of the plate into the center. Once each previous quarter is completed, the plate is turned 90 degrees and a newly sterilized inoculation loop must be used. Starting from the bottom of the previous quadrant, the loop is run over half of the existing streaks to pick up material before covering the next quarter. This process repeats through all four quadrants, with the final quadrant containing the most dilute sample. [ 1 ]
The three-phase streaking pattern, also known as the T-streak, is recommended for beginners. [ 5 ] However, it is limited in applications involving more than a single culture. [ 5 ] The plate is split by drawing a "T" to create three separate sections. The plate is rotated so that the top of the "T" is furthest from the dominant hand. [ 6 ] Starting from this first section, a sterilized inoculation loop is dragged across the surface of the agar back and forth in a zigzag motion until approximately a third of the plate has been covered. The loop is then re-sterilized and the plate is turned 90 degrees. Starting in the previously streaked section, the loop is dragged through it two to three times, continuing the zigzag pattern before moving to cover a second section. The procedure is then repeated once more, taking care not to touch the previously streaked sectors. [ 5 ] Each time the loop gathers fewer and fewer bacteria until it gathers just single bacterial cells that can grow into a colony. The plate should show the heaviest growth in the first section. The second section will have less growth and a few isolated colonies, while the final section will have the least amount of growth and many isolated colonies. [ 6 ]
For use in both dilutions and pure cultures , radiant streaking begins by streaking a small portion of agar on one side of the plate using a sterile loop. Starting from the streaked section on that side, a set of vertical lines is made across the plate, stretching to the other side in a ray-like pattern. A new sterile inoculation loop is then used to make horizontal lines crossing over the vertical lines while moving down the plate. [ 5 ]
Continuous streaking is a method utilized to spread an even distribution of a sample across a plate for propagation, or increasing the size of the culture. It is implemented by starting from the outside and moving towards the inside of a plate in a single motion. This method is quick but only applicable for very diluted samples or in cases where a pure strain has already been achieved. [ 5 ] In laboratories wishing to save material, a single plate can be divided into sections and a continuous streak used for a different material in each section, allowing a maximum number of samples to be streaked at one time. [ 5 ]
Another continuous method is zig zag streaking and is also used to propagate culture samples. [ 5 ] Starting from the side furthest from your dominant hand, move a sterilized loop back and forth across the plate. Use large motions across the entire width of the plate to cover the greatest area of the agar surface. [ 6 ]
The plate upon which a sample will be streaked is a Petri dish containing a growth medium . Bacteria need different nutrients to grow. [ 7 ] These include water, a source of energy, sources of carbon, nitrogen, and additional minerals, growth factors, and other vitamins specific to the type of bacteria. [ 8 ] A very common type of medium used in microbiology labs is known as agar , a gelatinous substance derived from seaweed. [ 9 ] The nutrient agar medium creates a sterile and transparent substance which can withstand the high temperatures of bacterial incubation while retaining its shape. [ 10 ] The choice of growth medium depends on which microorganism is being cultured, or selected for. Selective media can also be used for bacterial isolation. By adding an inhibitor such as an antibiotic to the growth medium, unwanted bacterial strains can be selected against and prevented from growing on the plate. [ 8 ]
Dependent on the strain, the streaked plate may then be incubated , usually for 24 to 46 hours, to allow the bacteria to reproduce. [ 11 ] Some strains of bacteria, for example the Bartonella species, require longer periods of incubation due to slow growth rates. [ 11 ] During incubation the plates are maintained at a constant temperature within the laboratory. Commonly the cultures are held at temperatures near 25 °C, standard room temperature. [ 12 ] However, some microorganisms require incubation at different temperatures specific to their range of high growth rates and survival, which must be accounted for. [ 12 ] When setting up incubation, the cover is placed over the Petri dish and the plate is turned upside down, so that the portion with the streaked agar serves as the top. This is done so any condensation that forms throughout the process will not fall onto the bacteria being grown. [ 13 ] At the end of incubation there should be enough bacteria to form visible colonies in the areas touched by the inoculation loop. From these mixed colonies, single bacterial or fungal species can be identified based on their morphological (size/shape/color) differences. [ 14 ] These can then be sub-cultured onto a new media plate to yield a pure culture for further analysis. [ 4 ]
The use of streak plates to obtain pure cultures of bacteria is a technique utilized by a variety of scientific fields such as pathology , taxonomy and ecology . [ 15 ] Bacteria within the environment frequently occur in mixed populations. To be able to study the infectious, morphological, and physiological characteristics of an individual species, the bacteria need to be isolated into genetically identical pure strains. [ 16 ] Microbiology streaking is commonly employed in research of infectious disease . Streak plates allow for the analysis of antibiotic response and genome sequencing to analyze the individual genetic makeup of a strain. They are also utilized in the process of transformation , the manipulation of traits in bacteria by adding or removing specific genes. [ 11 ] | https://en.wikipedia.org/wiki/Streaking_(microbiology) |
A stream is a continuous body of surface water [ 1 ] flowing within the bed and banks of a channel . Depending on its location or certain characteristics, a stream may be referred to by a variety of local or regional names. Long, large streams are usually called rivers , while smaller, less voluminous and more intermittent streams are known, amongst others, as brook , creek , rivulet , rill , run , tributary , feeder , freshet , narrow river , and streamlet . [ 2 ]
The flow of a stream is controlled by three inputs – surface runoff (from precipitation or meltwater ), daylighted subterranean water , and surfaced groundwater ( spring water ). The surface and subterranean water are highly variable between periods of rainfall. Groundwater, on the other hand, has a relatively constant input and is controlled more by long-term patterns of precipitation. [ 3 ] The stream encompasses surface, subsurface and groundwater fluxes that respond to geological, geomorphological, hydrological and biotic controls. [ 4 ]
Streams are important as conduits in the water cycle , instruments in groundwater recharge , and corridors for fish and wildlife migration. The biological habitat in the immediate vicinity of a stream is called a riparian zone . Given the status of the ongoing Holocene extinction , streams play an important corridor role in connecting fragmented habitats and thus in conserving biodiversity . The study of streams and waterways in general is known as surface hydrology and is a core element of environmental geography . [ 5 ]
A brook is a stream smaller than a creek, especially one that is fed by a spring or seep . It is usually small and easily forded . A brook is characterised by its shallowness.
A creek ( / k r iː k / ) or crick ( / k r ɪ k / ): [ 6 ] [ 7 ]
In hydrography, gut is a small creek; [ 15 ] this is seen in proper names in eastern North America from the Mid-Atlantic states (for instance, The Gut in Pennsylvania, Ash Gut in Delaware, [ 16 ] and other streams) [ 17 ] down into the Caribbean (for instance, Guinea Gut , Fish Bay Gut , Cob Gut , Battery Gut and other rivers and streams in the United States Virgin Islands , in Jamaica (Sandy Gut, [ 18 ] Bens Gut River, [ 19 ] White Gut River), and in many streams and creeks of the Dutch Caribbean ). [ 20 ]
A river is a large natural stream that is much wider and deeper than a creek and not easily fordable, and may be a navigable waterway . [ 21 ]
The linear channel between the parallel ridges or bars on a shoreline beach or river floodplain, or between a bar and the shore. Also called a swale .
A tributary is a contributory stream to a larger stream, or a stream which does not reach a static body of water such as a lake , bay or ocean [ 22 ] but joins another river (a parent river). Sometimes also called a branch or fork. [ 23 ]
A distributary , or a distributary channel , is a stream that branches off and flows away from a main stream channel, and the phenomenon is known as river bifurcation . Distributaries are common features of river deltas , and are often found where a valleyed stream enters wide flatlands or approaches the coastal plains around a lake or an ocean . They can also occur inland, on alluvial fans , or where a tributary stream bifurcates as it nears its confluence with a larger stream. Common terms for individual river distributaries in English-speaking countries are arm and channel .
There are a number of regional names for a stream.
A stream's source depends on the surrounding landscape and its function within larger river networks. While perennial and intermittent streams are typically supplied by smaller upstream waters and groundwater, headwater and ephemeral streams often derive most of their water from precipitation in the form of rain and snow. [ 45 ] Most of this precipitated water re-enters the atmosphere by evaporation from soil and water bodies, or by the evapotranspiration of plants. Some of the water sinks into the earth by infiltration and becomes groundwater, much of which eventually enters streams. Some precipitated water is temporarily locked up in snow fields and glaciers , to be released later by melting or evaporation. The rest of the water flows off the land as runoff, the proportion of which varies according to many factors, such as climate, wind, humidity, vegetation, rock types, and relief. This runoff starts as a thin film called sheet wash which, combined with a network of tiny rills, constitutes sheet runoff; when this water is concentrated in a channel, a stream has its birth. Some streams may also start from ponds or lakes.
The primary sources of freshwater are precipitation and mountain snowmelt. Rivers typically originate in highlands, where channels are gradually carved as snowmelt and runoff erode their way toward lakes or larger rivers. Rivers generally flow downhill from their source, eroding the land they pass over until they reach their base level of erosion.
Some scientists have proposed a critical support flow (CSD) concept and model to determine the hydrographic indicators of river sources in complex geographical areas. [ 46 ]
The source of a river or stream (its point of origin) can be a lake, swamp, spring, or glacier. A typical river has several tributaries; each of these may itself be fed by smaller tributaries, so that the stream and all its tributaries together form a drainage network. Although each tributary has its own source, international practice is to take the point farthest from the river mouth as the source of the entire river system, and the length measured from that point is taken as the length of the whole river system. [ 47 ] For example, the Nile River proper begins at the confluence of the White Nile and the Blue Nile, but the source of the whole river system lies much farther upstream. If there is no specific designation, "length of the Nile" refers to the length of the Nile river system, rather than to the length of the Nile downstream of that confluence. The Nile's source is often cited as Lake Victoria, but the lake has significant feeder rivers. The Kagera River, which flows into Lake Victoria near the Tanzanian town of Bukoba, is the longest feeder, though sources do not agree on which is the Kagera's longest tributary and therefore the Nile's most remote source. [ 48 ] [ 49 ]
To qualify as a stream, a body of water must be either recurring or perennial. Recurring (intermittent) streams have water in the channel for at least part of the year. A stream of the first order is a stream which does not have any other recurring or perennial stream feeding into it. When two first-order streams come together, they form a second-order stream. When two second-order streams come together, they form a third-order stream. Streams of lower order joining a higher order stream do not change the order of the higher stream.
The gradient of a stream is a critical factor in determining its character and is entirely determined by its base level of erosion. The base level of erosion is the point at which the stream either enters the ocean, a lake or pond, or enters a stretch in which it has a much lower gradient, and may be specifically applied to any particular stretch of a stream.
In geological terms, the stream will erode down through its bed to achieve the base level of erosion throughout its course. If this base level is low, then the stream will rapidly cut through underlying strata and have a steep gradient, and if the base level is relatively high, then the stream will form a flood plain and meander.
Typically, streams are said to have a particular elevation profile , beginning with steep gradients, no flood plain, and little shifting of channels, eventually evolving into streams with low gradients, wide flood plains, and extensive meanders. The initial stage is sometimes termed a "young" or "immature" stream, and the later state a "mature" or "old" stream.
Meanders are looping changes of direction of a stream caused by the erosion and deposition of bank materials. These are typically serpentine in form. Typically, over time the meanders gradually migrate downstream. If some resistant material slows or stops the downstream movement of a meander, a stream may erode through the neck between two legs of a meander to become temporarily straighter, leaving behind an arc-shaped body of water termed an oxbow lake or bayou . A flood may also cause a meander to be cut through in this way.
The stream load is defined as the solid matter carried by a stream. Streams can carry sediment, or alluvium. The amount of load it can carry (capacity) as well as the largest object it can carry (competence) are both dependent on the velocity of the stream.
A perennial stream is one which flows continuously all year. [ 50 ] : 57 Some perennial streams may have continuous flow only in segments of their stream beds year round during years of normal rainfall. [ 50 ] [ 51 ] Blue-line streams are perennial streams and are marked on topographic maps with a solid blue line.
The word "perennial" from the 1640s, meaning "evergreen," is established in Latin perennis, keeping the meaning as "everlasting all year round," per "over" plus annus "year." This has been proved since the 1670s by the "living years" in the sense of botany. The metaphorical sense of "enduring, eternal" originates from 1750. They are related to "perennial." See biennial for shifts in vowels. [ 52 ]
Perennial streams have one or more of these characteristics:
Absence of such characteristics supports classifying a stream as intermittent, "showing interruptions in time or space". [ 53 ]
Generally, streams that flow only during and immediately after precipitation are termed ephemeral . There is no clear demarcation between surface runoff and an ephemeral stream, [ 50 ] : 58 and some ephemeral streams can be classed as intermittent: their flow all but disappears in the normal course of the seasons, but ample flow (backups) can restore the stream's presence. Such circumstances have been documented where stream beds have opened up a path into mines or other underground chambers. [ 54 ]
According to official U.S. definitions, the channels of intermittent streams are well-defined, [ 55 ] as opposed to ephemeral streams, which may or may not have a defined channel, and rely mainly on storm runoff, as their aquatic bed is above the water table . [ 56 ] An ephemeral stream does not have the biological, hydrological, and physical characteristics of a continuous or intermittent stream. [ 56 ] The same non-perennial channel might change characteristics from intermittent to ephemeral over its course. [ 56 ]
Washes can fill up quickly during rains, and there may be a sudden torrent of water after a thunderstorm begins upstream, such as during monsoonal conditions. In the United States, an intermittent or seasonal stream is one that only flows for part of the year and is marked on topographic maps with a line of blue dashes and dots. [ 50 ] : 57–58 A wash , desert wash, or arroyo is normally a dry streambed in the deserts of the American Southwest , which flows after sufficient rainfall.
In Italy, an intermittent stream is termed a torrent ( Italian : torrente ). In full flood the stream may or may not be "torrential" in the dramatic sense of the word, but there will be one or more seasons in which the flow is reduced to a trickle or less. Typically torrents have Apennine rather than Alpine sources, and in the summer they are fed by little precipitation and no melting snow. In this case the maximum discharge will be during the spring and autumn.
An intermittent stream can also be called a winterbourne in Britain, a wadi in the Arabic -speaking world or torrente or rambla (this last one from Arabic origin) in Spain and Latin America. In Australia, an intermittent stream is usually called a creek and marked on topographic maps with a solid blue line. [ citation needed ]
There are five generic classifications:
"Macroinvertebrate" refers to easily seen invertebrates , larger than 0.5 mm, found in stream and river bottoms. [ 59 ] Macroinvertebrates are larval stages of most aquatic insects and their presence is a good indicator that the stream is perennial. Larvae of caddisflies , mayflies , stoneflies , and damselflies [ 60 ] require a continuous aquatic habitat until they reach maturity. Crayfish and other crustaceans , snails , bivalves (clams), and aquatic worms also indicate the stream is perennial. These require a persistent aquatic environment for survival. [ 61 ]
Fish and amphibians are secondary indicators in assessment of a perennial stream because some fish and amphibians can inhabit areas without persistent water regime. When assessing for fish, all available habitat should be assessed: pools, riffles, root clumps and other obstructions. Fish will seek cover if alerted to human presence, but should be easily observed in perennial streams. Amphibians also indicate a perennial stream and include tadpoles , frogs , salamanders , and newts . These amphibians can be found in stream channels, along stream banks, and even under rocks. Frogs and tadpoles usually inhabit shallow and slow moving waters near the sides of stream banks. Frogs will typically jump into water when alerted to human presence. [ 61 ]
Well-defined river beds composed of riffles, pools, runs, gravel bars, a bed armor layer, and other depositional features, plus well-defined banks shaped by bank erosion, are good identifiers when assessing for perennial streams. [ 62 ] Particle size will also help identify a perennial stream. Perennial streams cut through the soil profile, which removes fine and small particles. Relatively coarse material left behind in the stream bed, with finer sediments deposited along the sides of the stream or within the floodplain, is a good indicator of a persistent water regime. [ 60 ]
A perennial stream can be identified 48 hours after a storm. Direct storm runoff usually has ceased at this point. If a stream is still flowing and contributing inflow is not observed above the channel, the observed water is likely baseflow. Another perennial stream indication is an abundance of red rust material in a slow-moving wetted channel or stagnant area. This is evidence that iron-oxidizing bacteria are present, indicating persistent expression of oxygen-depleted ground water. In a forested area, leaf and needle litter in the stream channel is an additional indicator. Accumulation of leaf litter does not occur in perennial streams since such material is continuously flushed. In the adjacent overbank of a perennial stream, fine sediment may cling to riparian plant stems and tree trunks. Organic debris drift lines or piles may be found within the active overbank area after recent high flow. [ 60 ]
Streams, including headwaters and streams that flow only part of the year, provide many benefits upstream and downstream. They defend against floods, remove contaminants, recycle potentially harmful excess nutrients, and provide food and habitat for many kinds of fish. Such streams also play a vital role in preserving the quality and supply of drinking water, ensuring a steady flow of water to surface waters and helping to recharge deep aquifers.
The extent of land basin drained by a stream is termed its drainage basin (also known in North America as the watershed and, in British English, as a catchment). [ 64 ] A basin may also be composed of smaller basins. For instance, the Continental Divide in North America divides the mainly easterly-draining Atlantic Ocean and Arctic Ocean basins from the largely westerly-flowing Pacific Ocean basin. The Atlantic Ocean basin, however, may be further subdivided into the Atlantic Ocean and Gulf of Mexico drainages. (This delineation is termed the Eastern Continental Divide .) Similarly, the Gulf of Mexico basin may be divided into the Mississippi River basin and several smaller basins, such as the Tombigbee River basin. Continuing in this vein, a component of the Mississippi River basin is the Ohio River basin, which in turn includes the Kentucky River basin, and so forth.
Stream crossings are where streams are crossed by roads , pipelines , railways , or any other structure that might restrict the flow of the stream in ordinary or flood conditions. Any structure over or in a stream that limits the movement of fish or other ecological elements may be an issue. | https://en.wikipedia.org/wiki/Stream
The capacity of a stream or river is the total amount of sediment a stream is able to transport . This measurement usually corresponds to the stream power and to the width-integrated bed shear stress across a section along the stream profile. Note that capacity is greater than the load, which is the amount of sediment actually carried by the stream. Load is generally limited by the sediment available upstream.
Stream capacity is often mistaken for the stream competency , which is a measure of the maximum size of the particles that the stream can transport, or for the total load , which is the load that a stream carries.
The sediment transported by the stream depends upon the intensity of rainfall and land characteristics. | https://en.wikipedia.org/wiki/Stream_capacity
Stream capture , river capture , river piracy or stream piracy is a geomorphological phenomenon occurring when a stream or river drainage system or watershed is diverted from its own bed, and flows down to the bed of a neighbouring stream. This can happen for several reasons, including:
The cause is not always clear.
The additional water flowing down the capturing stream may accelerate erosion and encourage the development of a canyon (gorge).
The now-dry valley of the original stream is known as a wind gap .
The Slims River was previously fed by meltwater from the Kaskawulsh Glacier in the Saint Elias Mountains in the Yukon and its waters flowed into Kluane Lake and on to the Bering Sea . Because of climate change , the glacier has rapidly receded and the meltwater no longer feeds the Slims. The water instead now feeds the Kaskawulsh River which is a tributary to the Alsek River and drains into the Gulf of Alaska . [ 6 ] [ 7 ]
River capture is a shaping force in the biogeography or distribution of many freshwater fish species. [ 8 ] [ 9 ]
Geological uplift in the southern South Island of New Zealand led to the divergence of freshwater galaxiid populations isolated by river capture. [ 10 ] [ 11 ] [ 12 ]
The formerly massive Great Dividing Range runs the length of the eastern coastline of Australia and has isolated native freshwater fish populations east and west of the range for millions of years. In the last two million years erosion has reduced the Great Dividing Range to a critical point where west-to-east river capture events have been possible. A number of native fish species that originated in the Murray– Darling river system to the west are (or were) found naturally occurring in a number of coastal systems spanning almost the entire length of the range.
None of the river capture events that allowed native fish of the Murray-Darling system to cross into and colonise these East Coast river systems seem to have formed permanent linkages. The colonising Murray-Darling fish in these East Coast river systems have therefore become isolated from their parent species, and due to isolation, the founder effect , genetic drift and natural selection , have become separate species (see allopatric speciation ).
Examples include:
Olive perchlet ( Ambassis agassizii ), western carp gudgeon ( Hypseleotris klunzingeri ), pygmy perch ( Nannoperca australis ) and Australian smelt ( Retropinna semoni ) also appear to have made crossings into coastal systems, the last two species seemingly many times, as they are found in most or all coastal streams in south eastern Australia as well as the Murray–Darling system.
Unfortunately, with the exception of eastern freshwater cod and Mary River cod , it has not been widely recognised that these coastal populations of Murray–Darling native fish are separate species and their classifications have not been updated to reflect this. Many are threatened and two, the Richmond River cod and the Brisbane River cod , have become extinct. | https://en.wikipedia.org/wiki/Stream_capture |
In hydrology , stream competency, also known as stream competence, is a measure of the maximum size of particles a stream can transport . [ 1 ] The particles are made up of grain sizes ranging from large to small and include boulders , rocks, pebbles , sand , silt , and clay . These particles make up the bed load of the stream. Stream competence was originally simplified by the "sixth-power-law," which states that the mass of a particle that can be moved is proportional to the velocity of the river raised to the sixth power. This refers to the stream bed velocity, which is difficult to measure or estimate because of the many factors that cause slight variances in stream velocities. [ 2 ]
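As a rough illustration of the sixth-power law mentioned above, the short sketch below (the velocities and function name are purely illustrative assumptions) shows how strongly the movable particle mass responds to changes in near-bed velocity.

```python
# Worked example of the sixth-power law (a simplification, as noted above):
# the largest movable particle mass scales as m2/m1 = (v2/v1)**6.
# The velocity values are illustrative, not measured data.

def movable_mass_ratio(v1, v2):
    """Ratio of the largest movable particle mass at velocity v2 versus v1."""
    return (v2 / v1) ** 6

# Doubling the near-bed velocity (e.g. 0.5 m/s -> 1.0 m/s) multiplies the
# movable mass by 2**6 = 64, which is why floods can move boulders that
# ordinary flows cannot.
print(movable_mass_ratio(0.5, 1.0))   # 64.0
print(movable_mass_ratio(0.5, 1.5))   # 3**6 = 729.0
```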
Stream capacity , while linked to stream competency through velocity, is the total quantity of sediment a stream can carry. Total quantity includes dissolved, suspended, saltation and bed loads. [ 3 ]
The movement of sediment is called sediment transport . Initiation of motion involves mass, force, friction and stress. Gravity and friction are the two primary forces in play as water flows through a channel . Gravity acts upon water to move it down slope. Friction exerted on the water by the bed and banks of the channel works to slow the movement of the water. When the force of gravity is equal and opposite to the force of friction the water flows through the channel at a constant velocity. When the force of gravity is greater than the force of friction the water accelerates. [ 4 ]
This sediment transport sorts grain sizes based on the velocity. As stream competence increases, the D 50 (median grain size) of the stream also increases and can be used to estimate the magnitude of flow which would begin particle transport. [ 5 ] Stream competence tends to decrease in the downstream direction, [ 6 ] meaning the D 50 will increase from mouth to head of the stream.
Stream power is the rate of potential energy loss per unit of channel length. [ 7 ] This potential energy is lost moving particles along the stream bed.
Ω = ρ w g Q S {\displaystyle \Omega =\rho _{w}gQS}
where Ω {\displaystyle \Omega } is the stream power, ρ w {\displaystyle \rho _{w}} is the density of water, g {\displaystyle g} is the gravitational acceleration , S {\displaystyle S} is the channel slope, and Q {\displaystyle Q} is the discharge of the stream.
The discharge of a stream, Q {\displaystyle Q} , is the velocity of the stream, U {\displaystyle U} , multiplied by the cross-sectional area , A C S {\displaystyle A_{\mathrm {CS} }} , of the stream channel at that point:
Q = U A C S {\displaystyle Q=UA_{\mathrm {CS} }}
in which Q {\displaystyle Q} is the discharge of the stream, U {\displaystyle U} is the average stream velocity, and A C S {\displaystyle A_{\mathrm {CS} }} is the cross-sectional area of the stream.
As velocity increases, so does stream power, and a larger stream power corresponds to an increased ability to move bed load particles.
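A minimal numerical sketch of the two relations above (discharge from mean velocity and cross-section, then stream power from discharge and slope) is given below; the channel values and function names are illustrative assumptions rather than figures from the text.

```python
# Sketch combining Q = U * A_CS and Omega = rho_w * g * Q * S.
# The channel dimensions, velocity and slope are illustrative assumptions.

RHO_W = 1000.0   # density of water, kg/m^3
G = 9.81         # gravitational acceleration, m/s^2

def discharge(mean_velocity, cross_section_area):
    """Q = U * A_CS, in m^3/s."""
    return mean_velocity * cross_section_area

def stream_power(discharge_q, slope):
    """Omega = rho_w * g * Q * S, in watts per metre of channel length."""
    return RHO_W * G * discharge_q * slope

q = discharge(mean_velocity=1.2, cross_section_area=8.0)   # 9.6 m^3/s
print(stream_power(q, slope=0.004))                        # ~376.7 W/m
```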
In order for sediment transport to occur in gravel bed channels, flow strength must exceed a critical threshold, called the critical threshold of entrainment , or threshold of mobility. Flow over the surface of a channel and floodplain creates a boundary shear stress field. As discharge increases, shear stress increases above a threshold and starts the process of sediment transport. A comparison of the flow strength available during a given discharge to the critical shear strength needed to mobilize the sediment on the bed of the channel helps us predict whether or not sediment transport is likely to occur, and to some degree, the sediment size likely to move. Although sediment transport in natural rivers varies wildly, relatively simple approximations based on simple flume experiments are commonly used to predict transport. [ 8 ] Another way to estimate stream competency is to use the following equation for critical shear stress, τ c {\displaystyle \tau _{c}} which is the amount of shear stress required to move a particle of a certain diameter. [ 9 ]
τ c = τ c ∗ ( ρ s − ρ w ) g d 50 {\displaystyle \tau _{c}=\tau _{c}^{\ast }(\rho _{s}-\rho _{w})gd_{50}}
where:
The shear stress of a stream is represented by the following equation:
τ = ρ w g D S {\displaystyle \tau =\rho _{w}gDS}
where:
If we combine the two equations we get:
ρ w g D S = τ c ∗ ( ρ s − ρ w ) g d 50 {\displaystyle \rho _{w}gDS=\tau _{c}^{\ast }(\rho _{s}-\rho _{w})gd_{50}}
Solving for the particle diameter d 50 we get
d 50 = ρ w D S τ c ∗ ( ρ s − ρ w ) {\displaystyle d_{50}={\frac {\rho _{w}DS}{\tau _{c}^{\ast }(\rho _{s}-\rho _{w})}}}
The equation shows particle diameter, d 50 {\displaystyle d_{50}} , is directly proportional to both the depth of water and slope of stream bed (flow and velocity), and inversely proportional to Shield's parameter and the effective density of the particle.
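The rearranged relation can be turned into a small calculation. The sketch below assumes a typical critical Shields parameter of about 0.06 and a quartz sediment density of 2650 kg/m³; these are common illustrative values, not figures taken from the text.

```python
# Competence estimate from the relation derived above:
# d50 = rho_w * D * S / (tau_c* * (rho_s - rho_w)).
# Shields parameter and sediment density are assumed typical values.

RHO_W = 1000.0     # water density, kg/m^3
RHO_S = 2650.0     # sediment (quartz) density, kg/m^3 (assumed)
TAU_C_STAR = 0.06  # dimensionless critical Shields parameter (assumed)

def mobilizable_d50(depth_m, slope):
    """Largest median grain size, in metres, that the flow can just mobilize."""
    return (RHO_W * depth_m * slope) / (TAU_C_STAR * (RHO_S - RHO_W))

# A 1 m deep flow on a 0.5 % slope can move particles up to roughly 5 cm.
print(mobilizable_d50(depth_m=1.0, slope=0.005))   # ~0.0505 m
```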
Velocity differences between the bottom and top of a particle can lead to lift . Water flows over the top of the particle but not beneath it, resulting in zero velocity at the bottom of the particle and a non-zero velocity at the top. The difference in velocities results in a pressure gradient that imparts a lifting force on the particle. If this force is greater than the particle's weight, the particle will begin to be transported. [ 11 ]
Flows are characterized as either laminar or turbulent . Low-velocity and high- viscosity fluids are associated with laminar flow, while high velocity and low viscosity are associated with turbulent flow. Turbulent flows result in velocities that vary in both magnitude and direction. These erratic flows help keep particles suspended for longer periods of time. Most natural channels are considered to have turbulent flow. [ 7 ]
Another important property comes into play when discussing stream competency, and that is the intrinsic quality of the material. In 1935 Filip Hjulström published his curve, which takes into account the cohesiveness of clay and some silt. This diagram illustrates stream competency as a function of velocity. [ 12 ]
By observing the size of boulders, rocks, pebbles, sand, silt, and clay in and around streams, one can understand the forces at work shaping the landscape. Ultimately these forces are determined by the amount of precipitation , the drainage density , relief ratio and sediment parent material. [ 7 ] They shape depth and slope of the stream, velocity and discharge, channel and floodplain, and determine the amount and kind of sediment observed. This is how the power of water moves and shapes the landscape through erosion , transport, and deposition, and it can be understood by observing stream competency.
Stream competence does not rely solely on velocity. The bedrock of the stream influences the stream competence. Differences in bedrock will affect the general slope and particle sizes in the channel. Stream beds that have sandstone bedrock tend to have steeper slopes and larger bed material, while shale and limestone stream beds tend to be shallower with smaller grain size. [ 6 ] Slight variations in underlying material will affect erosion rates, cohesion, and soil composition.
Vegetation has a known impact on a stream's flow, but its influence is hard to isolate. A disruption in flow will result in lower velocities, leading to a lower stream competence. Vegetation has a 4-fold effect on stream flow: resistance to flow, bank strength, nucleus for bar sedimentation , and construction and breaching of log-jams.
Manning's roughness coefficient n can be estimated using the Cowan method:
n = ( n 0 + n 1 + n 2 + n 3 + n 4 ) m 5 {\displaystyle n=(n_{0}+n_{1}+n_{2}+n_{3}+n_{4})m_{5}}
Manning's n considers a vegetation correction factor. Even stream beds with minimal vegetation will have flow resistance.
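A minimal sketch of the Cowan composition is shown below; the individual correction values are illustrative guesses for a small gravel-bed stream with light vegetation, not tabulated Cowan values.

```python
# Cowan composition for Manning's n: n = (n0 + n1 + n2 + n3 + n4) * m5.
# All component values below are illustrative assumptions.

def cowan_n(n0, n1, n2, n3, n4, m5):
    """Composite Manning's n from the five Cowan components and meander factor."""
    return (n0 + n1 + n2 + n3 + n4) * m5

n = cowan_n(
    n0=0.028,  # base value for the bed material
    n1=0.005,  # surface irregularity
    n2=0.002,  # variation in channel cross-section
    n3=0.010,  # effect of obstructions
    n4=0.010,  # vegetation correction factor
    m5=1.0,    # meandering correction
)
print(n)  # 0.055
```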
Vegetation growing in the stream bed and channel helps bind sediment and reduce erosion; a high root density results in a reinforced stream channel.
Vegetation-sediment interaction. Vegetation that gets caught in the middle of a stream will disrupt flow and lead to sedimentation in the resulting low velocity eddies . As the sedimentation continues, the island grows, and flow is further impacted.
Vegetation-vegetation interaction. Build-up of vegetation carried by streams eventually cuts off flow completely to side or main channels of a stream. When these channels are closed, or opened in the case of a breach , the flow characteristics of the stream are disrupted.
In fluid dynamics , two types of stream function (or streamfunction ) are defined: the two-dimensional (or Lagrange) stream function, used for incompressible plane flows, and the Stokes stream function , used for incompressible axisymmetric three-dimensional flows.
The properties of stream functions make them useful for analyzing and graphically illustrating flows.
The remainder of this article describes the two-dimensional stream function.
The two-dimensional stream function is based on the following assumptions:
Although in principle the stream function doesn't require the use of a particular coordinate system, for convenience the description presented here uses a right-handed Cartesian coordinate system with coordinates ( x , y , z ) {\displaystyle (x,y,z)} .
Consider two points A {\displaystyle A} and P {\displaystyle P} in the x y {\displaystyle xy} plane, and a continuous curve A P {\displaystyle AP} , also in the x y {\displaystyle xy} plane, that connects them. Then every point on the curve A P {\displaystyle AP} has z {\displaystyle z} coordinate z = 0 {\displaystyle z=0} . Let the total length of the curve A P {\displaystyle AP} be L {\displaystyle L} .
Suppose a ribbon-shaped surface is created by extending the curve A P {\displaystyle AP} upward to the horizontal plane z = b {\displaystyle z=b} ( b > 0 ) {\displaystyle (b>0)} , where b {\displaystyle b} is the thickness of the flow. Then the surface has length L {\displaystyle L} , width b {\displaystyle b} , and area b L {\displaystyle b\,L} . Call this the test surface .
The total volumetric flux through the test surface is
where s {\displaystyle s} is an arc-length parameter defined on the curve A P {\displaystyle AP} , with s = 0 {\displaystyle s=0} at the point A {\displaystyle A} and s = L {\displaystyle s=L} at the point P {\displaystyle P} .
Here n ^ {\displaystyle {\hat {\mathbf {n} }}} is the unit vector perpendicular to the test surface, i.e.,
where R {\displaystyle R} is the 3 × 3 {\displaystyle 3\times 3} rotation matrix corresponding to a 90 ∘ {\displaystyle 90^{\circ }} anticlockwise rotation about the positive z {\displaystyle z} axis:
The integrand in the expression for Q {\displaystyle Q} is independent of z {\displaystyle z} , so the outer integral can be evaluated to yield
Lamb and Batchelor define the stream function ψ {\displaystyle \psi } as follows. [ 3 ]
Using the expression derived above for the total volumetric flux, Q {\displaystyle Q} , this can be written as
In words, the stream function ψ {\displaystyle \psi } is the volumetric flux through the test surface per unit thickness, where thickness is measured perpendicular to the plane of flow.
The point A {\displaystyle A} is a reference point that defines where the stream function is identically zero. Its position is chosen more or less arbitrarily and, once chosen, typically remains fixed.
An infinitesimal shift d P = ( d x , d y ) {\displaystyle \mathrm {d} P=(\mathrm {d} x,\mathrm {d} y)} in the position of point P {\displaystyle P} results in the following change of the stream function:
From the exact differential
so the flow velocity components in relation to the stream function ψ {\displaystyle \psi } must be
Notice that the stream function is linear in the velocity. Consequently if two incompressible flow fields are superimposed, then the stream function of the resultant flow field is the algebraic sum of the stream functions of the two original fields.
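The linearity and the velocity relations can be checked symbolically. The sketch below assumes the sign convention u = ∂ψ/∂y, v = −∂ψ/∂x associated with the Lamb–Batchelor definition used above; the example stream functions (a uniform flow plus a stagnation-point flow) are illustrative choices, not examples from the text.

```python
# Symbolic check of the velocity relations and of superposition, assuming the
# convention u = dpsi/dy, v = -dpsi/dx (the alternative meteorological
# definition flips both signs).

import sympy as sp

x, y, U, k = sp.symbols('x y U k', real=True)

psi1 = U * y        # uniform flow in the +x direction
psi2 = k * x * y    # stagnation-point flow
psi = psi1 + psi2   # superposition: stream functions add linearly

u = sp.diff(psi, y)    # u = dpsi/dy
v = -sp.diff(psi, x)   # v = -dpsi/dx

print(u, v)                                        # U + k*x, -k*y
print(sp.simplify(sp.diff(u, x) + sp.diff(v, y)))  # 0 -> flow is incompressible
```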
Consider a shift in the position of the reference point, say from A {\displaystyle A} to A ′ {\displaystyle A'} . Let ψ ′ {\displaystyle \psi '} denote the stream function relative to the shifted reference point A ′ {\displaystyle A'} :
Then the stream function is shifted by
which implies the following:
The velocity u {\displaystyle \mathbf {u} } can be expressed in terms of the stream function ψ {\displaystyle \psi } as
where R {\displaystyle R} is the 3 × 3 {\displaystyle 3\times 3} rotation matrix corresponding to a 90 ∘ {\displaystyle 90^{\circ }} anticlockwise rotation about the positive z {\displaystyle z} axis. Solving the above equation for ∇ ψ {\displaystyle \nabla \psi } produces the equivalent form
From these forms it is immediately evident that the vectors u {\displaystyle \mathbf {u} } and ∇ ψ {\displaystyle \nabla \psi } are perpendicular to each other and equal in magnitude.
Additionally, the compactness of the rotation form facilitates manipulations (e.g., see Condition of existence ).
In general, a divergence-free field like u {\displaystyle \mathbf {u} } , also known as a solenoidal vector field , can always be represented as the curl of some vector potential A {\displaystyle {\boldsymbol {A}}} :
The stream function ψ {\displaystyle \psi } can be understood as providing the strength of a vector potential that is directed perpendicular to the plane: [ 4 ]
in other words A = ψ z ^ {\displaystyle {\boldsymbol {A}}=\psi {\hat {\mathbf {z} }}} , where z ^ {\displaystyle {\hat {\mathbf {z} }}} is the unit vector pointing in the positive z {\displaystyle z} direction.
This can also be written as the vector cross product
where we've used the vector calculus identity
Noting that z ^ = ∇ z {\displaystyle {\hat {\mathbf {z} }}=\nabla z} , and defining ϕ = z {\displaystyle \phi =z} , one can express the velocity field as
This form shows that the level surfaces of ψ {\displaystyle \psi } and the level surfaces of z {\displaystyle z} (i.e., horizontal planes) form a system of orthogonal stream surfaces .
An alternative definition, sometimes used in meteorology and oceanography , is
In two-dimensional plane flow, the vorticity vector, defined as ω = ∇ × u {\displaystyle {\boldsymbol {\omega }}=\nabla \times \mathbf {u} } , reduces to ω z ^ {\displaystyle \omega \,{\hat {\mathbf {z} }}} , where
or
These are forms of Poisson's equation .
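Under the same sign convention as the sketch above, the relation reads ω = −∇²ψ (the sign flips with the alternative definition of ψ). The following symbolic check uses solid-body rotation as an illustrative flow, not one discussed in the text.

```python
# Verify omega = -laplacian(psi) for solid-body rotation, assuming the
# convention u = dpsi/dy, v = -dpsi/dx used in the previous sketch.

import sympy as sp

x, y, Om = sp.symbols('x y Omega', real=True)

psi = -Om * (x**2 + y**2) / 2   # stream function of solid-body rotation
u = sp.diff(psi, y)             # u = -Omega*y
v = -sp.diff(psi, x)            # v =  Omega*x

omega = sp.diff(v, x) - sp.diff(u, y)                         # z-component of curl(u)
print(sp.simplify(omega))                                     # 2*Omega
print(sp.simplify(-sp.diff(psi, x, 2) - sp.diff(psi, y, 2)))  # also 2*Omega
```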
Consider two-dimensional plane flow with two infinitesimally close points P = ( x , y , z ) {\displaystyle P=(x,y,z)} and P ′ = ( x + d x , y + d y , z ) {\displaystyle P'=(x+dx,y+dy,z)} lying in the same horizontal plane. From calculus, the corresponding infinitesimal difference between the values of the stream function at the two points is
Suppose ψ {\displaystyle \psi } takes the same value, say C {\displaystyle C} , at the two points P {\displaystyle P} and P ′ {\displaystyle P'} . Then this gives
implying that the vector ∇ ψ {\displaystyle \nabla \psi } is normal to the surface ψ = C {\displaystyle \psi =C} . Because u ⋅ ∇ ψ = 0 {\displaystyle \mathbf {u} \cdot \nabla \psi =0} everywhere (e.g., see In terms of vector rotation ), each streamline corresponds to the intersection of a particular stream surface and a particular horizontal plane. Consequently, in three dimensions, unambiguous identification of any particular streamline requires that one specify corresponding values of both the stream function and the elevation ( z {\displaystyle z} coordinate).
The development here assumes the space domain is three-dimensional. The concept of stream function can also be developed in the context of a two-dimensional space domain. In that case level sets of the stream function are curves rather than surfaces, and streamlines are level curves of the stream function. Consequently, in two dimensions, unambiguous identification of any particular streamline requires that one specify the corresponding value of the stream function only.
It's straightforward to show that for two-dimensional plane flow u {\displaystyle \mathbf {u} } satisfies the curl-divergence equation
where R {\displaystyle R} is the 3 × 3 {\displaystyle 3\times 3} rotation matrix corresponding to a 90 ∘ {\displaystyle 90^{\circ }} anticlockwise rotation about the positive z {\displaystyle z} axis. This equation holds regardless of whether or not the flow is incompressible.
If the flow is incompressible (i.e., ∇ ⋅ u = 0 {\displaystyle \nabla \cdot \mathbf {u} =0} ), then the curl-divergence equation gives
Then by Stokes' theorem the line integral of R u {\displaystyle R\,\mathbf {u} } over every closed loop vanishes
Hence, the line integral of R u {\displaystyle R\,\mathbf {u} } is path-independent. Finally, by the converse of the gradient theorem , a scalar function ψ ( x , y , t ) {\displaystyle \psi (x,y,t)} exists such that
Here ψ {\displaystyle \psi } represents the stream function.
Conversely, if the stream function exists, then R u = ∇ ψ {\displaystyle R\,\mathbf {u} =\nabla \psi } . Substituting this result into the curl-divergence equation yields ∇ ⋅ u = 0 {\displaystyle \nabla \cdot \mathbf {u} =0} (i.e., the flow is incompressible).
In summary, the stream function for two-dimensional plane flow exists if and only if the flow is incompressible.
For two-dimensional potential flow , streamlines are perpendicular to equipotential lines. Taken together with the velocity potential , the stream function may be used to derive a complex potential. In other words, the stream function accounts for the solenoidal part of a two-dimensional Helmholtz decomposition , while the velocity potential accounts for the irrotational part.
The basic properties of two-dimensional stream functions can be summarized as follows:
If the fluid density is time-invariant at all points within the flow, i.e.,
then the continuity equation (e.g., see Continuity equation#Fluid dynamics ) for two-dimensional plane flow becomes
In this case the stream function ψ {\displaystyle \psi } is defined such that
and represents the mass flux (rather than volumetric flux) per unit thickness through the test surface. | https://en.wikipedia.org/wiki/Stream_function |
Stream gradient (or stream slope ) is the grade (or slope) of a stream . It is measured by the ratio of drop in elevation to horizontal distance. [ 1 ] It is a dimensionless quantity , usually expressed in units of meters per kilometer (m/km) or feet per mile (ft/mi); it may also be expressed in percent (%).
The world average river reach slope is 2.6 m/km, or 0.26%; [ 2 ] slopes smaller than 1% are considered gentle and those greater than 4% steep. [ 3 ]
Stream gradient may change along the stream course.
An average gradient can be defined, known as the relief ratio , which gives the average drop in elevation per unit length of river. [ 4 ] The calculation is the difference in elevation between the river's source and the river terminus ( confluence or mouth ) divided by the total length of the river or stream.
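As a worked example of the relief ratio, the sketch below uses round, illustrative elevation and length figures; the function name is hypothetical.

```python
# Relief ratio: total drop in elevation divided by total stream length.
# The elevation and length figures below are illustrative assumptions.

def relief_ratio(source_elev_m, mouth_elev_m, length_km):
    """Average gradient in metres per kilometre."""
    return (source_elev_m - mouth_elev_m) / length_km

# A river falling from 2300 m to sea level over 1230 km averages ~1.9 m/km.
print(relief_ratio(source_elev_m=2300.0, mouth_elev_m=0.0, length_km=1230.0))
```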
A high gradient indicates a steep slope and rapid flow of water (i.e. more ability to erode), whereas a low gradient indicates a more nearly level stream bed and sluggishly moving water that may be able to carry only small amounts of very fine sediment . High-gradient streams tend to have steep, narrow V-shaped valleys , and are referred to as young streams. Low-gradient streams have wider and less rugged valleys , with a tendency for the stream to meander . Many rivers show, to some extent, a flattening of the gradient as they approach their terminus at sea level.
A stream that flows upon a uniformly erodible substrate will tend to have a steep gradient near its source, and a low gradient nearing zero as it reaches its base level . Of course, a uniform substrate would be rare in nature; hard layers of rock along the way may establish a temporary base level, followed by a high gradient, or even a waterfall , as softer materials are encountered below the hard layer.
Human dams , glaciation , changes in sea level , and many other factors can also change the "normal" or natural gradient pattern.
On topographic maps , stream gradient can be easily approximated if the scale of the map and the contour intervals are known. Contour lines form a V-shape on the map, pointing upstream. By counting the number of lines that cross a certain segment of a stream, multiplying this by the contour interval, and dividing that quantity by the length of the stream segment, one obtains an approximation to the stream gradient.
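The contour-counting approximation can be written as a short calculation. The map values below (a 20 m contour interval and a 2 km stream segment) are illustrative assumptions, expressed in metric units; the next paragraph gives the customary version in feet.

```python
# Map-based approximation described above: contour crossings multiplied by the
# contour interval, divided by the segment length.  Values are illustrative.

def stream_gradient_m_per_km(contours_crossed, contour_interval_m, segment_length_km):
    rise_m = contours_crossed * contour_interval_m
    return rise_m / segment_length_km

# Five 20 m contour lines crossed along a 2 km stream segment:
print(stream_gradient_m_per_km(5, 20.0, 2.0))   # 50.0 m/km, i.e. 5 %, a steep reach
```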
Because stream gradient is customarily given in feet per 1000 feet, one should then measure the amount a stream segment rises and the length of the stream segment in feet, then multiply feet per foot gradient by 1000. For example, if one measures a scale mile along the stream length, and counts three contour lines crossed on a map with ten-foot contours, the gradient is approximately 5.7 feet per 1000 feet, a fairly steep gradient. | https://en.wikipedia.org/wiki/Stream_gradient |
Stream metabolism , often referred to as aquatic ecosystem metabolism in both freshwater (lakes, rivers, wetlands, streams, reservoirs) and marine ecosystems, includes gross primary productivity (GPP) and ecosystem respiration (ER) and can be expressed as net ecosystem production (NEP = GPP - ER). Analogous to metabolism within an individual organism , stream metabolism represents how energy is created ( primary production ) and used ( respiration ) within an aquatic ecosystem . In heterotrophic ecosystems, GPP:ER is <1 (ecosystem using more energy than it is creating); in autotrophic ecosystems it is >1 (ecosystem creating more energy than it is using). [1] Most streams are heterotrophic. [2] A heterotrophic ecosystem often means that allochthonous (coming from outside the ecosystem) inputs of organic matter , such as leaves or debris fuel ecosystem respiration rates, resulting in respiration greater than production within the ecosystem. However, autochthonous (coming from within the ecosystem) pathways also remain important to metabolism in heterotrophic ecosystems. In an autotrophic ecosystem, conversely, primary production (by algae , macrophytes ) exceeds respiration, meaning that ecosystem is producing more organic carbon than it is respiring.
Stream metabolism can be influenced by a variety of factors, including physical characteristics of the stream (slope, width, depth, and speed/volume of flow), biotic characteristics of the stream (abundance and diversity of organisms ranging from bacteria to fish ), light and nutrient availability to fuel primary production, organic matter to fuel respiration, water chemistry and temperature, and natural or human-caused disturbance, such as dams, removal of riparian vegetation , nutrient pollution , wildfire or flooding .
Measuring stream metabolic state is important to understand how disturbance may change the available primary productivity, and whether and how that increase or decrease in NEP influences foodweb dynamics , allochthonous/autochthonous pathways, and trophic interactions. Metabolism (encompassing both ER and GPP) must be measured rather than primary productivity alone, because simply measuring primary productivity does not indicate excess production available for higher trophic levels. One commonly used method for determining the metabolic state of an aquatic system is to track daily changes in dissolved oxygen concentration, from which GPP, ER, and net daily metabolism can be estimated.
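The bookkeeping described above can be sketched in a few lines; the sample GPP and ER values are illustrative assumptions, with units such as grams of O₂ per square metre per day implied.

```python
# NEP = GPP - ER, with the GPP:ER ratio used to label the metabolic state.
# Sample numbers are illustrative assumptions.

def net_ecosystem_production(gpp, er):
    """NEP = GPP - ER (e.g. in g O2 per m^2 per day)."""
    return gpp - er

def metabolic_state(gpp, er):
    """Autotrophic if GPP:ER > 1, heterotrophic if < 1."""
    ratio = gpp / er
    if ratio > 1:
        return "autotrophic"
    if ratio < 1:
        return "heterotrophic"
    return "balanced"

print(net_ecosystem_production(gpp=4.2, er=6.0))  # -1.8 -> net heterotrophy
print(metabolic_state(gpp=4.2, er=6.0))           # heterotrophic
```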
Disturbances can affect trophic relationships in a variety of ways, such as simplifying foodwebs , causing trophic cascades , and shifting carbon sources and major pathways of energy flow (Power et al. 1985, Power et al. 2008). Part of understanding how disturbance will impact trophic dynamics lies in understanding disturbance impacts to stream metabolism (Holtgrieve et al. 2010). For example, in Alaska streams, disturbance of the benthos by spawning salmon caused distinct changes in stream metabolism; autotrophic streams became net heterotrophic during the spawning run, then reverted to autotrophy after the spawning season (Holtgrieve and Schindler 2011). There is evidence that this seasonal disturbance impacts trophic dynamics of benthic invertebrates and in turn their vertebrate predators (Holtgrieve and Schindler 2011, Moore and Schindler 2008). Wildfire disturbance may have similar metabolic and trophic impacts in streams. | https://en.wikipedia.org/wiki/Stream_metabolism |
The stream order or waterbody order is a positive whole number used in geomorphology and hydrology to indicate the level of branching in a river system .
There are various approaches [ 1 ] to the topological ordering of rivers or sections of rivers based on their distance from the source ("top down" [ 2 ] ) or from the confluence (the point where two rivers merge) or river mouth ("bottom up" [ 3 ] ), and their hierarchical position within the river system. As terminology, the words "stream" and "branch" tend to be used rather than "river".
The classic stream order , also called Hack's stream order or Gravelius' stream order , is a "bottom up" hierarchy that allocates the number "1" to the river with its mouth at the sea (the main stem ). Stream order is an important aspect of a drainage basin. It is defined as the measure of the position of a stream in the hierarchy of streams. Tributaries are given a number one greater than that of the river or stream into which they discharge. So, for example, all immediate tributaries of the main stem are given the number "2". Tributaries emptying into a "2" are given the number "3" and so on. [ 4 ]
This type of stream order indicates the river's place in the network. It is suitable for general cartographic purposes, but can pose problems because at each confluence, a decision must be made about which of the two branches is a continuation of the main channel, and whether the main channel has its source at the confluence of two other smaller streams. The first order stream is the one which, at each confluence, has the greatest volumetric flow, usually reflecting the long-standing naming of rivers. Associated with this stream order system was the quest by geographers of the 19th century to find the "true" source of a river. In the course of this work, other criteria were discussed to enable the main stream to be defined. In addition to measuring the length of rivers (the distance between the farthest source and the mouth) and the size of the various catchments , geographers searched for the stream which deviated least at the actual confluence, as well as taking into account the successive names of rivers and their tributaries, such as the Rhine and the Aare or the Elbe and the Vltava .
According to the "top down" system devised by Arthur Newell Strahler , rivers of the first order are the outermost tributaries. If two streams of the same order merge, the resulting stream is given a number that is one higher. If two rivers with different stream orders merge, the resulting stream is given the higher of the two numbers. [ 5 ] [ 6 ]
The Strahler order is designed to reflect the morphology of a catchment and forms the basis of important hydrographical indicators of its structure, such as its bifurcation ratio, drainage density and frequency. Its basis is the watershed line of the catchment. It is, however, scale-dependent. The larger the map scale , the more orders of stream may be revealed. A general lower boundary for the definition of a "stream" may be set by defining its width at the mouth or, referencing a map, by limiting its extent. The system itself is also applicable for other small-scale structures outside of hydrology.
The Shreve system also gives the outermost tributaries the number "1". Unlike the Strahler method, at a confluence the two numbers are added together. [ 7 ]
Shreve stream order is preferred in hydrodynamics : it sums the number of sources in each catchment above a stream gauge or outflow, and correlates roughly to the discharge volumes and pollution levels. Like the Strahler method, it is dependent on the precision of the sources included, but less dependent on map scale. It can be made relatively scale-independent by using suitable normalization and is then largely independent of an exact knowledge of the upper and lower courses of an area. [ 7 ]
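The Strahler and Shreve rules can be expressed as a short recursive computation. The sketch below models a hypothetical network as a binary tree of confluences; it illustrates the ordering rules only and is not a GIS implementation.

```python
# Strahler and Shreve ordering on a hypothetical network.  A source segment is
# the string "src"; each confluence is a (left, right) pair of upstream branches.

def strahler(node):
    if node == "src":
        return 1
    left, right = strahler(node[0]), strahler(node[1])
    # Equal orders merge into the next order up; otherwise keep the larger.
    return left + 1 if left == right else max(left, right)

def shreve(node):
    if node == "src":
        return 1
    # Shreve magnitudes simply add at every confluence.
    return shreve(node[0]) + shreve(node[1])

# Two headwater pairs join, and the resulting stream then meets a single source.
network = ((("src", "src"), ("src", "src")), "src")
print(strahler(network))  # 3
print(shreve(network))    # 5
```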
Other systems include the Horton stream order, an early top down system devised by Robert E. Horton , [ 8 ] and the topological stream order system, which is "a bottom up" system, and where the stream order number increases by one at every confluence. [ 4 ]
Classical and topological ordering systems assign a dimensionless numerical order of "one" to the stream segment at the mouth, which is the lowest-elevation point. The order then increases as the network is traced upstream and converges with other, smaller streams, so that higher order numbers correspond to more highly elevated headwaters.
Horton proposed to establish a reversal of that order. Horton's 1947 research report established a stream ordering method based on vector geometry. In 1952, Arthur Strahler proposed a modification to Horton's method. Both Horton's and Strahler's methods established the assignment of the lowest order, number 1, starting at the river's headwater, which is the highest elevation point. Classical order number assignment correlates to height and elevation and traces upstream, but Horton and Strahler's stream ordering methods correlate to gravity flow and trace downstream.
Both Horton's and Strahler's stream ordering methods rely on principles of vector point-line geometry. Horton's and Strahler's rules form the basis of programming algorithms that interpret map data as queried by Geographic Information Systems .
The classic use of stream order is in general hydrological cartography. Stream order systems are also important for the systematic mapping of a river system, enabling the clear labelling and ordering of streams.
The Strahler and Shreve methods are particularly valuable for the modelling and morphometric analysis of river systems, because they define each section of a river. That allows the network to be separated at each gauge or outflow into upstream and downstream regimes, and for these points to be classified. These systems are also used as a basis for modelling the water budget using storage models or time-related, precipitation-outflow models and the like.
In the GIS-based earth sciences these two models are used because they show the graphical extent of a river object. Hack, Strahler and Shreve order can be computed by RivEX , an ESRI ArcGIS Pro 3.3.x tool.
Research activity following Strahler's 1952 report has focused on solving some challenges in converting two-dimensional maps into three-dimensional vector models. One challenge has been to convert rasterized pixel images of streams into vector format. Another problem is that map scaling adjustments when using GIS may alter the stream classification by one or two orders. Depending on the scale of the GIS map, some fine detail of the tree structure of a river system can be lost.
Research efforts by private industry, universities and federal government agencies such as the EPA and USGS have combined resources and aligned focus to study these and other challenges. The principal intent is to standardize software and programming rules so GIS data is consistently reliable at any map scale. To this end, both the EPA and USGS have spearheaded standardization efforts, culminating in the creation of The National Map . Both federal agencies, as well as leading private industry software companies have adopted Horton's and Strahler's stream order vector principles as the basis for coding logic rules built into the standardized National Map software. | https://en.wikipedia.org/wiki/Stream_order |
In hydrology , a stream pool is a stretch of a river or stream in which the water depth is above average and the water velocity is below average. [ 1 ]
A stream pool may be bedded with sediment or armoured with gravel , and in some cases the pool formations may have been formed as basins in exposed bedrock formations. Plunge pools , or plunge basins, are stream pools formed by the action of waterfalls . Pools are often formed on the outside of a bend in a meandering river. [ 2 ]
The depth and lack of water velocity often lead to stratification in stream pools, especially in warmer regions. In warm arid regions of the Western United States, surface waters have been found to be 3–9 °C warmer than those at the bottom. [ 3 ]
This portion of a stream often provides a specialized aquatic ecosystem habitat for organisms that have difficulty feeding or navigating in swifter reaches of the stream or in seasonally warmer water. Such pools can be important fish habitat, especially where many streams reach high summer temperatures and very low-flow dry season characteristics. In warm and arid regions, the stratification of stream pools provide cooler water for fish that prefer low water temperatures, such as the redband trout ( Oncorhynchus mykiss ) in the Western United States . [ 4 ] Mosquito larvae , which prefer still and often stagnant water , can be found in stream pools due to the low water velocity. [ 5 ] | https://en.wikipedia.org/wiki/Stream_pool |
Streams are important as conduits in the water cycle , instruments in groundwater recharge , and corridors for fish and wildlife migration. The biological habitat in the immediate vicinity of a stream is called a riparian zone . Given the status of the ongoing Holocene extinction , streams play an important corridor role in connecting fragmented habitats and thus in conserving biodiversity . The study of streams and waterways in general is known as surface hydrology and is a core element of environmental geography . [ 5 ]
A brook is a stream smaller than a creek, especially one that is fed by a spring or seep . It is usually small and easily forded . A brook is characterised by its shallowness.
A creek ( / k r iː k / ) or crick ( / k r ɪ k / ): [ 6 ] [ 7 ]
In hydrography, gut is a small creek; [ 15 ] this is seen in proper names in eastern North America from the Mid-Atlantic states (for instance, The Gut in Pennsylvania, Ash Gut in Delaware, [ 16 ] and other streams) [ 17 ] down into the Caribbean (for instance, Guinea Gut , Fish Bay Gut , Cob Gut , Battery Gut and other rivers and streams in the United States Virgin Islands , in Jamaica (Sandy Gut, [ 18 ] Bens Gut River, [ 19 ] White Gut River), and in many streams and creeks of the Dutch Caribbean ). [ 20 ]
A river is a large natural stream that is much wider and deeper than a creek and not easily fordable, and may be a navigable waterway . [ 21 ]
The linear channel between the parallel ridges or bars on a shoreline beach or river floodplain, or between a bar and the shore. Also called a swale .
A tributary is a contributory stream to a larger stream, or a stream which does not reach a static body of water such as a lake , bay or ocean [ 22 ] but joins another river (a parent river). Sometimes also called a branch or fork. [ 23 ]
A distributary , or a distributary channel , is a stream that branches off and flows away from a main stream channel, and the phenomenon is known as river bifurcation . Distributaries are common features of river deltas , and are often found where a valleyed stream enters wide flatlands or approaches the coastal plains around a lake or an ocean . They can also occur inland, on alluvial fans , or where a tributary stream bifurcates as it nears its confluence with a larger stream. Common terms for individual river distributaries in English-speaking countries are arm and channel .
There are a number of regional names for a stream.
A stream's source depends on the surrounding landscape and its function within larger river networks. While perennial and intermittent streams are typically supplied by smaller upstream waters and groundwater, headwater and ephemeral streams often derive most of their water from precipitation in the form of rain and snow. [ 45 ] Most of this precipitated water re-enters the atmosphere by evaporation from soil and water bodies, or by the evapotranspiration of plants. Some of the water proceeds to sink into the earth by infiltration and becomes groundwater, much of which eventually enters streams. Some precipitated water is temporarily locked up in snow fields and glaciers , to be released later by evaporation or melting. The rest of the water flows off the land as runoff, the proportion of which varies according to many factors, such as wind, humidity, vegetation, rock types, and relief. This runoff starts as a thin film called sheet wash, combined with a network of tiny rills, together constituting sheet runoff; when this water is concentrated in a channel, a stream has its birth. Some creeks may start from ponds or lakes.
The streams typically derive most of their water from rain and snow precipitation. Most of this water re-enters the atmosphere either by evaporation from soil and water bodies, or by plant evapotranspiration. By infiltration some of the water sinks into the earth and becomes groundwater, much of which eventually enters streams. Most precipitated water is partially bottled up by evaporation or freezing in snow fields and glaciers. The majority of the water flows as a runoff from the ground; the proportion of this varies depending on several factors, such as climate, temperature, vegetation, types of rock, and relief. This runoff begins as a thin layer called sheet wash, combined with a network of tiny rills, which together form the sheet runoff; when this water is focused in a channel, a stream is born. Some rivers and streams may begin from lakes or ponds.
Freshwater's primary sources are precipitation and mountain snowmelt. However, rivers typically originate in the highlands, and are slowly created by the erosion of mountain snowmelt into lakes or rivers. Rivers usually flow from their source topographically, and erode as they pass until they reach the base stage of erosion.
Some scientists have proposed a critical support flow (CSD) concept and model to determine the hydrographic indicators of river sources in complex geographical areas. [ 46 ]
The source of a river or stream (its point of origin) can consist of lakes, swamps, springs, or glaciers. A typical river has several tributaries; each of these may be made up of several other smaller tributaries, so that together this stream and all its tributaries are called a drainage network. Although each tributary has its own source, international practice is to take the source farthest from the river mouth as the source of the entire river system, from which the most extended length of the river measured as the starting point is taken as the length of the whole river system, [ 47 ] and that furthest starting point is conventionally taken as the source of the whole river system. For example, the origin of the Nile River is the confluence of the White Nile and the Blue Nile, but the source of the whole river system is in its upper reaches. If there is no specific designation, "length of the Nile" refers to the "river length of the Nile system", rather than to the length of the Nile river from the point where it is formed by a confluence of tributaries. The Nile's source is often cited as Lake Victoria, but the lake has significant feeder rivers. The Kagera River, which flows into Lake Victoria near Bukoba's Tanzanian town [ clarification needed ] , is the longest feeder, though sources do not agree on which is the Kagera's longest tributary and therefore the Nile's most remote source itself. [ 48 ] [ 49 ]
To qualify as a stream, a body of water must be either recurring or perennial. Recurring (intermittent) streams have water in the channel for at least part of the year. A stream of the first order is a stream which does not have any other recurring or perennial stream feeding into it. When two first-order streams come together, they form a second-order stream. When two second-order streams come together, they form a third-order stream. Streams of lower order joining a higher order stream do not change the order of the higher stream.
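The ordering rule just described can be expressed as a short recursive computation over a network of tributaries. The following Python sketch is purely illustrative (the network and function name are hypothetical, not part of the source): a stream with no qualifying tributaries is first order, and the order only increases when two tributaries of the same highest order meet.

```python
def strahler_order(tributaries):
    """Strahler order of a stream given the streams that feed it.
    A stream with no recurring or perennial tributaries is first order; when
    two (or more) tributaries share the highest order, the order increases by
    one; a lower-order tributary joining does not change the order."""
    if not tributaries:
        return 1
    orders = [strahler_order(t) for t in tributaries]
    highest = max(orders)
    return highest + 1 if orders.count(highest) >= 2 else highest

# Hypothetical network: two first-order streams join (order 2), then another
# first-order stream joins the result (order stays 2).
headwater_a, headwater_b, headwater_c = [], [], []
second_order = [headwater_a, headwater_b]
main_stem = [second_order, headwater_c]
print(strahler_order(main_stem))   # -> 2
```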
The gradient of a stream is a critical factor in determining its character and is entirely determined by its base level of erosion. The base level of erosion is the point at which the stream either enters the ocean, a lake or pond, or enters a stretch in which it has a much lower gradient, and may be specifically applied to any particular stretch of a stream.
In geological terms, the stream will erode down through its bed to achieve the base level of erosion throughout its course. If this base level is low, then the stream will rapidly cut through underlying strata and have a steep gradient, and if the base level is relatively high, then the stream will form a flood plain and meander.
Typically, streams are said to have a particular elevation profile , beginning with steep gradients, no flood plain, and little shifting of channels, eventually evolving into streams with low gradients, wide flood plains, and extensive meanders. The initial stage is sometimes termed a "young" or "immature" stream, and the later state a "mature" or "old" stream.
Meanders are looping changes of direction of a stream caused by the erosion and deposition of bank materials. These are typically serpentine in form. Typically, over time the meanders gradually migrate downstream. If some resistant material slows or stops the downstream movement of a meander, a stream may erode through the neck between two legs of a meander to become temporarily straighter, leaving behind an arc-shaped body of water termed an oxbow lake or bayou . A flood may also cause a meander to be cut through in this way.
The stream load is defined as the solid matter carried by a stream. Streams can carry sediment, or alluvium. The amount of load it can carry (capacity) as well as the largest object it can carry (competence) are both dependent on the velocity of the stream.
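A frequently cited approximation in fluvial geomorphology, added here for illustration and not taken from the source text, is the so-called sixth-power law: the mass of the largest particle a stream can move varies roughly as the sixth power of the flow velocity, {\displaystyle m_{\max }\propto v^{6}} . Under that approximation, doubling the velocity increases competence by a factor of about 2^6 = 64, which is why even brief floods can move boulders that are immobile at normal flows.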
A perennial stream is one which flows continuously all year. [ 50 ] : 57 Some perennial streams may have year-round continuous flow only in segments of their stream beds during years of normal rainfall. [ 50 ] [ 51 ] Blue-line streams are perennial streams and are marked on topographic maps with a solid blue line.
The word "perennial" from the 1640s, meaning "evergreen," is established in Latin perennis, keeping the meaning as "everlasting all year round," per "over" plus annus "year." This has been proved since the 1670s by the "living years" in the sense of botany. The metaphorical sense of "enduring, eternal" originates from 1750. They are related to "perennial." See biennial for shifts in vowels. [ 52 ]
Perennial streams have one or more of these characteristics:
Absence of such characteristics supports classifying a stream as intermittent, "showing interruptions in time or space". [ 53 ]
Generally, streams that flow only during and immediately after precipitation are termed ephemeral . There is no clear demarcation between surface runoff and an ephemeral stream, [ 50 ] : 58 and some ephemeral streams can be classed as intermittent: flow all but disappears in the normal course of the seasons, but ample flow or backed-up water restores the stream's presence. Such circumstances are documented where stream beds have opened up a path into mines or other underground chambers. [ 54 ]
According to official U.S. definitions, the channels of intermittent streams are well-defined, [ 55 ] as opposed to ephemeral streams, which may or may not have a defined channel, and rely mainly on storm runoff, as their aquatic bed is above the water table . [ 56 ] An ephemeral stream does not have the biological, hydrological, and physical characteristics of a continuous or intermittent stream. [ 56 ] The same non-perennial channel might change characteristics from intermittent to ephemeral over its course. [ 56 ]
In the United States, an intermittent or seasonal stream is one that flows for only part of the year and is marked on topographic maps with a line of blue dashes and dots. [ 50 ] : 57–58 A wash , desert wash, or arroyo is a normally dry streambed in the deserts of the American Southwest that flows after sufficient rainfall. Washes can fill up quickly during rains, and there may be a sudden torrent of water after a thunderstorm begins upstream, such as during monsoonal conditions.
In Italy, an intermittent stream is termed a torrent ( Italian : torrente ). In full flood the stream may or may not be "torrential" in the dramatic sense of the word, but there will be one or more seasons in which the flow is reduced to a trickle or less. Typically torrents have Apennine rather than Alpine sources, and in the summer they are fed by little precipitation and no melting snow. In this case the maximum discharge will be during the spring and autumn.
An intermittent stream can also be called a winterbourne in Britain, a wadi in the Arabic -speaking world or torrente or rambla (this last one from Arabic origin) in Spain and Latin America. In Australia, an intermittent stream is usually called a creek and marked on topographic maps with a solid blue line. [ citation needed ]
There are five generic classifications:
"Macroinvertebrate" refers to easily seen invertebrates , larger than 0.5 mm, found in stream and river bottoms. [ 59 ] Macroinvertebrates are larval stages of most aquatic insects and their presence is a good indicator that the stream is perennial. Larvae of caddisflies , mayflies , stoneflies , and damselflies [ 60 ] require a continuous aquatic habitat until they reach maturity. Crayfish and other crustaceans , snails , bivalves (clams), and aquatic worms also indicate the stream is perennial. These require a persistent aquatic environment for survival. [ 61 ]
Fish and amphibians are secondary indicators in assessment of a perennial stream because some fish and amphibians can inhabit areas without a persistent water regime. When assessing for fish, all available habitat should be assessed: pools, riffles, root clumps and other obstructions. Fish will seek cover if alerted to human presence, but should be easily observed in perennial streams. Amphibians also indicate a perennial stream and include tadpoles , frogs , salamanders , and newts . These amphibians can be found in stream channels, along stream banks, and even under rocks. Frogs and tadpoles usually inhabit shallow and slow moving waters near the sides of stream banks. Frogs will typically jump into water when alerted to human presence. [ 61 ]
Well-defined river beds composed of riffles, pools, runs, gravel bars, a bed armor layer, and other depositional features, together with well-defined banks shaped by bank erosion, are good identifiers when assessing for perennial streams. [ 62 ] Particle size also helps identify a perennial stream. Perennial streams cut through the soil profile, which removes fine and small particles. Relatively coarse material left behind in the stream bed, with finer sediments deposited along the sides of the stream or within the floodplain, is a good indicator of a persistent water regime. [ 60 ]
A perennial stream can be identified 48 hours after a storm, by which time direct storm runoff has usually ceased. If a stream is still flowing and no contributing inflow is observed above the channel, the observed water is likely baseflow. Another indication of a perennial stream is an abundance of red rust material in a slow-moving wetted channel or stagnant area. This is evidence that iron-oxidizing bacteria are present, indicating persistent expression of oxygen-depleted ground water. In a forested area, the absence of accumulated leaf and needle litter in the stream channel is an additional indicator, since such material is continuously flushed from perennial streams. In the adjacent overbank of a perennial stream, fine sediment may cling to riparian plant stems and tree trunks. Organic debris drift lines or piles may be found within the active overbank area after recent high flow. [ 60 ]
Streams, including headwaters and streams that flow for only part of the year, provide many benefits upstream and downstream. They buffer against floods, remove contaminants, recycle potentially harmful excess nutrients, and provide food and habitat for many kinds of fish. Such streams also play a vital role in preserving the quality and supply of drinking water, ensuring a steady flow of water to surface waters and helping to recharge deep aquifers.
The extent of land drained by a stream is termed its drainage basin (also known in North America as the watershed and, in British English, as a catchment). [ 64 ] A basin may also be composed of smaller basins. For instance, the Continental Divide in North America divides the mainly easterly-draining Atlantic Ocean and Arctic Ocean basins from the largely westerly-flowing Pacific Ocean basin. The Atlantic Ocean basin, however, may be further subdivided into the Atlantic Ocean and Gulf of Mexico drainages. (This delineation is termed the Eastern Continental Divide .) Similarly, the Gulf of Mexico basin may be divided into the Mississippi River basin and several smaller basins, such as the Tombigbee River basin. Continuing in this vein, a component of the Mississippi River basin is the Ohio River basin, which in turn includes the Kentucky River basin, and so forth.
Stream crossings are where streams are crossed by roads , pipelines , railways , or any other thing which might restrict the flow of the stream in ordinary or flood conditions. Any structure over or in a stream which results in limitations on the movement of fish or other ecological elements may be an issue. | https://en.wikipedia.org/wiki/Stream_profile |
Stream restoration or river restoration , also sometimes referred to as river reclamation , is work conducted to improve the environmental health of a river or stream , in support of biodiversity , recreation, flood management and/or landscape development. [ 1 ]
Stream restoration approaches can be divided into two broad categories: form-based restoration, which relies on physical interventions in a stream to improve its conditions; and process-based restoration, which advocates the restoration of hydrological and geomorphological processes (such as sediment transport or connectivity between the channel and the floodplain ) to ensure a stream's resilience and ecological health. [ 2 ] [ 3 ] Form-based restoration techniques include deflectors; cross-vanes; weirs, step-pools and other grade-control structures; engineered log jams; bank stabilization methods and other channel-reconfiguration efforts. These induce immediate change in a stream, but sometimes fail to achieve the desired effects if degradation originates at a wider scale. Process-based restoration includes restoring lateral or longitudinal connectivity of water and sediment fluxes and limiting interventions within a corridor defined based on the stream's hydrology and geomorphology. The beneficial effects of process-based restoration projects may sometimes take time to be felt since changes in the stream will occur at a pace that depends on the stream dynamics. [ 4 ]
Despite the significant number of stream-restoration projects worldwide, the effectiveness of stream restoration remains poorly quantified, partly due to insufficient monitoring . [ 5 ] [ 6 ] However, in response to growing environmental awareness, stream-restoration requirements are increasingly adopted in legislation in different parts of the world.
Stream restoration or river restoration, sometimes called river reclamation in the United Kingdom, is a set of activities that aim to improve the environmental health of a river or stream. These activities aim to restore rivers and streams to their original states or to a reference state, in support of biodiversity, recreation, flood management, landscape development, or a combination of these phenomena. [ 1 ] Stream restoration is generally associated with environmental restoration and ecological restoration . In that sense, stream restoration differs from:
Improved stream health may be indicated by expanded habitat for diverse species (e.g. fish, aquatic insects, other wildlife) and reduced stream bank erosion , although bank erosion is increasingly recognized as contributing to the ecological health of streams. [ 7 ] [ 8 ] [ 9 ] [ 10 ] Enhancements may also include improved water quality (i.e., reduction of pollutant levels and increase of dissolved oxygen levels) and achieving a self-sustaining, resilient stream system that does not require periodic human intervention, such as dredging or construction of flood or erosion control structures. [ 11 ] [ 12 ] Stream restoration projects can also yield increased property values in adjacent areas. [ 13 ]
In the past decades, stream restoration has emerged as a significant discipline in the field of water-resources management, due to the degradation of many aquatic and riparian ecosystems related to human activities. [ 14 ] In the U.S. alone, it was estimated in the early 2000s that more than one billion U.S. dollars were spent each year to restore rivers and that close to 40,000 restoration projects had been conducted in the continental part of the country. [ 15 ] [ 16 ]
Stream restoration activities may range from the simple improvement or removal of a structure that inhibits natural stream functions (e.g. repairing or replacing a culvert , [ 18 ] or removing barriers to fish passage such as weirs ), to the stabilization of stream banks , or other interventions such as riparian zone restoration or the installation of stormwater -management facilities like constructed wetlands . [ 19 ] The use of recycled water to augment stream flows that have been depleted as a result of human activities can also be considered a form of stream restoration. [ 20 ] When present, navigation locks have a potential to be operated as vertical slot fishways to restore fish passage to some extent for a wide range of fish, including poor swimmers. [ 21 ]
Stream-restoration projects normally begin with an assessment of a focal stream system, including climatic data, geology , watershed hydrology, stream hydraulics , sediment transport patterns, channel geometry, historical channel mobility, and flood records. [ 22 ] Numerous systems exist to classify streams according to their geomorphology. [ 23 ] This preliminary assessment helps in understanding the stream dynamics and determining the cause of the observed degradation to be addressed; it can also be used to determine the target state for the intended restoration work, especially since the "natural" or undisturbed state is sometimes no longer achievable due to various constraints. [ 3 ]
Two broad approaches to stream restoration have been defined in the past decades: form-based restoration and process-based restoration. Whereas the former focuses on the restoration of structural features and/or patterns considered to be characteristic of the target stream system, the latter is based on the restoration of hydrological and geomorphological processes (such as sediment transport or connectivity between the channel and the floodplain) to ensure a stream's resilience and ecological health. [ 2 ]
Form-based stream restoration promotes the modification of a stream channel to improve stream conditions. [ 3 ] Targeted outcomes can include improved water quality, enhanced fish habitat and abundance, as well as increased bank and channel stability. [ 6 ] This approach is widely used worldwide, and is supported by various government agencies, including the United States Environmental Protection Agency (U.S. EPA). [ 2 ] [ 15 ]
Form-based restoration projects can be carried out at various scales, including the reach scale. They can include measures such as the installation of in-stream structures, bank stabilization and more significant channel reconfiguration efforts. Reconfiguration work may focus on channel shape (in terms of sinuosity and meander characteristics), cross-section or channel profile (slope along the channel bed). These alterations affect the dissipation of energy through a channel, which impacts flow velocity and turbulence , water-surface elevations, sediment transport, and scour, among other characteristics. [ 24 ]
Deflectors are generally wooden or rock structures installed at a bank toe and extending towards the center of a stream, in order to concentrate stream flow away from its banks. They can limit bank erosion and generate varying flow conditions in terms of depth and velocity, which can positively impact fish habitat. [ 25 ]
Cross-vanes are U-shaped structures made of boulders or logs , built across the channel to concentrate stream flow in the center of the channel and thereby reduce bank erosion. They do not impact channel capacity and provide other benefits such as improved habitat for aquatic species. Similar structures used to dissipate stream energy include W-weirs and J-hook vanes. [ 26 ]
These structures, which can be built with rocks or wood (logs or woody debris), gradually lower the elevation of the stream and dissipate flow energy, thereby reducing flow velocity. [ 7 ] They can help limit bed degradation. They cause water to accumulate upstream of them and create fast-flowing conditions downstream of them, which can improve fish habitat. However, they can limit fish passage if they are too high.
An emerging stream restoration technique is the installation of engineered log jams . [ 28 ] Because of channelization and removal of beaver dams and woody debris, many streams lack the hydraulic complexity that is necessary to maintain bank stabilization and healthy aquatic habitats. Reintroduction of large woody debris into streams is a method being trialled in streams such as Lagunitas Creek in Marin County, California [ 29 ] and Thornton Creek , in Seattle, Washington. Log jams add diversity to the water flow by creating riffles, pools, and temperature variations. Large wood pieces, both living [ 29 ] and dead, [ 30 ] play an important role in the long-term stability of engineered log jams. However, individual pieces of wood in log jams are rarely stable over long periods and are naturally transported downstream, where they can get trapped in further log jams, other stream features or human infrastructure, which can create nuisances for human uses. [ 30 ]
Bank stabilization is a common objective for stream-restoration projects, although bank erosion is generally viewed as favorable for the sustainability and diversity of aquatic and riparian habitats. [ 9 ] This technique may be employed where a stream reach is highly confined, or where infrastructure is threatened. [ 31 ]
Bank stabilization is achieved through the installation of riprap , gabions or through the use of revegetation and/or bioengineering methods, which rely on the use of live plants to build bank-stabilizing structures. As new plants sprout from the live branches, the roots anchor the soil and prevent erosion. [ 31 ] This makes bioengineering structures more natural and more adaptable to evolving conditions than "hard" engineering structures. Bioengineering structures include fascines , brush mattresses, brush layers, and vegetated geogrids. [ 32 ]
Channel reconfiguration involves the physical modification of the stream. Depending on the scale of a project, a channel's cross-section can be modified, and meanders can be constructed through earthworks to achieve the target stream morphology. In the U.S., such work is frequently based on the Natural Channel Design (NCD), a method developed in the 1990s. [ 33 ] [ 34 ] This method involves a classification of the stream to be restored based on parameters such as channel pattern and geometry, topography, slope, and bed material. This classification is followed by a design phase based on the NCD method, which includes 8 phases and 40 steps. The method relies on the construction of the desired morphology, and its stabilization with natural materials such as boulders and vegetation to limit erosion and channel mobility. [ 15 ]
Despite its popularity, form-based restoration has been criticized by the scientific community. Common criticisms are that the scale at which form-based restoration is carried out is often much smaller than the spatial and temporal scales of the processes that cause the observed problems, and that the target state is frequently influenced by the social conception of what a stream should look like and does not necessarily take into account the stream's geomorphological context (e.g., meandering rivers tend to be viewed as more "natural" and more beautiful, whereas local conditions sometimes favour other patterns such as braided rivers ). [ 2 ] [ 9 ] [ 15 ] [ 35 ] Numerous criticisms have also been directed at the NCD method by fluvial geomorphologists, who claim that the method is a "cookbook" approach sometimes used by practitioners who do not have sufficient knowledge of fluvial geomorphology, resulting in project failures. Another criticism is the importance given to channel stability in the NCD method (and in some other form-based restoration methods), which can limit the streams' alluvial dynamics and adaptability to evolving conditions. [ 15 ] [ 36 ] [ 37 ] The NCD method has been criticized for its improper application in the Washington, D.C. area to small-order , interior-forested, upper-headwater streams and wetlands, leading to loss of natural forest ecosystems. [ 38 ]
Contrary to form-based restoration, which consists of improving a stream's conditions by modifying its structure, process-based restoration focuses on restoring the hydrological and geomorphological processes (or functions) that contribute to the stream's alluvial and ecological dynamics. [ 2 ] [ 3 ] [ 6 ] This type of stream restoration has gained in popularity since the mid-1990s, as a more ecosystem-centered approach. [ 39 ] Process-based restoration includes restoring lateral connectivity (between the stream and its floodplain), longitudinal connectivity (along the stream) and water and/or sediment fluxes, which might be impacted by hydro-power dams, grade control structures, erosion control structures and flood protection structures. [ 2 ] Valley Floor Resetting epitomises process-based restoration by infilling the river channel and allowing the stream to carve its anastomosed channel anew, matching ' Stage Zero ' on the Stream Evolution Model. [ 40 ] In general, process-based restoration aims to maximize the resilience of the system and minimize maintenance requirements. [ 23 ] In some instances, form-based restoration methods might be coupled with process-based restoration to restore key structures and achieve quicker results while waiting for restored processes to ensure adequate conditions in the long term. [ 3 ]
The connectivity of streams to their adjacent floodplain along their entire length plays an important role in the equilibrium of the river system. Streams are shaped by the water and sediment fluxes from their watershed, and any alteration of these fluxes (either in quantity, intensity or timing) will result in changes in equilibrium planform and cross-sectional geometry, as well as modifications of the aquatic and riparian ecosystem. Removal or modification of levees can allow a better connection between streams and their floodplain. [ 2 ] Similarly, removing dams and grade control structures can restore water and sediment fluxes and result in more diversified habitats, although impacts on fish communities can be difficult to assess. [ 3 ]
In streams where existing infrastructures cannot be removed or modified, it is also possible to optimize sediment and water management in order to maximize connectivity and achieve flow patterns that ensure minimum ecosystem requirements. This can include releases from dams, but also delaying and/or treating water from agricultural and urban sources. [ 41 ] [ 42 ]
Another method of ensuring the ecological health of streams while limiting impacts on human infrastructures is to delineate a corridor within which the stream is expected to migrate over time. [ 10 ] [ 39 ] This method is based on the concept of minimum intervention within this corridor, whose limits should be determined based on the stream's hydrology and geomorphology. Although this concept is often restricted to the lateral mobility of streams (related to bank erosion), some systems also integrate the space necessary for floods of various return periods . [ 39 ] This concept has been developed and adapted in various countries around the world, resulting in the notion of "stream corridor" or "river corridor" in the U.S., [ 25 ] [ 43 ] [ 44 ] "room for the river" in the Netherlands, [ 45 ] [ 46 ] " espace de liberté " ("freedom space") in France [ 10 ] [ 47 ] (where the concept of "erodible corridor" is also used) and Québec (Canada), [ 39 ] [ 48 ] " espace réservé aux eaux " ("space reserved for water(courses)") in Switzerland, [ 49 ] [ 50 ] " fascia di pertinenza fluviale " in Italy, [ 51 ] "fluvial territory" in Spain [ 52 ] and "making space for water" in the United Kingdom. [ 10 ] A cost-benefit analysis has shown that this approach could be beneficial in the long term due to lower stream stabilization and maintenance costs, lower damages resulting from erosion and flooding, and ecological services rendered by the restored streams. [ 48 ] However, this approach cannot be implemented alone if watershed-scale stressors contribute to stream degradation. [ 43 ]
In addition to the aforementioned restoration approaches and methods, additional measures can be implemented if stream degradation factors occur at the watershed scale. First, high-quality areas should also be protected. Additional measures include revegetation / reforestation efforts (ideally with native species); the adoption of agricultural best management practices that minimize erosion and runoff ; adequate treatment of sewage water and industrial discharge across the watershed; and improved stormwater management to delay/minimize the transport of water to the stream and minimize pollutant migration. [ 2 ] [ 41 ] [ 42 ] Alternative stormwater management facilities include the following options:
In the 2000s, a study of stream restoration efforts in the U.S. led to the creation of the National River Restoration Science Synthesis (NRRSS) database, which included information on over 35,000 stream restoration projects carried out in the U.S. [ 16 ] Synthesizing efforts are also carried out in other parts of the world, such as Europe. [ 53 ] However, despite the large number of stream restoration projects carried out each year worldwide, the effectiveness of stream restoration projects remains poorly quantified. [ 15 ] This situation appears to result from limited data on the restored streams' biophysical and geochemical contexts, from insufficient post-restoration monitoring work and from the varying metrics used to evaluate project effectiveness. [ 5 ] [ 6 ] [ 23 ] Depending on the objectives of the restoration project, the goals (restoration of fish populations, of alluvial dynamics, etc.) may take considerable time to be fully achieved. Therefore, whereas monitoring efforts should be proportional to the scale of the situation to be addressed, long-term monitoring is often necessary in order to fully evaluate a project's effectiveness. [ 4 ] [ 23 ]
In general, project effectiveness has been found to be dependent on selection of an appropriate restoration method considering the nature, cause and scale of the degradation problem. As such, reach-scale projects generally fail at restoring conditions whose root cause lies at the watershed scale, such as water quality issues. [ 2 ] Furthermore, project failures have sometimes been attributed to design based on insufficient scientific bases; in some cases, restoration techniques may have been selected mainly for aesthetic reasons. [ 12 ] [ 54 ] Additional factors that can influence the effectiveness of river restoration projects include the selection of sites to be restored (for example, sites located near undisturbed reaches could be recolonized more effectively) and the amount of tree cutting and other destructive work necessary to carry out the restoration work (which can have long-lasting detrimental effects on the quality of the habitat). [ 55 ] Although often viewed as a challenge, public involvement is generally considered to be a positive factor for the long-term success of stream restoration projects. [ 3 ]
Stream restoration is gradually being introduced in the legislative framework of various states. Examples include the European water framework's commitment to restoring surface water bodies, [ 56 ] the adoption of the concept of freedom space in the French legislation, [ 10 ] the inclusion in the Swiss legislation of the notion of space reserved for watercourses and of the requirement to restore streams to a state close to their natural state, [ 49 ] and the inclusion of river corridors in land use planning in the American states of Vermont and Washington. [ 43 ] [ 44 ] Although this evolution is generally viewed positively by the scientific community, a concern expressed by some is that it could lead to less flexibility and less room for innovation in a field that is still in development. [ 2 ] [ 39 ]
The River Restoration Centre, based at Cranfield University , is responsible for the National River Restoration Inventory, which is used to document best practice in river watercourse and floodplain restoration, enhancement and management efforts in the United Kingdom. [ 57 ] Other established sources for information on stream restoration include the NRRSS in the U.S. [ 58 ] and the European Centre for River Restoration (ECRR), which holds details of projects across Europe. [ 59 ] ECRR and the LIFE+ RESTORE project have developed a wiki-based inventory of river restoration case studies. [ 53 ] | https://en.wikipedia.org/wiki/Stream_restoration |
In fluid dynamics , stream thrust averaging is a process used to convert three-dimensional flow through a duct into one-dimensional uniform flow. It assumes that the flow mixes adiabatically and without friction . However, due to the mixing process, there is a net increase in the entropy of the system. Although entropy increases, the stream thrust averaged values are more representative of the flow than a simple average, as a simple average would violate the second law of thermodynamics .
Stream thrust : {\displaystyle F=\int \left(\rho \mathbf {V} \cdot d\mathbf {A} \right)\left(\mathbf {V} \cdot {\hat {f}}\right)+\int p\,dA}
Mass flow : {\displaystyle {\dot {m}}=\int \rho \mathbf {V} \cdot d\mathbf {A} }
Stagnation enthalpy : {\displaystyle H={\frac {1}{\dot {m}}}\int \left(h+{\frac {\left|\mathbf {V} \right|^{2}}{2}}\right)\rho \mathbf {V} \cdot d\mathbf {A} }
Solving for U ¯ {\displaystyle {\overline {U}}} yields two solutions. They must both be analyzed to determine which is the physical solution. One will usually be a subsonic root and the other a supersonic root. If it is not clear which value of velocity is correct, the second law of thermodynamics may be applied.
Second law of thermodynamics: {\displaystyle \Delta s=c_{p}\ln \left({\frac {\overline {T}}{T_{1}}}\right)-R\ln \left({\frac {\overline {p}}{p_{1}}}\right)\geq 0}
The values T 1 {\displaystyle T_{1}} and p 1 {\displaystyle p_{1}} are unknown and may be dropped from the formulation, since the exact value of the entropy change is not needed; it is only required to be positive.
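The root selection can be illustrated numerically. The Python sketch below is a minimal illustration and not part of the source: it assumes a calorically perfect ideal gas with constant gas properties R and c_p, combines the one-dimensional stream thrust, mass flow and stagnation enthalpy relations with the ideal gas law to obtain a quadratic in the averaged velocity, and keeps the root whose entropy change relative to a representative (hypothetical) reference state is non-negative. All numerical values are made up but mutually consistent with a uniform duct flow of air at roughly 200 m/s.

```python
import math

R, CP = 287.0, 1004.5          # gas constant and specific heat for air [J/(kg K)], assumed
GAMMA = CP / (CP - R)          # ratio of specific heats (~1.4)

def averaged_velocity_candidates(F, mdot, H, A):
    """Both roots of the stream-thrust-averaging quadratic
    U^2 (1 - R/(2 cp)) - (F/mdot) U + R H / cp = 0,
    obtained from F = mdot*U + p*A, mdot = rho*U*A, H = cp*T + U^2/2 and p = rho*R*T."""
    a = 1.0 - R / (2.0 * CP)
    b = -F / mdot
    c = R * H / CP
    disc = math.sqrt(b * b - 4.0 * a * c)
    states = []
    for U in ((-b + disc) / (2.0 * a), (-b - disc) / (2.0 * a)):
        T = (H - 0.5 * U * U) / CP              # static temperature from stagnation enthalpy
        p = mdot * R * T / (U * A)              # static pressure from mass flow + ideal gas law
        mach = U / math.sqrt(GAMMA * R * T)
        states.append({"U": U, "T": T, "p": p, "M": mach})
    return states

def entropy_change(state, T_ref, p_ref):
    """Specific entropy change relative to a reference state (perfect gas)."""
    return CP * math.log(state["T"] / T_ref) - R * math.log(state["p"] / p_ref)

# Illustrative numbers, consistent with a uniform flow at ~200 m/s, 580 K, 200 kPa:
F, mdot, H, A = 124_020.0, 120.1, 602_700.0, 0.5
T_ref, p_ref = 580.0, 200_000.0                 # hypothetical reference (unmixed) state

candidates = averaged_velocity_candidates(F, mdot, H, A)
physical = [s for s in candidates if entropy_change(s, T_ref, p_ref) >= -1e-9]
print(physical)   # keeps the subsonic root (~200 m/s); the supersonic root gives ds < 0
```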
A solution that yields a negative entropy change is unphysical and can be discarded. Another method of determining the proper solution is to take a simple average of the velocity and determine which of the two roots is closer to it. | https://en.wikipedia.org/wiki/Stream_thrust_averaging |
Streamflow , or channel runoff , is the flow of water in streams and other channels , and is a major element of the water cycle . It is one component of runoff, the movement of water from the land to waterbodies ; the other component is surface runoff . Water flowing in channels comes from surface runoff from adjacent hillslopes, from groundwater flow out of the ground, and from water discharged from pipes. The discharge of water flowing in a channel is measured using stream gauges or can be estimated by the Manning equation . The record of flow over time is called a hydrograph . Flooding occurs when the volume of water exceeds the capacity of the channel.
Streams play a critical role in the hydrologic cycle that is essential for all life on Earth. A diversity of biological species, from unicellular organisms to vertebrates, depend on flowing-water systems for their habitat and food resources. Rivers are major aquatic landscapes for all manner of plants and animals. Rivers also help keep underground aquifers full of water by discharging water downward through their streambeds. In addition, the oceans stay full of water because rivers and runoff continually replenish them. [ 1 ] Streamflow is the main mechanism by which water moves from the land to the oceans or to basins of interior drainage .
Stream discharge is derived from four sources: channel precipitation, overland flow, interflow, and groundwater.
Rivers are always moving, which benefits the environment, as stagnant water does not stay fresh for long. There are many factors, both natural and human-induced, that cause rivers to change continuously: [ 3 ]
Natural mechanisms
Human-induced mechanisms
Streamflow is measured as an amount of water passing through a specific point over time. The units used in the United States are cubic feet per second , while in most other countries cubic meters per second are utilized. There are a variety of ways to measure the discharge of a stream or canal. A stream gauge provides continuous flow over time at one location for water resource and environmental management or other purposes. Streamflow values are better indicators than gage height of conditions along the whole river. Measurements of streamflow are made about every six weeks by United States Geological Survey (USGS) personnel. They wade into the stream to make the measurement or do so from a boat, bridge, or cableway over the stream. For each gaging station, a relation between gage height and streamflow is determined by simultaneous measurements of gage height and streamflow over the natural range of flows (from very low flows to floods). This relation provides the streamflow data from that station. [ 5 ] For purposes that do not require a continuous measurement of stream flow over time, current meters or acoustic Doppler velocity profilers can be used. For small streams—a few meters wide or smaller— weirs may be installed.
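Where no gauge record exists, a first estimate of discharge can be made with the Manning equation mentioned above. The sketch below is illustrative only: the channel geometry, slope and roughness coefficient are hypothetical values, and the SI form of the equation is used.

```python
def manning_discharge(area_m2, wetted_perimeter_m, slope, n):
    """Manning equation (SI form): Q = (1/n) * A * R^(2/3) * S^(1/2),
    where R = A / P is the hydraulic radius."""
    hydraulic_radius = area_m2 / wetted_perimeter_m
    return (1.0 / n) * area_m2 * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

# Hypothetical rectangular channel: 8 m wide, 1.2 m deep, slope 0.002, gravel bed (n ~ 0.035).
width, depth = 8.0, 1.2
area = width * depth                      # 9.6 m^2
perimeter = width + 2.0 * depth           # 10.4 m
print(manning_discharge(area, perimeter, slope=0.002, n=0.035))  # discharge in m^3/s (~11.6)
```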
One informal method that provides an approximation of the streamflow, termed the orange method or float method, is:
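The individual steps of the float method are not reproduced above; in its common form the method times a floating object (such as an orange) over a measured reach, converts that to a surface velocity, multiplies by an estimated cross-sectional area, and applies a correction factor (often taken to be about 0.85) because surface water moves faster than the depth-averaged flow. The following sketch and its numbers are purely illustrative.

```python
def float_method_discharge(reach_length_m, travel_time_s, cross_section_m2, correction=0.85):
    """Rough discharge estimate from a timed float: Q ~ k * A * (L / t)."""
    surface_velocity = reach_length_m / travel_time_s
    return correction * cross_section_m2 * surface_velocity

# Hypothetical measurement: an orange drifts 10 m in 25 s in a channel of ~1.5 m^2 cross-section.
print(float_method_discharge(10.0, 25.0, 1.5))   # ~0.51 m^3/s
```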
In the United States, streamflow gauges are funded primarily from state and local government funds. In fiscal year 2008, the USGS provided 35% of the funding for everyday operation and maintenance of gauges. [ 8 ] Additionally, USGS uses hydrographs to study streamflow in rivers. A hydrograph is a chart showing, most often, river stage (height of the water above an arbitrary altitude) and streamflow (amount of water, usually in cubic feet per second). Other properties, such as rainfall and water quality parameters can also be plotted. [ 9 ]
For most streams, especially those with small watersheds, no record of discharge is available. In that case, it is possible to make discharge estimates using the rational method or some modified version of it. However, if chronological records of discharge are available for a stream, a short-term forecast of discharge can be made for a given rainstorm using a hydrograph.
This method involves building a graph in which the discharge generated by a rainstorm of a given size is plotted over time, usually hours or days. It is called the unit hydrograph method because it addresses only the runoff produced by a particular rainstorm in a specified period of time—the time taken for a river to rise, peak, and fall in response to a storm.
Once a rainfall-runoff relationship is established, subsequent rainfall data can be used to forecast streamflow for selected storms, called standard storms. A standard rainstorm is a high-intensity storm of some known magnitude and frequency. One method of unit hydrograph analysis involves expressing the hour-by-hour or day-by-day increase in streamflow as a percentage of total runoff. Plotted on a graph, these data form the unit hydrograph for that storm, which represents the runoff added to the pre-storm baseflow.
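As a hedged sketch of how such a relationship is applied (the ordinates, rainfall depths and baseflow below are hypothetical, not from the source), the forecast hydrograph is obtained by scaling and lagging the unit hydrograph by each period's effective rainfall and adding the pre-storm baseflow:

```python
def storm_hydrograph(unit_hydrograph, effective_rainfall, baseflow=0.0):
    """Convolve hourly effective rainfall (depth units) with the unit hydrograph
    ordinates (discharge per unit depth) and add the pre-storm baseflow."""
    n = len(unit_hydrograph) + len(effective_rainfall) - 1
    runoff = [0.0] * n
    for i, rain in enumerate(effective_rainfall):
        for j, ordinate in enumerate(unit_hydrograph):
            runoff[i + j] += rain * ordinate
    return [q + baseflow for q in runoff]

# Hypothetical 1-hour unit hydrograph (m^3/s per cm of effective rain) and a 2-hour storm.
uh = [0.0, 2.0, 5.0, 3.0, 1.0, 0.0]
rain_cm = [1.5, 0.5]
print(storm_hydrograph(uh, rain_cm, baseflow=0.8))
```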
To forecast the flows in a large drainage basin using the unit hydrograph method would be difficult because in a large basin geographic conditions may vary significantly from one part of the basin to another. This is especially so with the distribution of rainfall because an individual rainstorm rarely covers the basin evenly. As a result, the basin does not respond as a unit to a given storm, making it difficult to construct a reliable hydrograph.
For large basins, where the unit hydrograph method might not be useful and reliable, the magnitude and frequency method is used to calculate the probability of recurrence of large flows based on records of past years' flows. In the United States, these records are maintained by the Hydrological Division of the USGS for large streams. For a basin with an area of 5,000 square miles or more, the river system is typically gauged at five to ten places.
The data from each gauging station apply to the part of the basin upstream of that location. Given several decades of peak annual discharges for a river, limited projections can be made to estimate the size of some large flow that has not been experienced during the period of record. The technique involves projecting the curve (graph line) formed when peak annual discharges are plotted against their respective recurrence intervals. However, in most cases the curve bends strongly, making it difficult to plot a projection accurately. This problem can be overcome by plotting the discharge and/or recurrence interval data on logarithmic graph paper. Once the plot is straightened, a line can be drawn through the points. A projection can then be made by extending the line beyond the points and then reading the appropriate discharge for the recurrence interval in question.
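A minimal numerical sketch of this magnitude-and-frequency projection (with hypothetical peak discharges) assigns recurrence intervals with the Weibull plotting position T = (n + 1) / m, straightens the plot by regressing discharge against the logarithm of T, and extends the fitted line to a rarer recurrence interval:

```python
import math

def flood_frequency_projection(annual_peaks, target_return_period):
    """Fit Q = a + b*log10(T) to ranked annual peak discharges and extrapolate."""
    peaks = sorted(annual_peaks, reverse=True)
    n = len(peaks)
    log_t = [math.log10((n + 1) / rank) for rank in range(1, n + 1)]  # Weibull plotting position
    mean_x, mean_y = sum(log_t) / n, sum(peaks) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(log_t, peaks))
         / sum((x - mean_x) ** 2 for x in log_t))
    a = mean_y - b * mean_x
    return a + b * math.log10(target_return_period)

# Hypothetical record of annual peak discharges (m^3/s):
record = [310, 190, 450, 260, 380, 220, 500, 290, 340, 410]
print(flood_frequency_projection(record, target_return_period=50))   # estimated 50-year flood
```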
Runoff of water in channels is responsible for transport of sediment , nutrients , and pollution downstream. Without streamflow, the water in a given watershed would not be able to naturally progress to its final destination in a lake or ocean. This would disrupt the ecosystem. Streamflow is one important route of water from the land to lakes and oceans. The other main routes are surface runoff (the flow of water from the land into nearby watercourses that occurs during precipitation and as a result of irrigation), flow of groundwater into surface waters, and the flow of water from constructed pipes and channels. [ 10 ]
Streamflow confers on society both benefits and hazards. Runoff downstream is a means to collect water for storage in dams for power generation or water abstraction. The flow of water assists transport downstream. A given watercourse has a maximum streamflow rate that its channel can accommodate, which can be calculated. If the streamflow exceeds this maximum rate, as happens when an excessive amount of water is present in the watercourse, the channel cannot handle all the water, and flooding occurs.
The 1993 Mississippi river flood , the largest ever recorded on the river, was a response to heavy, long-duration spring and summer rainfalls. Early rains saturated the soil over more than 300,000 square miles of the upper watershed, greatly reducing infiltration and leaving soils with little or no storage capacity. As rains continued, surface depressions, wetlands, ponds, ditches, and farm fields filled with overland flow and rainwater. With no remaining capacity to hold water, additional rainfall was forced from the land into tributary channels and thence to the Mississippi River . For more than a month, the total load of water from hundreds of tributaries exceeded the Mississippi's channel capacity, causing it to spill over its banks onto adjacent floodplains. Where the flood waters were artificially constricted by an engineered channel bordered by constructed levees and unable to spill onto large sections of floodplain, the flood levels were forced even higher. [ 11 ] | https://en.wikipedia.org/wiki/Streamflow |
Streaming data is data that is continuously generated by different sources. Such data should be processed incrementally using stream processing techniques without having access to all of the data. In addition, it should be considered that concept drift may happen in the data which means that the properties of the stream may change over time.
The term is usually used in the context of big data , in which data is generated by many different sources at high speed. [ 1 ]
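As an illustrative sketch, not tied to any particular streaming framework, incremental processing means that summary statistics are updated one record at a time rather than recomputed over the whole data set; an exponentially weighted average additionally discounts older records, which helps the estimate follow concept drift:

```python
def ewma_stream(records, alpha=0.05):
    """Update an exponentially weighted moving average one record at a time,
    without storing the stream; recent records dominate, so the estimate can
    follow concept drift."""
    estimate = None
    for value in records:
        estimate = value if estimate is None else (1 - alpha) * estimate + alpha * value
        yield estimate

# Hypothetical sensor stream whose mean drifts upward after the 5th reading.
readings = [10.2, 9.8, 10.1, 10.0, 9.9, 14.8, 15.1, 15.3, 14.9, 15.2]
for current in ewma_stream(readings):
    print(round(current, 2))
```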
Data streaming can also be explained as a technology used to deliver content to devices over the internet, allowing users to access the content immediately rather than having to wait for it to be downloaded. [ 2 ] Big data is forcing many organizations to focus on storage costs, which brings interest to data lakes and data streams . [ 3 ] A data lake refers to the storage of a large amount of unstructured and semi-structured data, and is useful due to the increase of big data as it can be stored in such a way that firms can dive into the data lake and pull out what they need at the moment they need it, [ 3 ] whereas a data stream can perform real-time analysis on streaming data, and it differs from data lakes in the speed and continuous nature of analysis, without having to store the data first. [ 3 ]
Before explaining the benefits of streaming data, it is important to understand the difference between digitization and digitalization . Digitization is the creation (encoding) of digital information (e.g., a file ) from analog information. When a digital camera takes a photo, the light entering its lens is the analog information , and the photo file is the digital representation of that light. The camera is digitizing the visual information. [ 4 ] [ 5 ] Digitalization describes mainly a socio-technical process, whereby a society or organization adopts digital forms for data storage. Converting analog information in a library (physical books) into digital formats ( eBook documents), for example, would be digitalization of that library. [ 5 ] Within the context of data streaming, media has been digitized since the early 1990s with the adoption of digital recordings , e.g., storing music and videos in digital forms , while the digitalization of media did not start until the beginning of the 21st century. [ 6 ]
The digital innovation management theories mention five characteristics of streaming data: homogenization and decoupling, modularity, connectivity, digital traces and programmability.
Homogenization and decoupling. “Because all digital information assumes the same form, it can, at least in principle, be processed by the same technologies. Consequently, digitizing has the potential to remove the tight couplings between information types and their storage, transmission, and processing technologies”. [ 7 ] Within the context of data streaming, this means in theory that one can now stream data from any digital device. It also reduces the demand for and use of music and films on CDs, for example. One of the consequences of homogenization and decoupling is the decline of marginal costs . [ 8 ] The marginal cost of data streaming is low because it solely uses digital information, which can be transmitted, stored, and computed in fast and low-cost ways. [ 8 ] An example of an industry that has low marginal costs due to data streaming is the music industry . Producers can now digitize songs and upload them to Spotify , instead of paying for the creation of physical albums and distributing them. Another consequence is convergent user experience, meaning that previously separated experiences are now brought together in one product. [ 8 ]
Data streaming is also modular, because system components may be separated and recombined, mainly for flexibility and variety. Data streaming works in different application versions and systems such as iOS . It is also possible to change the speed of data streaming. [ 9 ] A consequence of modularity is the creation of platforms. Data streaming platforms bring together analysis of information, but more importantly, they are able to integrate data between different sources (Myers, 2016). IBM Streams, for example, is an analytics platform that enables applications developed by users to gather, analyze and correlate information that comes to them from a variety of sources ( IBM ).
The third characteristic, connectivity, describes that a digital technology not only connects applications, devices and users but also connects customers and firms. Streaming services, for example, connect a vast collection of music and films from 'producers' with their consumers, which is how music on Spotify can easily reach a vast group of consumers. Another example is data from transport vehicles, which can be connected to firms with streaming applications via vehicle-to-roadside communications. [ 10 ] UPS, for example, does this to calculate optimal delivery routes by streaming real-time big data, thereby reducing the time to deliver packages.
Interoperability, which is the ability of a product or system to work with other products or systems, [ 8 ] is a consequence of connectivity. For instance, the music industry is interoperable, because some music platforms have integrated social media platforms. [ 11 ] Another consequence of connectivity is network externality. This means that the value of a good to a user increases with the number of other users (installed base) of the same or a similar good. [ 8 ] Data streaming technology can utilize network externalities, because it brings together supply and demand of large networks of creators and consumers. This is very much the case with Popcorn Time , a service where people can stream the latest movies on demand; these streams work better as more people use the service and share its content.
The latter has to do with the fact that anyone who streams content automatically also downloads and uploads content. While a streaming service is being used, it leaves digital traces, which simply describes the fact that all digital technologies leave a digital trace from the user. [ 8 ] In the past, when media was sold, the seller/provider only had information about the transaction itself. With data streaming it has become possible to actually track the behaviour of users, because it occurs in real time, directly from the distributors/providers. Morris and Powers [ 12 ] describe this as opening the 'black box' of consumption. Providers of streaming services, for example, are now able to track detailed consuming behavior of the user, which in turn they use to influence the user's decision-making process by creating algorithms to further develop a service. This kind of streaming has changed the way people consume media, which in time offered new possibilities for new ideas. [ 12 ] These are also referred to as wakes of innovation [ 8 ] and occur in places one would not initially expect. For instance, data streaming has enabled the development of sensors that are used in many sectors for different purposes. In the manufacturing sector, data streaming is used for real-time analysis to improve operations. In the healthcare sector, sensors are being used in connected medical devices to create hubs of patients and healthcare providers, which can trigger alerts when a patient has a medical emergency. [ 13 ]
Finally, programmability is a characteristic that describes that an innovative digital technology can be reprogrammed, improved and/or updated. [ 8 ] Consequences of programmability are emerging functionalities. The most applicable functionality is incompleteness, which means that products and services are never finished, [ 8 ] which is the case for data streaming because suppliers keep refreshing their models. [ 14 ] However, a more influential consequence of programmability, and also of connectivity, is the servitization of digital media content. Data streaming has caused a shift towards paying for use instead of paying for ownership. [ 8 ] [ 12 ] This is happening in the video and music streaming industries, for example Netflix and Spotify: users pay to use the service instead of owning a product. This contrasts with buying an album or DVD, whereas now it is possible to access thousands of songs or movies.
Data streaming is becoming more useful and necessary in today's world and is being applied in a broad range of industries, some of which have already been mentioned in examples such as the medical or transportation industry. Other examples of industries or markets where data streaming is applicable are:
Finance : data streaming allows firms to track changes in the stock market in real time, compute value-at-risk, and automatically rebalance portfolios based on stock price movements. [ 15 ]
Real estate : websites can track a subset of data from consumers' mobile devices and make real-time recommendations of properties to visit based on their geo-location ( Amazon ).
Gaming: an online gaming company can collect streaming data about player-game interactions and feed the data into its gaming platform ( Amazon ).
E-commerce/marketing: a company can stream clickstream records from its online properties, aggregate and enrich the data with demographic information about users, and optimize content placement on its site, delivering relevance and a better experience to customers ( Amazon ).
Besides these examples, there are probably many more applications for data streaming. However, data streaming has had the biggest implications for the audio, video and telecom industries because of the creation of streaming services. Streaming services have majorly influenced how people consume their media nowadays. [ 16 ] Since streaming services have had the most significant impact of any use of data streaming technology, they are the main focus of the remainder of this article.
The process of technological convergence , which appears because different industries increasingly rely on the same set of technological skills in their production processes, [ 17 ] leads to closer relations between markets that were previously not closely related. For example, social media platforms such as Facebook and Twitter are providing live-streaming services, which allows global news publishers to connect directly with the right audiences as well as a far wider range of audiences than they otherwise would have reached. [ 18 ] This has changed how and where news publishers interact with their audiences, and how they use social media services to deliver their service.
An industry that is impacted by data streaming is the video streaming industry. Consumers now demand that videos be available on immediate request, meaning that it is no longer only image resolution that acts as an important performance metric in the media industry, but also how quickly a video starts to play. [ 19 ]
The video industry underwent some of the same changes as the music industry. The video industry gained revenue by selling DVDs to customers and selling rights to cinemas and television channels. In 1997, the first online distributors appeared, but online distribution remained small over a decade later, mainly due to lower quality compared to hard-copy films. The third wave of streaming services, such as Netflix, iTunes, Hulu, Amazon and Blockbuster, changed the film market. [ 20 ] Netflix started in 1997, but only began to disrupt the market more than a decade later.
The digitization, digitalization and underlying technologies of streaming have created these streaming services, which essentially caused this disruption. With the rise of streaming firms in the film industry, the sales of physical DVDs vanished almost completely. An important difference between the music and film industries is that within the film industry, streaming services such as iTunes and Netflix are 'destroying' revenue (Sullivan, 2009). Because of this, fewer films are produced and consequently there are fewer jobs in this industry. By contrast, cinemas remain important in the film industry, but the share of movies and series that are streamed by customers is rising very fast. Streaming replaced the DVD, changed the performance metrics of the incumbents and can thus be seen as disruptive.
Another impacted industry is the music streaming industry. In 2017, streaming accounted for 43% of revenues in the music industry, and this was the third consecutive year of growth. [ 21 ] New music streaming services such as Spotify and Apple Music challenge the traditional label companies , which now risk being outcompeted by new business models. [ 22 ] Before the rapid adoption of streaming, in 2000 the music industry was experiencing what turned out to be a 15-year-long stagnation in revenue, which was due to the high CD prices needed to cover the costs of record labels. [ 23 ] In 2015, streaming technology overtook the market, allowing revenues to increase by saving on label costs, and giving artists a steadier income by making money on streams, rather than being reliant on a full album or CD to do well after being published. [ 24 ]
Furthermore, data streaming also has an impact on the game streaming industry. Game streaming has been driven by the considerable growth of cloud computing , which allows gamers to access a greater variety of games without having to own expensive hardware. [ 25 ] Cloud computing operates as an enabler to the development of game streaming, where hardware and content are accessed from the cloud, leading to greater flexibility in content distribution. [ 26 ] Game streaming enabled by cloud technology will drive changes in the gaming industry: because games run on the hardware configuration of machines in the cloud rather than on the player's own machine, development cost and time can be reduced, giving developers a greater ability to reach users around the world. [ 27 ] | https://en.wikipedia.org/wiki/Streaming_data |
The streaming vibration current ( SVI ) and the associated streaming vibration potential is an electric signal that arises when an acoustic wave propagates through a porous body in which the pores are filled with fluid.
Streaming vibration current was experimentally observed in 1948 by M. Williams. [ 1 ] A theoretical model was developed some 30 years later by Dukhin and coworkers. [ 2 ] This effect opens another possibility for characterizing the electric properties of the surfaces in porous bodies.
| https://en.wikipedia.org/wiki/Streaming_vibration_current |
In scientific visualization a streamlet is used to visualize flows. It is essentially a short streamline segment. Normally the length of a streamlet is proportional to the flow magnitude at its seed point. [ 1 ]
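A minimal sketch of how a streamlet might be traced (the vector field, step size and scaling factor are hypothetical): starting from a seed point, a short streamline segment is integrated through the flow field, with the total arc length chosen proportional to the flow magnitude at the seed.

```python
import math

def velocity(x, y):
    """Hypothetical 2D flow field (a simple rotation about the origin)."""
    return -y, x

def streamlet(seed, scale=0.5, step=0.01):
    """Integrate a short streamline segment whose length is proportional
    to the flow magnitude at the seed point."""
    x, y = seed
    vx, vy = velocity(x, y)
    length = scale * math.hypot(vx, vy)       # streamlet length ~ flow magnitude at seed
    points, travelled = [(x, y)], 0.0
    while travelled < length:
        vx, vy = velocity(x, y)
        speed = math.hypot(vx, vy) or 1e-12
        x += step * vx / speed                # unit-speed Euler step along the flow
        y += step * vy / speed
        points.append((x, y))
        travelled += step
    return points

print(len(streamlet((1.0, 0.0))))   # number of points in the traced segment
```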
| https://en.wikipedia.org/wiki/Streamlet_(scientific_visualization) |
Genomic streamlining is a theory in evolutionary biology and microbial ecology that suggests that there is a reproductive benefit to prokaryotes having a smaller genome size with less non-coding DNA and fewer non-essential genes. [ 1 ] [ 2 ] There is considerable variation in prokaryotic genome size, with the genome of the smallest free-living cell being roughly ten times smaller than that of the largest prokaryote. [ 3 ] Two of the free-living bacterial taxa with the smallest genomes are Prochlorococcus and Pelagibacter ubique , [ 4 ] [ 5 ] both highly abundant marine bacteria commonly found in oligotrophic regions. Similar reduced genomes have been found in uncultured marine bacteria, suggesting that genomic streamlining is a common feature of bacterioplankton . [ 6 ] This theory is typically used with reference to free-living organisms in oligotrophic environments. [ 1 ]
Genome streamlining theory states that certain prokaryotic genomes tend to be small in size in comparison to other prokaryotes, and all eukaryotes , due to selection against the retention of non-coding DNA . [ 2 ] [ 1 ] The known advantages of small genome size include faster genome replication for cell division, fewer nutrient requirements, and easier co-regulation of multiple related genes, because gene density typically increases with decreased genome size. [ 2 ] This means that an organism with a smaller genome is likely to be more successful, or have higher fitness , than one hindered by excessive amounts of unnecessary DNA, leading to selection for smaller genome sizes. [ 2 ]
Some mechanisms that are thought to underlie genome streamlining include deletion bias and purifying selection . [ 1 ] Deletion bias is the phenomenon in bacterial genomes where the rate of DNA loss is naturally higher than the rate of DNA acquisition. [ 2 ] [ 7 ] This is a passive process that simply results from the difference between these two rates. [ 7 ] Purifying selection is the process by which extraneous genes are selected against, making organisms lacking this genetic material more successful by effectively reducing their genome size. [ 2 ] [ 8 ] Genes and non-coding DNA segments that are less crucial for an organism's survival are more likely to be lost over time. [ 8 ]
This selective pressure is stronger in large marine prokaryotic populations , because intra-species competition favours fast, efficient and inexpensive replication . [ 2 ] This is because large population sizes increase competition among members of the same species, and thus increases selective pressure and causes the reduction in genome size to occur more readily among organisms of large population sizes, like bacteria. [ 2 ] This may explain why genome streamlining seems to be particularly prevalent in prokaryotic organisms, as they tend to have larger population sizes than eukaryotes. [ 9 ]
It has also been proposed that having a smaller genome can help minimize overall cell size, which increases a prokaryote's surface-area-to-volume ratio. [ 10 ] A higher surface-area-to-volume ratio allows for greater nutrient uptake relative to cell size, which allows such cells to outcompete larger organisms for nutrients. [ 11 ] [ 10 ] This phenomenon has been noted particularly in nutrient-depleted waters. [ 10 ]
Genomic analysis of streamlined organisms have shown that low GC content , low percentage of non-coding DNA, and a low fraction of genes encoding for cytoplasmic membrane proteins, periplasmic proteins , transcriptionally related proteins, and signal transduction pathways are all characteristic of free-living streamlined prokaryotic organisms. [ 6 ] [ 4 ] [ 12 ] Oftentimes, highly streamlined organisms are difficult to isolate by culturing in a laboratory (SAR11 being a central example). [ 6 ] [ 4 ]
Pelagibacter ubique are members of the SAR11 clade , a heterotrophic marine group that is found throughout the oceans and is rather common. [ 4 ] These microbes have the smallest genome and encode the smallest number of open reading frames of any known non-sessile microorganism. [ 4 ] Despite the genome's small size, P. ubique has complete biosynthetic pathways and all necessary enzymes for the synthesis of the 20 amino acids, and lacks only a few cofactors. The small genome size of this microorganism is achieved through the absence of "pseudogenes, introns, transposons, extrachromosomal elements, or inteins". The genome also contains fewer paralogs than other members of the same clade and the shortest intergenic spacers of any living cell. [ 4 ] In these organisms, unusual nutrient requirements arose from streamlining selection and gene loss as selection favoured more efficient resource utilization in nutrient-limited oceans. [ 13 ] These observations indicate that some microbes may be difficult to grow in a laboratory setting because of unusual nutrient requirements. [ 13 ]
Prochlorococcus is one of the dominant cyanobacteria and is a main participant in primary production in oligotrophic waters. [ 14 ] It is the smallest and most abundant photosynthetic organism recorded on Earth. [ 14 ] As cyanobacteria, they are remarkably well adapted to environments with very poor nutrient availability, since they obtain their energy from light. [ 15 ] The nitrogen assimilation pathway in this organism has been significantly modified to adapt to the nutritional limitations of its habitats. [ 15 ] These adaptations led to the removal of key enzymes from the genome, such as nitrate reductase , nitrite reductase , and often urease . [ 15 ] Unlike some cyanobacterial counterparts, Prochlorococcus is not able to fix atmospheric nitrogen (N 2 ). [ 16 ] The only nitrogen sources found to be used by these organisms are ammonia, which is incorporated into glutamate via the enzyme glutamine synthetase and requires less energy than nitrate use, and, in certain species, urea. [ 16 ] Moreover, the metabolic regulation systems of Prochlorococcus were found to be greatly simplified. [ 15 ]
Nitrogen-fixing marine cyanobacteria are known to support oxygen production in oceans by fixing inorganic nitrogen using the enzyme nitrogenase . [ 17 ] A special subset of these bacteria, UCYN-A , was found to lack the photosystem II complex usually used in photosynthesis, as well as a number of major metabolic pathways, yet it is still capable of using the electron transport chain to generate energy from light. [ 17 ] Furthermore, the anabolic enzymes needed to synthesize the amino acids valine, leucine and isoleucine are missing, as are some enzymes of phenylalanine, tyrosine and tryptophan biosynthesis.
This organism seems to be an obligate photoheterotroph that uses carbon substrates for energy production and some biosynthetic materials for biosynthesis. It was discovered that UCYN-A has a reduced genome of only 1.44 megabases that is smaller than, but similar in structure to, that of chloroplasts. [ 17 ] In comparison with related species such as Crocosphaera watsonii and Cyanothece sp., whose genomes range in length from 5.46 to 6.24 megabases, the UCYN-A genome is much smaller. The compacted genome is a single, circular chromosome with "1,214 identified protein-coding regions". [ 17 ] The genome of UCYN-A is also highly conserved (>97% nucleotide identity) across ocean waters, which is atypical of ocean microbes. The lack of genomic diversity in UCYN-A, the presence of nitrogenase and hydrogenase enzymes for the TCA cycle , the reduced genome size, and the coding efficiency of the DNA suggest that this microorganism may have a symbiotic lifestyle and live in close association with a host. However, the true lifestyle of this microbe remains unknown. [ 17 ]
Bacterial symbionts , commensals , parasites , and pathogens often have even smaller genomes and fewer genes than free-living organisms and non-pathogenic bacteria. [ 1 ] They reduce their "core" metabolic repertoire, making them more dependent on their host and environment. [ 1 ] Their genome reduction occurs by different evolutionary mechanisms than those of streamlined free-living organisms. [ 18 ] Pathogenic organisms are thought to undergo genome reduction due to genetic drift , rather than purifying selection . [ 18 ] [ 1 ] Genetic drift is driven by small effective population sizes within a microbial community, rather than by large, dominating populations. [ 1 ] In this case, DNA mutations happen by chance, and thus often lead to maladaptive genome degradation and lower overall fitness. [ 18 ] Rather than losing non-coding DNA regions or extraneous genes to increase fitness during replication, these organisms lose certain "core" metabolic genes whose functions can now be supplied by their host, symbiont, or environment. [ 18 ] Since their genome reduction is less dependent on fitness, pseudogenes are frequent in these organisms. [ 1 ] They also typically undergo low rates of horizontal gene transfer (HGT).
Viral genomes resemble prokaryotic genomes in that they have very few non-coding regions. [ 19 ] They are, however, significantly smaller than prokaryotic genomes. While viruses are obligate intracellular parasites , viral genomes are considered streamlined due to the strong purifying selection that occurs when the virus has successfully infected a host . [ 20 ] [ 21 ] During the initial phase of an infection , there is a large bottleneck for the virus population which allows for more genetic diversity, but due to the rapid replication of these viruses, the population size is restored quickly and the diversity within the population is reduced. [ 21 ]
RNA viruses in particular are known to have exceptionally small genomes. [ 22 ] This is at least in part due to the fact that they have overlapping genes . [ 22 ] By reducing their genome size, they increase their fitness due to faster replication. [ 22 ] The virus will then be able to increase population size more rapidly with faster replication rates.
Genomic streamlining has been used to explain certain eukaryotic genome sizes as well, particularly bird genomes. Larger genomes require a larger nucleus, which typically translates to a larger cell size. [ 23 ] For this reason, many bird genomes have also been under selective pressure to decrease in size. [ 23 ] [ 24 ] Flying with a larger mass due to larger cells is more energetically expensive than with a smaller mass. [ 24 ] | https://en.wikipedia.org/wiki/Streamlining_theory |
A symbiotic eukaryote that lives in the hindgut of termites , Streblomastix is a protist associated with a community of ectosymbiotic bacteria . [ 1 ] [ 2 ]
Streblomastix moves by beating its anterior flagella .
These protists measure around 100 micrometers in length. They completely lack mitochondria. [ 3 ] | https://en.wikipedia.org/wiki/Streblomastix |
The Strecker amino acid synthesis , also known simply as the Strecker synthesis, is a method for the synthesis of amino acids by the reaction of an aldehyde with cyanide in the presence of ammonia . The condensation reaction yields an α-aminonitrile, which is subsequently hydrolyzed to give the desired amino acid. [ 1 ] [ 2 ] The method is used for the commercial production of racemic methionine from methional . [ 3 ]
Primary and secondary amines also give N-substituted amino acids. Likewise, the usage of ketones , instead of aldehydes, gives α,α-disubstituted amino acids. [ 4 ]
In the first part of the reaction process, the carbonyl is converted to an iminium , to which a cyanide ion adds. First, the carbonyl oxygen of an aldehyde is protonated, followed by a nucleophilic attack of ammonia to the carbonyl carbon. After subsequent proton exchange, water is cleaved to form the iminium ion intermediate. A cyanide ion then attacks the iminium carbon yielding an aminonitrile.
In the second part of the reaction process, the nitrile is hydrolyzed . First, the nitrile nitrogen of the aminonitrile is protonated, and the nitrile carbon is attacked by a water molecule. A 1,2-diamino-diol is then formed after proton exchange and a nucleophilic attack of water on the former nitrile carbon. Ammonia is subsequently eliminated after the protonation of the amino group, and finally the deprotonation of a hydroxyl group produces an amino acid .
One example of the Strecker synthesis is a multikilogram scale synthesis of an L-valine derivative starting from Methyl isopropyl ketone : [ 5 ]
The initial reaction product of 3-methyl-2-butanone with sodium cyanide and ammonia is resolved by application of L-tartaric acid . In contrast, asymmetric Strecker reactions require no resolving agent. By replacing ammonia with (S)-alpha-phenylethylamine as a chiral auxiliary , the ultimate reaction product was chiral alanine . [ 6 ]
Catalytic asymmetric Strecker reaction can be effected using thiourea -derived catalysts . [ 7 ] In 2012, a BINOL -derived catalyst was employed to generate chiral cyanide anion (see figure). [ 8 ]
The German chemist Adolph Strecker discovered the series of chemical reactions that produce an amino acid from an aldehyde or ketone . [ 9 ] [ 10 ] Using ammonia or ammonium salts in this reaction gives unsubstituted amino acids. In the original Strecker reaction acetaldehyde , ammonia , and hydrogen cyanide combined to form after hydrolysis alanine . Using primary and secondary amines in place of ammonium was shown to yield N-substituted amino acids. [ 10 ]
The classical Strecker synthesis gives racemic mixtures of α-amino acids as products, but several alternative procedures using asymmetric auxiliaries [ 11 ] or asymmetric catalysts [ 12 ] [ 13 ] have been developed.
The asymmetric Strecker reaction was reported by Harada in 1963. [ 14 ] The first reported asymmetric synthesis via a chiral catalyst was published in 1996. [ 15 ] However, this was retracted in 2023. [ 16 ]
Several methods exist to synthesize amino acids aside from the Strecker synthesis. [ 17 ] [ 3 ]
The commercial production of amino acids, however, usually relies on mutant bacteria that overproduce individual amino acids using glucose as a carbon source. Otherwise amino acids are produced by enzymatic conversions of synthetic intermediates. 2-Aminothiazoline-4-carboxylic acid is an intermediate in one industrial synthesis of L- cysteine . Aspartic acid is produced by the addition of ammonia to fumarate using a lyase. [ 3 ] | https://en.wikipedia.org/wiki/Strecker_amino_acid_synthesis |
The Strecker degradation is a chemical reaction which converts an α- amino acid into an aldehyde containing the side chain , by way of an imine intermediate. It is named after Adolph Strecker , a German chemist.
The original observation by Strecker involved the use of alloxan as the oxidant in the first step, [ 1 ] followed by hydrolysis :
The reaction can take place using a variety of organic and inorganic reagents. [ 2 ] | https://en.wikipedia.org/wiki/Strecker_degradation |
The Streeter–Phelps equation is used in the study of water pollution as a water quality modelling tool. The model describes how dissolved oxygen (DO) decreases in a river or stream along a certain distance by degradation of biochemical oxygen demand (BOD). The equation was derived by H. W. Streeter, a sanitary engineer, and Earle B. Phelps , a consultant for the U.S. Public Health Service , in 1925, based on field data from the Ohio River . The equation is also known as the DO sag equation.
The Streeter–Phelps equation determines the relation between the dissolved oxygen concentration and the biological oxygen demand over time and is a solution to the linear first order differential equation [ 1 ] d D / d t = k 1 L − k 2 D {\displaystyle {\frac {dD}{dt}}=k_{1}L-k_{2}D} , where D {\displaystyle D} is the oxygen saturation deficit, L {\displaystyle L} is the remaining biochemical oxygen demand, k 1 {\displaystyle k_{1}} is the deoxygenation rate constant and k 2 {\displaystyle k_{2}} is the reaeration rate constant.
This differential equation states that the total change in oxygen deficit (D) is equal to the difference between the two rates of deoxygenation and reaeration at any time.
The Streeter–Phelps equation, assuming a plug-flow stream at steady state, is then D = k 1 L a k 2 − k 1 ( e − k 1 t − e − k 2 t ) + D a e − k 2 t {\displaystyle D={\frac {k_{1}L_{a}}{k_{2}-k_{1}}}\left(e^{-k_{1}t}-e^{-k_{2}t}\right)+D_{a}e^{-k_{2}t}}
where D a {\displaystyle D_{a}} is the initial oxygen deficit, L a {\displaystyle L_{a}} is the initial (ultimate) biochemical oxygen demand, and t {\displaystyle t} is the elapsed time.
k 1 {\displaystyle k_{1}} lies typically within the range 0.05-0.5 d − 1 {\displaystyle d^{-1}} and k 2 {\displaystyle k_{2}} lies typically within the range 0.4-1.5 d − 1 {\displaystyle d^{-1}} . [ 2 ] The Streeter–Phelps equation is also known as the DO sag equation. This is due to the shape of the graph of the DO over time.
On the DO sag curve a minimum concentration occurs at some point along the stream. If the Streeter–Phelps equation is differentiated with respect to time and set equal to zero, the time at which the minimum DO occurs is expressed by t c r i t = 1 k 2 − k 1 ln ⁡ [ k 2 k 1 ( 1 − D a ( k 2 − k 1 ) k 1 L a ) ] {\displaystyle t_{crit}={\frac {1}{k_{2}-k_{1}}}\ln \left[{\frac {k_{2}}{k_{1}}}\left(1-{\frac {D_{a}(k_{2}-k_{1})}{k_{1}L_{a}}}\right)\right]}
To find the value of the critical oxygen deficit, D c r i t {\displaystyle D_{crit}} , the Streeter–Phelps equation is combined with the equation above for the critical time, t c r i t {\displaystyle t_{crit}} , giving D c r i t = k 1 k 2 L a e − k 1 t c r i t {\displaystyle D_{crit}={\frac {k_{1}}{k_{2}}}L_{a}e^{-k_{1}t_{crit}}} . The minimum dissolved oxygen concentration is then D O c r i t = D O s a t − D c r i t {\displaystyle DO_{crit}=DO_{sat}-D_{crit}} .
Mathematically it is possible to get a negative value of D O c r i t {\displaystyle DO_{crit}} , even though it is not possible to have a negative amount of DO in reality. [ 3 ]
The distance traveled in a river from a given point source pollution or waste discharge downstream to the point of D O c r i t {\displaystyle DO_{crit}} (which is the minimum DO) is found by x c r i t = v t c r i t {\displaystyle x_{crit}=v\,t_{crit}}
where v {\displaystyle v} is the flow velocity of the stream. This formula is a good approximation as long as the flow can be regarded as a plug flow (turbulent).
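As a concrete illustration, the sketch below evaluates the classical solution and the critical-point expressions above for an assumed set of parameter values (the rates, initial BOD, initial deficit, saturation DO and velocity are illustrative, not data from the original Ohio River study).

```python
import numpy as np

# Illustrative parameter values (not from the original Ohio River study)
k1, k2 = 0.3, 0.8        # deoxygenation and reaeration rates [1/d]
La     = 20.0            # ultimate BOD just after discharge [mg/L]
Da     = 1.0             # initial oxygen deficit [mg/L]
DO_sat = 9.1             # saturation DO at the stream temperature [mg/L]
v      = 10.0            # mean stream velocity [km/d]

def deficit(t):
    """Classical Streeter-Phelps oxygen deficit D(t) [mg/L]."""
    return (k1 * La / (k2 - k1)) * (np.exp(-k1 * t) - np.exp(-k2 * t)) \
           + Da * np.exp(-k2 * t)

# Critical (maximum) deficit from the closed-form expressions
t_crit = (1.0 / (k2 - k1)) * np.log((k2 / k1) * (1 - Da * (k2 - k1) / (k1 * La)))
D_crit = (k1 / k2) * La * np.exp(-k1 * t_crit)
x_crit = v * t_crit

print(f"t_crit = {t_crit:.2f} d, D_crit = {D_crit:.2f} mg/L, "
      f"DO_min = {DO_sat - D_crit:.2f} mg/L, x_crit = {x_crit:.1f} km")
```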
Several estimations of the reaeration rate exist, which generally follow the equation k 2 = K v a H b {\displaystyle k_{2}=K{\frac {v^{a}}{H^{b}}}} where v {\displaystyle v} is the mean flow velocity, H {\displaystyle H} is the mean water depth and K {\displaystyle K} , a {\displaystyle a} and b {\displaystyle b} are empirical constants.
The constants depend on the system to which the equation is applied, i.e. the flow velocity and the size of the stream or river. Different values are available in the literature.
The software " International Hydrological Programme " applies the following equation derived on the basis of values used in published literature [ 4 ]
where
Both the deoxygenation rate, k 1 {\displaystyle k_{1}} , and the reaeration rate, k 2 {\displaystyle k_{2}} , can be temperature corrected, following the general formula [ 2 ] k T = k 20 θ T − 20 {\displaystyle k_{T}=k_{20}\,\theta ^{T-20}}
where k T {\displaystyle k_{T}} is the rate constant at the water temperature T (in °C), k 20 {\displaystyle k_{20}} is the rate constant at 20 °C, and θ {\displaystyle \theta } is a dimensionless temperature coefficient.
Normally θ has the value 1.048 for k 1 {\displaystyle k_{1}} and 1.024 for k 2 {\displaystyle k_{2}} .
An increasing temperature has the most impact on the deoxygenation rate, and results in an increased critical deficit ( D c r i t {\displaystyle D_{crit}} ), and x c r i t {\displaystyle x_{crit}} decreases. Furthermore, a decreased D O s a t {\displaystyle DO_{sat}} concentration occurs with increasing temperature, which leads to a decrease in the DO concentration. [ 2 ]
When two streams or rivers merge, or water is discharged to a stream, it is possible to determine the BOD and DO after mixing assuming steady state conditions and instantaneous mixing. The two streams are considered as dilutions of each other, thus the initial BOD and DO after mixing will be [ 4 ] L 0 = Q r L r + Q w L w Q r + Q w {\displaystyle L_{0}={\frac {Q_{r}L_{r}+Q_{w}L_{w}}{Q_{r}+Q_{w}}}}
and D O 0 = Q r D O r + Q w D O w Q r + Q w {\displaystyle DO_{0}={\frac {Q_{r}DO_{r}+Q_{w}DO_{w}}{Q_{r}+Q_{w}}}}
where Q r {\displaystyle Q_{r}} and Q w {\displaystyle Q_{w}} are the flow rates of the river and the discharged water (or of the two merging streams), L r {\displaystyle L_{r}} and L w {\displaystyle L_{w}} are their BOD concentrations, and D O r {\displaystyle DO_{r}} and D O w {\displaystyle DO_{w}} are their DO concentrations.
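A short illustration of the flow-weighted mixing calculation, with assumed river and discharge values:

```python
def mix(Q_r, C_r, Q_w, C_w):
    """Flow-weighted concentration after instantaneous, complete mixing."""
    return (Q_r * C_r + Q_w * C_w) / (Q_r + Q_w)

# Illustrative values: river 5 m3/s at 2 mg/L BOD and 8.5 mg/L DO,
# waste discharge 0.5 m3/s at 200 mg/L BOD and 1 mg/L DO.
L0  = mix(5.0, 2.0, 0.5, 200.0)   # BOD after mixing [mg/L]
DO0 = mix(5.0, 8.5, 0.5, 1.0)     # DO after mixing [mg/L]
print(L0, DO0)                    # 20.0 and about 7.8 mg/L
```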
Nowadays it is possible to solve the classical Streeter–Phelps equation numerically by use of computers. The differential equations are solved by integration.
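As a sketch of such a numerical solution, the coupled BOD/deficit system can be integrated directly, here with SciPy and the same illustrative parameter values used above:

```python
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 0.3, 0.8          # rates [1/d] (illustrative values)
La, Da = 20.0, 1.0         # initial BOD and deficit [mg/L]

def rhs(t, y):
    """Coupled BOD/deficit system: dL/dt = -k1*L, dD/dt = k1*L - k2*D."""
    L, D = y
    return [-k1 * L, k1 * L - k2 * D]

t_eval = np.linspace(0.0, 10.0, 101)
sol = solve_ivp(rhs, (0.0, 10.0), [La, Da], t_eval=t_eval, rtol=1e-8)

L, D = sol.y
print(f"maximum deficit {D.max():.2f} mg/L after {t_eval[D.argmax()]:.1f} d")
```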
In 1925, a study on the phenomena of oxidation and reaeration in the Ohio River in the US was published by the sanitary engineer, Harold Warner Streeter and the consultant, Earle Bernard Phelps (1876–1953). The study was based on data obtained from May 1914 to April 1915 by the United States Public Health Service under supervision of Surg. W.H. Frost. [ 1 ]
More complex versions of the Streeter–Phelps model were introduced during the 1960s, where computers made it possible to include further contributions to the oxygen development in streams. At the head of this development were O'Connor (1960) and Thomann (1963). [ 5 ] O'Connor added the contributions from photosynthesis, respiration and sediment oxygen demand (SOD). [ 6 ] Thomann expanded the Streeter–Phelps model to allow for multi segment systems. [ 7 ]
The simple Streeter–Phelps model is based on the assumptions that a single BOD input is distributed evenly over the cross section of a stream or river and that it moves as plug flow with no mixing in the river. [ 8 ] Furthermore, only one DO sink (carbonaceous BOD) and one DO source (reaeration) are considered in the classical Streeter–Phelps model. [ 9 ] These simplifications give rise to errors in the model. For example, the model does not account for BOD removal by sedimentation, for suspended BOD converting to a dissolved state, for the oxygen demand of the sediment, or for the impact of photosynthesis and respiration on the oxygen balance. [ 8 ]
In addition to the oxidation of organic matter and the reaeration process, there are many other processes in a stream which affect the DO. [ 8 ] In order to make a more accurate model it is possible to include these factors using an expanded model.
The expanded model is a modification of the traditional model and includes internal sources (reaeration and photosynthesis) and sinks (BOD, background BOD, SOD and respiration) of DO.
It is not always necessary to include all of these parameters. Instead relevant sources and sinks can be summed to yield the overall solution for the particular model. [ 2 ] Parameters in the expanded model can be either measured in the field or estimated theoretically.
Background BOD or benthic oxygen demand is the diffuse source of BOD represented by the decay of organic matter that has already settled on the bottom. This will give rise to a constant diffuse input thus the change in BOD over time will be
where
Sedimented BOD does not directly consume oxygen and this should therefore be taken into account. This is done by introducing a rate of BOD removal combined with a rate of oxygen consumption by BOD. Giving a total rate for oxygen removal by BOD [ 2 ]
where
The change in BOD over time is then described as d L d t = − k r L {\displaystyle {\frac {dL}{dt}}=-k_{r}L}
where L {\displaystyle L} is the BOD from organic matter in the water [ g m 3 ] {\displaystyle [{\tfrac {\mathrm {g} }{\mathrm {m} ^{3}}}]} .
k r {\displaystyle k_{r}} is typically in the range of 0.5-5 d − 1 {\displaystyle d^{-1}} . [ 2 ]
Oxygen can be consumed by organisms in the sediment. This process is referred to as sediment oxygen demand (SOD). Measurement of SOD can be undertaken by measuring the change of oxygen in a box on the sediment (benthic respirometer).
The change in oxygen deficit due to consumption by sediment is described as
where
The range of the SOD is typically in the range of 0.1 – 1 g m 2 d {\displaystyle {\tfrac {g}{m^{2}d}}} for a natural river with low pollution and 5 – 10 g m 2 d {\displaystyle {\tfrac {g}{m^{2}d}}} for a river with moderate to heavy pollution. [ 2 ]
Ammonium is oxidized to nitrate under aerobic conditions : N H 4 + + 2 O 2 → N O 3 − + H 2 O + 2 H + {\displaystyle \mathrm {NH_{4}^{+}} +2\,\mathrm {O_{2}} \rightarrow \mathrm {NO_{3}^{-}} +\mathrm {H_{2}O} +2\,\mathrm {H^{+}} }
Ammonium oxidation can be treated as part of BOD, so that BOD = CBOD + NBOD, where CBOD is the carbonaceous biochemical oxygen demand and NBOD is nitrogenous BOD. Usually CBOD is much higher than the ammonium concentration and thus NBOD often does not need to be considered. The change in oxygen deficit due to oxidation of ammonium is described as
where
The range of k N {\displaystyle k_{N}} is typically 0.05-0.5 d − 1 {\displaystyle d^{-1}} . [ 2 ]
Photosynthesis and respiration are performed by algae and by macrophytes. Respiration is also performed by bacteria and animals. Assuming steady state (net daily average) the change in deficit will be
where
Note that BOD only includes respiration of microorganisms e.g. algae and bacteria and not by macrophytes and animals.
Due to the variation of light over time, the variation of the photosynthetic oxygen can be described by a periodical function over time, where time is after sunrise and before sunset [ 2 ]
where
The range of the daily average value of primary production ( P − R ) {\displaystyle (P-R)} is typically 0.5-10 m g L d {\displaystyle {\tfrac {mg}{Ld}}} . [ 2 ] | https://en.wikipedia.org/wiki/Streeter–Phelps_equation |
The Strehl ratio is a measure of the quality of optical image formation , originally proposed by Karl Strehl , after whom the term is named. [ 1 ] [ 2 ] Used variously in situations where optical resolution is compromised due to lens aberrations or due to imaging through the turbulent atmosphere , the Strehl ratio has a value between 0 and 1, with a hypothetical, perfectly unaberrated optical system having a Strehl ratio of 1.
The Strehl ratio S {\displaystyle S} is frequently defined [ 3 ] as the ratio of the peak aberrated image intensity from a point source compared to the maximum attainable intensity using an ideal optical system limited only by diffraction over the system's aperture . It is also often expressed in terms not of the peak intensity but the intensity at the image center (intersection of the optical axis with the focal plane) due to an on-axis source; in most important cases these definitions result in a very similar figure (or identical figure, when the point of peak intensity must be exactly at the center due to symmetry). Using the latter definition, the Strehl ratio S {\displaystyle S} can be computed in terms of the wavefront-error δ ( x , y ) {\displaystyle \delta (x,y)} : the offset of the wavefront due to an on-axis point source, compared to that produced by an ideal focusing system over the aperture A(x,y). Using Fraunhofer diffraction theory, one computes the wave amplitude using the Fourier transform of the aberrated pupil function evaluated at 0,0 (center of the image plane) where the phase factors of the Fourier transform formula are reduced to unity. Since the Strehl ratio refers to intensity, it is found from the squared magnitude of that amplitude: S = | ⟨ e i ϕ ⟩ | 2 {\displaystyle S=\left|\left\langle e^{i\phi }\right\rangle \right|^{2}}
where i is the imaginary unit , ϕ = 2 π δ / λ {\displaystyle \phi =2\pi \delta /\lambda } is the phase error over the aperture at wavelength λ, and the average of the complex quantity inside the brackets is taken over the aperture A(x,y).
The Strehl ratio can be estimated using only the statistics of the phase deviation ϕ {\displaystyle \phi } , according to a formula rediscovered by Mahajan [ 4 ] [ 5 ] but known long before in antenna theory as the Ruze formula [ 6 ] S ≈ e − σ 2 {\displaystyle S\approx e^{-\sigma ^{2}}}
where sigma (σ) is the root mean square deviation over the aperture of the wavefront phase: σ 2 = ⟨ ( ϕ − ϕ ¯ ) 2 ⟩ {\displaystyle \sigma ^{2}=\langle (\phi -{\bar {\phi }})^{2}\rangle } .
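As an illustration, the sketch below evaluates both the exact pupil-average expression and the exp(−σ²) approximation for a small, smooth wavefront error over a circular aperture; the grid resolution and the aberration itself are arbitrary choices made for the example.

```python
import numpy as np

# Sample a circular pupil on a grid (illustrative resolution and aberration)
n = 256
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
pupil = (x**2 + y**2) <= 1.0

rng = np.random.default_rng(0)
# Small, smooth wavefront error delta(x, y) in units of the wavelength
delta = 0.02 * (rng.standard_normal() * x**2 + rng.standard_normal() * x * y
                + rng.standard_normal() * y**2)
phi = 2 * np.pi * delta                 # phase error over the aperture [rad]

# Exact (Fraunhofer) expression: squared magnitude of the pupil average of e^{i*phi}
S_exact = np.abs(np.mean(np.exp(1j * phi[pupil])))**2

# Mahajan / Ruze approximation from the phase variance
sigma2 = np.var(phi[pupil])
S_approx = np.exp(-sigma2)

print(f"S (exact) = {S_exact:.4f},  S ~ exp(-sigma^2) = {S_approx:.4f}")
```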
Due to diffraction , even a focusing system which is perfect according to geometrical optics will have a limited spatial resolution . In the usual case of a uniform circular aperture, the point spread function (PSF), which describes the image formed from an object with no spatial extent (a "point source"), is given by the Airy disk . For a circular aperture, the peak intensity found at the center of the Airy disk defines the point source image intensity required for a Strehl ratio of unity. An imperfect optical system using the same physical aperture will generally produce a broader PSF in which the peak intensity is reduced according to the factor given by the Strehl ratio. An optical system with only minor imperfections in this sense may be referred to as "diffraction limited" as its PSF closely resembles the Airy disk; a Strehl ratio of greater than 0.8 is frequently cited as a criterion for the use of that designation.
Note that for a given aperture the size of the Airy disk grows linearly with the wavelength λ {\displaystyle \lambda } , and consequently the peak intensity falls according to λ − 2 {\displaystyle \lambda ^{-2}} so that the reference point for unity Strehl ratio is changed. Typically, as wavelength is increased, an imperfect optical system will have a broader PSF with a decreased peak intensity. However the peak intensity of the reference Airy disk would have decreased even more at that longer wavelength, resulting in a better Strehl ratio at longer wavelengths (typically) even though the actual image resolution is poorer.
The ratio is commonly used to assess the quality of astronomical seeing in the presence of atmospheric turbulence and assess the performance of any adaptive optical correction system. It is also used for the selection of short exposure images in the lucky imaging method.
In industry, the Strehl ratio has become a popular way to summarize the performance of an optical design because it gives the performance of a real system, of finite cost and complexity, relative to a theoretically perfect system, which would be infinitely expensive and complex to build and would still have a finite point spread function. It provides a simple method to decide whether a system with a Strehl ratio of, for example, 0.95 is good enough, or whether twice as much should be spent to try to get a Strehl ratio of perhaps 0.97 or 0.98.
Characterizing the form of the point-spread function by a single number, as the Strehl Ratio does, will be meaningful and sensible only if the point-spread function is little distorted from its ideal (aberration-free) form, which will be true for a well-corrected system that operates close to the diffraction limit. That includes most telescopes and microscopes , but excludes most photographic systems, for example. The Strehl ratio has been linked via the work of André Maréchal [ 7 ] to an aberration tolerancing theory which is very useful to designers of well-corrected optical systems, allowing a meaningful link between the aberrations of geometrical optics and the diffraction theory of physical optics. A significant shortcoming of the Strehl ratio as a method of image assessment is that, although it is relatively easy to calculate for an optical design prescription on paper, it is normally difficult to measure for a real optical system, not least because the theoretical maximum peak intensity is not readily available. | https://en.wikipedia.org/wiki/Strehl_ratio |
The Strejc system identification method allows the transfer function of a non-periodic, black box -type system to be estimated from its step response and is widely used in industrial and mechanical engineering.
Specifically, it allows one to estimate the order n of the studied system, its time constant and its delay.
To use the Strejc method, a step signal is applied to the system and the parameters t u and t g are read off by drawing the tangent at the inflection point of the response curve. These parameters are then compared with the values in the Strejc numeric table to estimate which order best approximates the system's behaviour; the time constant is then found from the second column of the table for the chosen order.
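A minimal sketch of the measurement step, assuming the step response has already been recorded (here it is simulated from an assumed third-order lag). The inflection-point tangent yields t u and t g ; the subsequent lookup in the Strejc table is not reproduced here.

```python
import numpy as np

# Simulated step response of an assumed third-order lag, G(s) = 1/(s+1)^3,
# standing in for recorded plant data.
t = np.linspace(0.0, 15.0, 3001)
y = 1.0 - np.exp(-t) * (1.0 + t + t**2 / 2.0)

# Locate the inflection point (maximum slope of the response)
slope = np.gradient(y, t)
i = slope.argmax()
t_i, y_i, s_i = t[i], y[i], slope[i]

# Tangent at the inflection point: y_tan(t) = y_i + s_i * (t - t_i)
y_final = y[-1]
t_u = t_i - y_i / s_i        # tangent crosses the initial value (measured from the step)
t_g = y_final / s_i          # time for the tangent to span from 0 to the final value

print(f"t_u = {t_u:.3f}, t_g = {t_g:.3f}, t_u/t_g = {t_u / t_g:.3f}")
# The ratio t_u/t_g is then looked up in the Strejc table to pick the model
# order n, and the table's second column gives the time constant from t_g.
```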
| https://en.wikipedia.org/wiki/Strejc_method |
The relative strength of two systems of formal logic can be defined via model theory . Specifically, a logic α {\displaystyle \alpha } is said to be as strong as a logic β {\displaystyle \beta } if every elementary class in β {\displaystyle \beta } is an elementary class in α {\displaystyle \alpha } . [ 1 ]
| https://en.wikipedia.org/wiki/Strength_(mathematical_logic) |
Strength of Materials ( Russian : Проблемы прочности ) is a bimonthly peer-reviewed scientific journal covering the strength of materials and structural elements and the mechanics of deformable solid bodies. It was established in 1969 and is published by Springer Science+Business Media on behalf of the Pisarenko Institute of Problems of Strength of the National Academy of Sciences of Ukraine . The editor-in-chief is V.V. Kharchenko. According to the Journal Citation Reports , the journal has a 2020 impact factor of 0.620. [ 1 ]
| https://en.wikipedia.org/wiki/Strength_of_Materials_(journal) |
The strength of materials is determined using various methods of calculating the stresses and strains in structural members, such as beams, columns, and shafts. The methods employed to predict the response of a structure under loading and its susceptibility to various failure modes takes into account the properties of the materials such as its yield strength , ultimate strength , Young's modulus , and Poisson's ratio . In addition, the mechanical element's macroscopic properties (geometric properties) such as its length, width, thickness, boundary constraints and abrupt changes in geometry such as holes are considered.
The theory began with the consideration of the behavior of one and two dimensional members of structures, whose states of stress can be approximated as two dimensional, and was then generalized to three dimensions to develop a more complete theory of the elastic and plastic behavior of materials. An important founding pioneer in mechanics of materials was Stephen Timoshenko .
In the mechanics of materials, the strength of a material is its ability to withstand an applied load without failure or plastic deformation . The field of strength of materials deals with forces and deformations that result from their acting on a material. A load applied to a mechanical member will induce internal forces within the member called stresses when those forces are expressed on a unit basis. The stresses acting on the material cause deformation of the material in various manners including breaking them completely. Deformation of the material is called strain when those deformations too are placed on a unit basis.
The stresses and strains that develop within a mechanical member must be calculated in order to assess the load capacity of that member. This requires a complete description of the geometry of the member, its constraints, the loads applied to the member and the properties of the material of which the member is composed. The applied loads may be axial (tensile or compressive) or rotational (shear). With a complete description of the loading and the geometry of the member, the state of stress and state of strain at any point within the member can be calculated. Once the state of stress and strain within the member is known, the strength (load carrying capacity) of that member, its deformations (stiffness qualities), and its stability (ability to maintain its original configuration) can be calculated.
The calculated stresses may then be compared to some measure of the strength of the member such as its material yield or ultimate strength. The calculated deflection of the member may be compared to deflection criteria that are based on the member's use. The calculated buckling load of the member may be compared to the applied load. The calculated stiffness and mass distribution of the member may be used to calculate the member's dynamic response and then compared to the acoustic environment in which it will be used.
Material strength refers to the point on the engineering stress–strain curve (yield stress) beyond which the material experiences deformations that will not be completely reversed upon removal of the loading and as a result, the member will have a permanent deflection. The ultimate strength of the material refers to the maximum value of stress reached. The fracture strength is the stress value at fracture (the last stress value recorded).
Uniaxial stress is expressed by σ = F A {\displaystyle \sigma ={\frac {F}{A}}}
where F is the force acting on an area A . [ 3 ] The area can be the undeformed area or the deformed area, depending on whether engineering stress or true stress is of interest.
Material resistance can be expressed by several mechanical stress parameters, and the term material strength is used when referring to these parameters. They are physical quantities with dimensions homogeneous to pressure and force per unit surface . The traditional units of strength are therefore the MPa in the International System of Units and the psi in the United States customary units . Strength parameters include yield strength, tensile strength, fatigue strength, crack resistance, and others. [ citation needed ]
The slope of the initial, linear portion of the stress–strain curve is known as Young's modulus , or the "modulus of elasticity". The modulus of elasticity can be used to determine the stress–strain relationship in the linear-elastic portion of the stress–strain curve. The linear-elastic region is either below the yield point, or, if a yield point is not easily identified on the stress–strain plot, it is defined to be between 0 and 0.2% strain, and is defined as the region of strain in which no yielding (permanent deformation) occurs. [ 11 ]
Consider the difference between a carrot and chewed bubble gum. The carrot will stretch very little before breaking. The chewed bubble gum, on the other hand, will plastically deform enormously before finally breaking.
Ultimate strength is an attribute related to a material, rather than just a specific specimen made of the material, and as such it is quoted as the force per unit of cross section area (N/m 2 ). The ultimate strength is the maximum stress that a material can withstand before it breaks or weakens. [ 12 ] For example, the ultimate tensile strength (UTS) of AISI 1018 Steel is 440 MPa . In Imperial units, the unit of stress is given as lbf/in 2 or pounds-force per square inch . This unit is often abbreviated as psi . One thousand psi is abbreviated ksi .
A factor of safety is a design criterion that an engineered component or structure must achieve. F S = F / f {\displaystyle FS=F/f} , where FS is the factor of safety, f is the applied (working) stress, and F is the ultimate (allowable) stress (psi or MPa). [ 13 ]
Margin of safety (MS) is another common design criterion. It is defined as MS = P u /P − 1, where P u is the ultimate load and P the applied load.
For example, to achieve a factor of safety of 4, the allowable stress in an AISI 1018 steel component can be calculated to be F = U T S / F S {\displaystyle F=UTS/FS} = 440/4 = 110 MPa, or F {\displaystyle F} = 110×10 6 N/m 2 . Such allowable stresses are also known as "design stresses" or "working stresses".
Design stresses that have been determined from the ultimate or yield point values of the materials give safe and reliable results only for the case of static loading. Many machine parts fail when subjected to non-steady and continuously varying loads even though the developed stresses are below the yield point. Such failures are called fatigue failures. The failure is by a fracture that appears to be brittle with little or no visible evidence of yielding. However, when the stress is kept below the "fatigue stress" or "endurance limit stress", the part will endure indefinitely. A purely reversing or cyclic stress is one that alternates between equal positive and negative peak stresses during each cycle of operation. In a purely cyclic stress, the average stress is zero. When a part is subjected to a cyclic stress, also known as stress range (Sr), it has been observed that the failure of the part occurs after a number of stress reversals (N) even if the magnitude of the stress range is below the material's yield strength. Generally, the higher the stress range, the fewer reversals are needed for failure.
There are four failure theories: maximum shear stress theory, maximum normal stress theory, maximum strain energy theory, and maximum distortion energy theory (von Mises criterion of failure). Out of these four theories of failure, the maximum normal stress theory is only applicable for brittle materials, and the remaining three theories are applicable for ductile materials.
Of the latter three, the distortion energy theory provides the most accurate results in a majority of the stress conditions. The strain energy theory needs the value of Poisson's ratio of the part material, which is often not readily available. The maximum shear stress theory is conservative. For simple unidirectional normal stresses all theories are equivalent, which means all theories will give the same result.
A material's strength depends on its microstructure . The engineering processes to which a material is subjected can alter its microstructure. Strengthening mechanisms that alter the strength of a material include work hardening , solid solution strengthening , precipitation hardening , and grain boundary strengthening .
Strengthening mechanisms are accompanied by the caveat that some other mechanical properties of the material may degenerate in an attempt to make a material stronger. For example, in grain boundary strengthening, although yield strength is maximized with decreasing grain size, ultimately, very small grain sizes make the material brittle. In general, the yield strength of a material is an adequate indicator of the material's mechanical strength. Considered in tandem with the fact that the yield strength is the parameter that predicts plastic deformation in the material, one can make informed decisions on how to increase the strength of a material depending on its microstructural properties and the desired end effect. Strength is expressed in terms of the limiting values of the compressive stress , tensile stress , and shear stresses that would cause failure. The effects of dynamic loading are probably the most important practical consideration of the theory of elasticity, especially the problem of fatigue . Repeated loading often initiates cracks, which grow until failure occurs at the corresponding residual strength of the structure. Cracks always start at a stress concentrations especially changes in cross-section of the product or defects in manufacturing, near holes and corners at nominal stress levels far lower than those quoted for the strength of the material. | https://en.wikipedia.org/wiki/Strength_of_materials |
Methods have been devised to modify the yield strength , ductility , and toughness of both crystalline and amorphous materials. These strengthening mechanisms give engineers the ability to tailor the mechanical properties of materials to suit a variety of different applications. For example, the favorable properties of steel result from interstitial incorporation of carbon into the iron lattice. Brass , a binary alloy of copper and zinc , has superior mechanical properties compared to its constituent metals due to solution strengthening. Work hardening (such as beating a red-hot piece of metal on anvil) has also been used for centuries by blacksmiths to introduce dislocations into materials, increasing their yield strengths .
Plastic deformation occurs when large numbers of dislocations move and multiply so as to result in macroscopic deformation. In other words, it is the movement of dislocations in the material which allows for deformation. If we want to enhance a material's mechanical properties (i.e. increase the yield and tensile strength ), we simply need to introduce a mechanism which prohibits the mobility of these dislocations. Whatever the mechanism may be, (work hardening, grain size reduction, etc.) they all hinder dislocation motion and render the material stronger than previously. [ 1 ] [ 2 ] [ 3 ] [ 4 ]
The stress required to cause dislocation motion is orders of magnitude lower than the theoretical stress required to shift an entire plane of atoms, so this mode of stress relief is energetically favorable. Hence, the hardness and strength (both yield and tensile) critically depend on the ease with which dislocations move. Pinning points , or locations in the crystal that oppose the motion of dislocations, [ 5 ] can be introduced into the lattice to reduce dislocation mobility, thereby increasing mechanical strength. Dislocations may be pinned due to stress field interactions with other dislocations and solute particles, creating physical barriers from second phase precipitates forming along grain boundaries. There are five main strengthening mechanisms for metals, each is a method to prevent dislocation motion and propagation, or make it energetically unfavorable for the dislocation to move. For a material that has been strengthened, by some processing method, the amount of force required to start irreversible (plastic) deformation is greater than it was for the original material.
In amorphous materials such as polymers, amorphous ceramics (glass), and amorphous metals, the lack of long range order leads to yielding via mechanisms such as brittle fracture, crazing , and shear band formation. In these systems, strengthening mechanisms do not involve dislocations, but rather consist of modifications to the chemical structure and processing of the constituent material.
The strength of materials cannot infinitely increase. Each of the mechanisms explained below involves some trade-off by which other material properties are compromised in the process of strengthening.
The primary species responsible for work hardening are dislocations. Dislocations interact with each other by generating stress fields in the material. The interaction between the stress fields of dislocations can impede dislocation motion by repulsive or attractive interactions. Additionally, if two dislocations cross, dislocation line entanglement occurs, causing the formation of a jog which opposes dislocation motion. These entanglements and jogs act as pinning points, which oppose dislocation motion. As both of these processes are more likely to occur when more dislocations are present, there is a correlation between dislocation density and shear strength.
The shear strengthening provided by dislocation interactions can be described by: [ 6 ]
Δ τ d = α G b ρ ⊥ {\displaystyle \Delta \tau _{d}=\alpha Gb{\sqrt {\rho _{\perp }}}}
where α {\displaystyle \alpha } is a proportionality constant, G {\displaystyle G} is the shear modulus , b {\displaystyle b} is the Burgers vector , and ρ ⊥ {\displaystyle \rho _{\perp }} is the dislocation density.
Dislocation density is defined as the dislocation line length per unit volume:
ρ ⊥ = ℓ ℓ 3 {\displaystyle \rho _{\perp }={\frac {\ell }{\ell ^{3}}}}
Similarly, the axial strengthening will be proportional to the dislocation density.
Δ σ y ∝ G b ρ ⊥ {\displaystyle \Delta \sigma _{y}\propto {Gb{\sqrt {\rho _{\perp }}}}}
This relationship does not apply when dislocations form cell structures. When cell structures are formed, the average cell size controls the strengthening effect. [ 6 ]
Increasing the dislocation density increases the yield strength which results in a higher shear stress required to move the dislocations. This process is easily observed while working a material (by a process of cold working in metals). Theoretically, the strength of a material with no dislocations will be extremely high ( σ ≈ G 10 {\displaystyle \sigma \approx {\frac {G}{10}}} ) because plastic deformation would require the breaking of many bonds simultaneously. However, at moderate dislocation density values of around 10 7 -10 9 dislocations/m 2 , the material will exhibit a significantly lower mechanical strength. Analogously, it is easier to move a rubber rug across a surface by propagating a small ripple through it than by dragging the whole rug. At dislocation densities of 10 14 dislocations/m 2 or higher, the strength of the material becomes high once again. Also, the dislocation density cannot be infinitely high, because then the material would lose its crystalline structure. [ citation needed ]
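As a rough numerical illustration of the relation Δτ = αGb√ρ, the values below are typical order-of-magnitude figures for a metal such as copper (assumed for the example, not taken from the text):

```python
import numpy as np

# Illustrative values, roughly appropriate for copper (assumed, not from the text)
alpha = 0.3            # proportionality constant (dimensionless)
G     = 45e9           # shear modulus [Pa]
b     = 0.256e-9       # Burgers vector [m]

for rho in (1e9, 1e12, 1e14, 1e15):          # dislocation density [1/m^2]
    dtau = alpha * G * b * np.sqrt(rho)       # forest-hardening contribution [Pa]
    print(f"rho = {rho:.0e} 1/m^2  ->  delta_tau ~ {dtau/1e6:.1f} MPa")
```

The numbers reproduce the trend described above: the strengthening contribution is negligible at low dislocation densities and grows with the square root of the density.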
For this strengthening mechanism, solute atoms of one element are added to another, resulting in either substitutional or interstitial point defects in the crystal (see Figure on the right). The solute atoms cause lattice distortions that impede dislocation motion, increasing the yield stress of the material. Solute atoms have stress fields around them which can interact with those of dislocations. The presence of solute atoms impart compressive or tensile stresses to the lattice, depending on solute size , which interfere with nearby dislocations, causing the solute atoms to act as potential barriers.
The shear stress required to move dislocations in a material is:
Δ τ = G b c ϵ 3 / 2 {\displaystyle \Delta \tau =Gb{\sqrt {c}}\epsilon ^{3/2}}
where c {\displaystyle c} is the solute concentration and ϵ {\displaystyle \epsilon } is the strain on the material caused by the solute.
Increasing the concentration of the solute atoms will increase the yield strength of a material, but there is a limit to the amount of solute that can be added, and one should look at the phase diagram for the material and the alloy to make sure that a second phase is not created.
In general, the solid solution strengthening depends on the concentration of the solute atoms, shear modulus of the solute atoms, size of solute atoms, valency of solute atoms (for ionic materials), and the symmetry of the solute stress field. The magnitude of strengthening is higher for non-symmetric stress fields because these solutes can interact with both edge and screw dislocations, whereas symmetric stress fields, which cause only volume change and not shape change, can only interact with edge dislocations.
In most binary systems, alloying above a concentration given by the phase diagram will cause the formation of a second phase. A second phase can also be created by mechanical or thermal treatments. The particles that compose the second phase precipitates act as pinning points in a similar manner to solutes, though the particles are not necessarily single atoms.
The dislocations in a material can interact with the precipitate atoms in one of two ways (see Figure 2). If the precipitate atoms are small, the dislocations would cut through them. As a result, new surfaces (b in Figure 2) of the particle would get exposed to the matrix and the particle-matrix interfacial energy would increase. For larger precipitate particles, looping or bowing of the dislocations would occur and result in dislocations getting longer. Hence, at a critical radius of about 5 nm, dislocations will preferably cut across the obstacle, while for a radius of 30 nm, the dislocations will readily bow or loop to overcome the obstacle.
The mathematical descriptions are as follows:
For particle bowing- Δ τ = G b L − 2 r {\displaystyle \Delta \tau ={Gb \over L-2r}}
For particle cutting- Δ τ = γ π r b L {\displaystyle \Delta \tau ={\gamma \pi r \over bL}}
Dispersion strengthening is a type of particulate strengthening in which incoherent precipitates attract and pin dislocations. These particles are typically larger than those involved in the Orowan precipitation hardening discussed above. Dispersion strengthening is effective at high temperatures, whereas precipitation strengthening from heat treatments is typically limited to temperatures much lower than the melting temperature of the material. [ 7 ] One common type of dispersion strengthening is oxide dispersion strengthening .
In a polycrystalline metal, grain size has a tremendous influence on the mechanical properties. Because grains usually have varying crystallographic orientations, grain boundaries arise. While undergoing deformation, slip motion will take place. Grain boundaries act as an impediment to dislocation motion for the following two reasons:
1. Dislocation must change its direction of motion due to the differing orientation of grains. [ 4 ] 2. Discontinuity of slip planes from grain one to grain two. [ 4 ]
The stress required to move a dislocation from one grain to another in order to plastically deform a material depends on the grain size. The average number of dislocations per grain decreases with average grain size (see Figure 3). A lower number of dislocations per grain results in a lower dislocation 'pressure' building up at grain boundaries. This makes it more difficult for dislocations to move into adjacent grains. This relationship is the Hall-Petch relationship and can be mathematically described as follows:
σ y = σ y , 0 + k d x {\displaystyle \sigma _{y}=\sigma _{y,0}+{k \over {d^{x}}}} ,
where k {\displaystyle k} is a constant, d {\displaystyle d} is the average grain diameter and σ y , 0 {\displaystyle \sigma _{y,0}} is the original yield stress.
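A small numerical illustration of the Hall–Petch relation, using the commonly quoted exponent x = 1/2 and assumed values for the constants (illustrative only, not material data from this article):

```python
# Illustrative Hall-Petch parameters (assumed, not taken from the text)
sigma_y0 = 70.0          # friction (single-crystal) stress [MPa]
k        = 0.74          # Hall-Petch coefficient [MPa * m^(1/2)]
x        = 0.5           # the commonly used exponent

for d_um in (100.0, 10.0, 1.0, 0.1):                 # grain diameter [micrometres]
    d = d_um * 1e-6                                   # convert to metres
    sigma_y = sigma_y0 + k / d**x                     # Hall-Petch yield stress [MPa]
    print(f"d = {d_um:6.1f} um  ->  sigma_y ~ {sigma_y:6.1f} MPa")
```

The predicted yield stress rises sharply as the grain size shrinks, which is why the breakdown of the relation at very small grain sizes, discussed next, matters in practice.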
The fact that the yield strength increases with decreasing grain size is accompanied by the caveat that the grain size cannot be decreased infinitely. As the grain size decreases, more free volume is generated resulting in lattice mismatch. Below approximately 10 nm, the grain boundaries will tend to slide instead; a phenomenon known as grain-boundary sliding . If the grain size gets too small, it becomes more difficult to fit the dislocations in the grain and the stress required to move them is less. It was not possible to produce materials with grain sizes below 10 nm until recently, so the discovery that strength decreases below a critical grain size is still finding new applications.
This method of hardening is used for steels.
High-strength steels generally fall into three basic categories, classified by the strengthening mechanism employed.
(1) solid-solution-strengthened steels (rephos steels), (2) grain-refined steels, also known as high-strength low-alloy (HSLA) steels, and (3) transformation-hardened steels.
Transformation-hardened steels are the third type of high-strength steels. These steels predominantly use higher levels of C and Mn, along with heat treatment, to increase strength. The finished product has a duplex microstructure of ferrite with varying levels of degenerate martensite, which allows for varying levels of strength. There are three basic types of transformation-hardened steels: dual-phase (DP), transformation-induced plasticity (TRIP), and martensitic steels.
The annealing process for dual-phase steels consists of first holding the steel in the α + γ temperature region for a set period of time. During that time C and Mn diffuse into the austenite, leaving a ferrite of greater purity. The steel is then quenched so that the austenite is transformed into martensite, while the ferrite remains on cooling. The steel is then subjected to a temper cycle to allow some level of martensite decomposition. By controlling the amount of martensite in the steel, as well as the degree of temper, the strength level can be controlled. Depending on processing and chemistry, the strength level can range from 350 to 960 MPa.
TRIP steels also use C and Mn, along with heat treatment, in order to retain small amounts of austenite and bainite in a ferrite matrix. Thermal processing for TRIP steels again involves annealing the steel in the α + γ region for a period of time sufficient to allow C and Mn to diffuse into austenite. The steel is then quenched to a point above the martensite start temperature and held there. This allows the formation of bainite, an austenite decomposition product. While at this temperature, more C is allowed to enrich the retained austenite, which in turn lowers the martensite start temperature to below room temperature. Upon final quenching, a metastable austenite is retained in the predominantly ferrite matrix along with small amounts of bainite (and other forms of decomposed austenite). This combination of microstructures has the added benefits of higher strength and resistance to necking during forming, offering great improvements in formability over other high-strength steels. Essentially, as a TRIP steel is being formed, it becomes much stronger. Tensile strengths of TRIP steels are in the range of 600–960 MPa.
Martensitic steels are also high in C and Mn. They are fully quenched to martensite during processing, and the martensite structure is then tempered back to the appropriate strength level, adding toughness to the steel. Tensile strengths for these steels range as high as 1500 MPa.
Polymers fracture via breaking of inter- and intra molecular bonds; hence, the chemical structure of these materials plays a huge role in increasing strength. For polymers consisting of chains which easily slide past each other, chemical and physical cross linking can be used to increase rigidity and yield strength. In thermoset polymers ( thermosetting plastic ), disulfide bridges and other covalent cross links give rise to a hard structure which can withstand very high temperatures. These cross-links are particularly helpful in improving tensile strength of materials which contain much free volume prone to crazing, typically glassy brittle polymers. [ 8 ] In thermoplastic elastomer , phase separation of dissimilar monomer components leads to association of hard domains within a sea of soft phase, yielding a physical structure with increased strength and rigidity. If yielding occurs by chains sliding past each other (shear bands), the strength can also be increased by introducing kinks into the polymer chains via unsaturated carbon-carbon bonds. [ 8 ]
Adding filler materials such as fibers, platelets, and particles is a commonly employed technique for strengthening polymer materials. Fillers such as clay, silica, and carbon network materials have been extensively researched and used in polymer composites in part due to their effect on mechanical properties. Stiffness-confinement effects near rigid interfaces, such as those between a polymer matrix and stiffer filler materials, enhance the stiffness of composites by restricting polymer chain motion. [ 9 ] This is especially present where fillers are chemically treated to strongly interact with polymer chains, increasing the anchoring of polymer chains to the filler interfaces and thus further restricting the motion of chains away from the interface. [ 10 ] Stiffness-confinement effects have been characterized in model nanocomposites, and shows that composites with length scales on the order of nanometers increase the effect of the fillers on polymer stiffness dramatically. [ 11 ]
Increasing the bulkiness of the monomer unit via incorporation of aryl rings is another strengthening mechanism. The anisotropy of the molecular structure means that these mechanisms are heavily dependent on the direction of applied stress. While aryl rings drastically increase rigidity along the direction of the chain, these materials may still be brittle in perpendicular directions. Macroscopic structure can be adjusted to compensate for this anisotropy . For example, the high strength of Kevlar arises from a stacked multilayer macrostructure where aromatic polymer layers are rotated with respect to their neighbors. When loaded oblique to the chain direction, ductile polymers with flexible linkages, such as oriented polyethylene , are highly prone to shear band formation, so macroscopic structures which place the load parallel to the draw direction would increase strength. [ 8 ]
Mixing polymers is another method of increasing strength, particularly with materials that show crazing preceding brittle fracture such as atactic polystyrene (APS). For example, by forming a 50/50 mixture of APS with polyphenylene oxide (PPO), this embrittling tendency can be almost completely suppressed, substantially increasing the fracture strength. [ 8 ]
Interpenetrating polymer networks (IPNs), consisting of interlacing crosslinked polymer networks that are not covalently bonded to one another, can lead to enhanced strength in polymer materials. The use of an IPN approach imposes compatibility (and thus macroscale homogeneity) on otherwise immiscible blends, allowing for a blending of mechanical properties. For example, silicone-polyurethane IPNs show increased tear and flexural strength over base silicone networks, while preserving the high elastic recovery of the silicone network at high strains. [ 12 ] Increased stiffness can also be achieved by pre-straining polymer networks and then sequentially forming a secondary network within the strained material. This takes advantage of the anisotropic strain hardening of the original network (chain alignment from stretching of the polymer chains) and provides a mechanism whereby the two networks transfer stress to one another due to the imposed strain on the pre-strained network. [ 13 ]
Many silicate glasses are strong in compression but weak in tension. By introducing compression stress into the structure, the tensile strength of the material can be increased. This is typically done via two mechanisms: thermal treatment (tempering) or chemical bath (via ion exchange).
In tempered glasses, air jets are used to rapidly cool the top and bottom surfaces of a softened (hot) slab of glass. Since the surface cools more quickly, there is more free volume at the surface than in the bulk melt. The core of the slab then pulls the surface inward, resulting in an internal compressive stress at the surface. This substantially increases the effective tensile strength of the material, as tensile stresses exerted on the glass must first overcome the surface compressive stresses before the surface experiences net tension.
σ y,modified = σ y,0 + σ compressive {\displaystyle \sigma _{y,{\text{modified}}}=\sigma _{y,0}+\sigma _{\text{compressive}}}
Alternatively, in chemical treatment, a glass slab containing network formers and modifiers is submerged into a molten salt bath containing ions larger than those present in the modifier. Due to a concentration gradient of the ions, mass transport takes place. As the larger cation diffuses from the molten salt into the surface, it replaces the smaller ion from the modifier. The larger ion squeezing into the surface introduces compressive stress in the glass's surface. A common example is treatment of sodium oxide modified silicate glass in molten potassium chloride .
Examples of chemically strengthened glass are Gorilla Glass developed and manufactured by Corning , AGC Inc. 's Dragontrail and Schott AG 's Xensation.
Many of the basic strengthening mechanisms can be classified based on their dimensionality. At 0-D there is precipitate and solid solution strengthening, with point-like particulates and solute atoms strengthening the structure; at 1-D there is work/forest hardening, with line dislocations as the hardening mechanism; and at 2-D there is grain boundary strengthening, with the surface energy of granular interfaces providing the strength improvement. The two primary types of composite strengthening, fiber reinforcement and laminar reinforcement, fall in the 1-D and 2-D classes, respectively. The anisotropy of fiber and laminar composite strength reflects these dimensionalities. The primary idea behind composite strengthening is to combine materials with complementary strengths and weaknesses to create a material which transfers load onto the stiffer constituent but benefits from the ductility and toughness of the softer one. [ 14 ]
Fiber-reinforced composites (FRCs) consist of a matrix of one material containing parallel embedded fibers. There are two variants of fiber-reinforced composites, one with stiff fibers and a ductile matrix and one with ductile fibers and a stiff matrix. The former variant is exemplified by fiberglass which contains very strong but delicate glass fibers embedded in a softer plastic matrix resilient to fracture. The latter variant is found in almost all buildings as reinforced concrete with ductile, high tensile-strength steel rods embedded in brittle, high compressive-strength concrete. In both cases, the matrix and fibers have complementary mechanical properties and the resulting composite material is therefore more practical for applications in the real world.
For a composite containing aligned, stiff fibers which span the length of the material and a soft, ductile matrix, the following descriptions provide a rough model.
The condition of a fiber-reinforced composite under applied tensile stress along the direction of the fibers can be decomposed into four stages from small strain to large strain. Since the stress is parallel to the fibers, the deformation is described by the isostrain condition, i.e., the fiber and matrix experience the same strain. At each stage, the composite stress ( σ c {\displaystyle \sigma _{c}} ) is given in terms of the volume fractions of the fiber and matrix ( V f , V m {\displaystyle V_{f},V_{m}} ), the Young's moduli of the fiber and matrix ( E f , E m {\displaystyle E_{f},E_{m}} ), the strain of the composite ( ϵ c {\displaystyle \epsilon _{c}} ), and the stress of the fiber and matrix as read from a stress-strain curve ( σ f ( ϵ c ) , σ m ( ϵ c ) {\displaystyle \sigma _{f}(\epsilon _{c}),\sigma _{m}(\epsilon _{c})} ).
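A standard rule-of-mixtures form of these stage-wise relations, written under the isostrain assumption and given here for illustration, is: while both fiber and matrix deform elastically, σ c = ( V f E f + V m E m ) ϵ c {\displaystyle \sigma _{c}=(V_{f}E_{f}+V_{m}E_{m})\epsilon _{c}} ; once the matrix yields while the fibers remain elastic, σ c = V f E f ϵ c + V m σ m ( ϵ c ) {\displaystyle \sigma _{c}=V_{f}E_{f}\epsilon _{c}+V_{m}\sigma _{m}(\epsilon _{c})} ; and, in general, σ c = V f σ f ( ϵ c ) + V m σ m ( ϵ c ) {\displaystyle \sigma _{c}=V_{f}\sigma _{f}(\epsilon _{c})+V_{m}\sigma _{m}(\epsilon _{c})} , with the constituent stresses read from their respective stress-strain curves at the common strain ϵ c {\displaystyle \epsilon _{c}} .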
Due to the heterogeneous nature of FRCs, they also feature multiple tensile strengths (TS), one corresponding to each component. Given the assumptions outlined above, the first tensile strength would correspond to failure of the fibers, with some support from the matrix plastic deformation strength, and the second to failure of the matrix.
As a result of the aforementioned dimensionality (1-D) of fiber reinforcement, significant anisotropy is observed in its mechanical properties. The following equations model the tensile strength of a FRC as a function of the misalignment angle ( θ {\displaystyle \theta } ) between the fibers and the applied force, the stresses in the parallel and perpendicular cases, i.e. θ = 0 {\displaystyle \theta =0} and 90 ∘ {\displaystyle 90^{\circ }} ( σ | | , σ ⊥ {\displaystyle \ \sigma _{||},\sigma _{\perp }} ), and the shear strength of the matrix ( τ m y {\displaystyle \tau _{my}} ).
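A common form of these relations, as in Kelly-type models of off-axis composite failure and given here for illustration, is: for small misalignment, failure is fiber-dominated with σ T S ( θ ) = σ | | / cos 2 ⁡ θ {\displaystyle \sigma _{TS}(\theta )=\sigma _{||}/\cos ^{2}\theta } ; at intermediate angles, shear failure of the matrix governs with σ T S ( θ ) = τ m y / ( sin ⁡ θ cos ⁡ θ ) {\displaystyle \sigma _{TS}(\theta )=\tau _{my}/(\sin \theta \cos \theta )} ; and as θ {\displaystyle \theta } approaches 90 ∘ {\displaystyle 90^{\circ }} , transverse matrix failure gives σ T S ( θ ) = σ ⊥ / sin 2 ⁡ θ {\displaystyle \sigma _{TS}(\theta )=\sigma _{\perp }/\sin ^{2}\theta } . At any given misalignment angle, the lowest of the three predictions controls the composite tensile strength.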
Strengthening of materials is useful in many applications. A primary application of strengthened materials is for construction. In order to have stronger buildings and bridges, one must have a strong frame that can support high tensile or compressive load and resist plastic deformation. The steel frame used to make the building should be as strong as possible so that it does not bend under the entire weight of the building. Polymeric roofing materials would also need to be strong so that the roof does not cave in when there is build-up of snow on the rooftop.
Research is also currently being done to increase the strength of metallic materials through the addition of polymer materials such as bonded carbon fiber reinforced polymer (CFRP). [1]
The molecular dynamics (MD) method has been widely applied in materials science as it can yield information about the structure, properties, and dynamics on the atomic scale that cannot be easily resolved with experiments. The fundamental mechanism behind MD simulation is based on classical mechanics, from which we know the force exerted on a particle is given by the negative gradient of the potential energy with respect to the particle position. Therefore, a standard procedure for conducting an MD simulation is to divide time into discrete time steps and repeatedly solve the equations of motion over these intervals to update the positions and energies of the particles. [ 15 ] Direct observation of atomic arrangements and energetics of particles on the atomic scale makes it a powerful tool to study microstructural evolution and strengthening mechanisms.
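As an illustration of this procedure, the following minimal sketch (not drawn from the cited studies; the Lennard-Jones parameters, time step, and initial configuration are arbitrary illustrative values) advances a small cluster of particles with the velocity Verlet scheme:

```python
import numpy as np

def lj_forces(pos, epsilon=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces and total potential energy for N particles."""
    n = len(pos)
    forces = np.zeros_like(pos)
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            r2 = np.dot(rij, rij)
            inv_r6 = (sigma**2 / r2) ** 3
            energy += 4.0 * epsilon * (inv_r6**2 - inv_r6)
            # Force = -dU/dr, directed along the separation vector rij
            f = 24.0 * epsilon * (2.0 * inv_r6**2 - inv_r6) / r2 * rij
            forces[i] += f
            forces[j] -= f
    return forces, energy

def velocity_verlet(pos, vel, mass=1.0, dt=0.005, steps=1000):
    """Repeatedly solve the classical equations of motion over discrete time steps."""
    forces, _ = lj_forces(pos)
    for _ in range(steps):
        pos = pos + vel * dt + 0.5 * (forces / mass) * dt**2
        new_forces, energy = lj_forces(pos)
        vel = vel + 0.5 * (forces + new_forces) / mass * dt
        forces = new_forces
    return pos, vel, energy

# A tiny 4-atom cluster with zero initial velocity (illustrative only).
positions = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0],
                      [0.0, 1.1, 0.0], [0.0, 0.0, 1.1]])
velocities = np.zeros_like(positions)
positions, velocities, U = velocity_verlet(positions, velocities)
print("final potential energy:", U)
```

Production MD codes such as LAMMPS or GROMACS follow the same basic loop but add neighbor lists, thermostats, and periodic boundary conditions to handle millions of atoms efficiently.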
There have been extensive studies on different strengthening mechanisms using MD simulation. These studies reveal microstructural evolution that cannot be easily observed from an experiment or predicted by a simplified model. Han et al. investigated the grain boundary strengthening mechanism and the effects of grain size in nanocrystalline graphene through a series of MD simulations. [ 16 ] Previous studies had observed an inconsistent grain size dependence of the strength of graphene at the nanometer length scale, and the conclusions remained unclear. Therefore, Han et al. utilized MD simulation to observe the structural evolution of graphene with nanosized grains directly. The nanocrystalline graphene samples were generated with random grain shapes and distributions to simulate well-annealed polycrystalline samples. The samples were then loaded with uniaxial tensile stress, and the simulations were carried out at room temperature. By decreasing the grain size of graphene, Han et al. observed a transition from inverse pseudo Hall-Petch behavior to pseudo Hall-Petch behavior, with a critical grain size of 3.1 nm. Based on the arrangement and energetics of the simulated particles, the inverse pseudo Hall-Petch behavior can be attributed to the creation of stress concentration sites due to the increase in the density of grain boundary junctions. Cracks then preferentially nucleate at these sites and the strength decreases. However, when the grain size is below the critical value, the stress concentration at the grain boundary junctions decreases because of stress cancellation between pentagon (5) and heptagon (7) defects. This cancellation helps graphene sustain the tensile load and exhibit pseudo Hall-Petch behavior. This study explains the previously inconsistent experimental observations and provides an in-depth understanding of the grain boundary strengthening mechanism of nanocrystalline graphene, which cannot be easily obtained from either in-situ or ex-situ experiments.
There are also MD studies of precipitate strengthening mechanisms. Shim et al. applied MD simulations to study the precipitate strengthening effects of nanosized body-centered-cubic (bcc) Cu on face-centered-cubic (fcc) Fe. [ 17 ] As discussed in the previous section, precipitate strengthening effects are caused by the interaction between dislocations and precipitates, so the characteristics of the dislocations play an important role in the strengthening effects. It is known that a screw dislocation in bcc metals has very complicated features, including a non-planar core and twinning-anti-twinning asymmetry. This complicates analysis and modeling of the strengthening mechanism, and it cannot easily be revealed by high-resolution electron microscopy. Thus, Shim et al. simulated coherent bcc Cu precipitates with diameters ranging from 1 to 4 nm embedded in the fcc Fe matrix. A screw dislocation was then introduced and driven to glide on a {112} plane by an increasing shear stress until it detached from the precipitates. The shear stress that causes the detachment is regarded as the critical resolved shear stress (CRSS). Shim et al. observed that the screw dislocation velocity in the twinning direction is 2-4 times larger than that in the anti-twinning direction. The reduced velocity in the anti-twinning direction is mainly caused by a transition in the screw dislocation glide from the kink-pair to the cross-kink mechanism. In contrast, a screw dislocation overcomes precipitates of 1–3.5 nm by shearing in the twinning direction. In addition, it has been observed that the screw dislocation detachment mechanism with the larger, transformed precipitates involves annihilation-and-renucleation and Orowan looping in the twinning and anti-twinning directions, respectively. Fully characterizing these mechanisms experimentally would require intensive transmission electron microscopy analysis, and a comprehensive characterization is normally hard to obtain.
A similar study was done by Zhang et al. on the solid solution strengthening of Co, Ru, and Re at different concentrations in fcc Ni. [ 18 ] An edge dislocation was positioned at the center of the Ni and its slip system was set to be <110> {111}. Shear stress was then applied to the top and bottom surfaces of the Ni, with a solute atom (Co, Ru, or Re) embedded at the center, at 300 K. Previous studies have shown that the general view of size and modulus effects cannot fully explain the solid solution strengthening caused by Re in this system due to their small values. [ 19 ] Zhang et al. took a step further and combined first-principles DFT calculations with MD to study the influence of stacking fault energy (SFE) on strengthening, as partial dislocations can easily form in this material structure. MD simulation results indicate that Re atoms exert a strong drag on edge dislocation motion, and the DFT calculations reveal a dramatic increase in SFE, which is due to the interaction between host atoms and solute atoms located in the slip plane. Furthermore, similar relations have also been found in fcc Ni embedded with Ru and Co.
These studies are good examples of how the MD method can assist studies of strengthening mechanisms and provide more insight on the atomic scale. However, it is important to note the limitations of the method.
To obtain accurate MD simulation results, it is essential to build a model that properly describes the interatomic potential based on the bonding. Interatomic potentials are approximations rather than exact descriptions of interactions. The accuracy of the description varies significantly with the system and with the complexity of the potential form. For example, if the bonding is dynamic, meaning that the bonding changes depending on the atomic positions, a dedicated interatomic potential is required for the MD simulation to yield accurate results. Therefore, interatomic potentials need to be tailored to the bonding. The following interatomic potential models are commonly used in materials science: the Born-Mayer potential, the Morse potential, the Lennard-Jones potential, and the Mie potential. [ 20 ] Although they give very similar results for the variation of potential energy with respect to particle position, there is a non-negligible difference in their repulsive tails. These differing characteristics make each of them better suited to describing materials systems with particular types of chemical bonding.
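As a simple illustration of how two common potential forms can share a well depth and equilibrium spacing yet differ away from the minimum, the following sketch evaluates the Lennard-Jones and Morse potentials over a range of separations; the parameters are arbitrary illustrative values, not fitted to any real material.

```python
import numpy as np

def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """U(r) = 4*eps*[(sigma/r)^12 - (sigma/r)^6]; well of depth eps at r = 2**(1/6)*sigma."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6**2 - sr6)

def morse(r, D=1.0, a=1.5, r0=2.0 ** (1.0 / 6.0)):
    """U(r) = D*[(1 - exp(-a*(r - r0)))^2 - 1]; well of depth D at r = r0."""
    return D * ((1.0 - np.exp(-a * (r - r0))) ** 2 - 1.0)

# Both potentials here share the same well depth and minimum position,
# but their repulsive walls and long-range tails differ noticeably.
for r in np.linspace(0.9, 3.0, 8):
    print(f"r = {r:4.2f}   LJ = {lennard_jones(r):8.3f}   Morse = {morse(r):8.3f}")
```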
In addition to the inherent errors in interatomic potentials, the number of atoms and the number of time steps in MD are limited by the available computational power. Nowadays, it is common to simulate MD systems with several million atoms, and even larger simulations have been achieved. [ 21 ] However, this still limits the length scale of the simulation to roughly a micron in size. The time steps in MD are also very small, so a long simulation will only yield results on the time scale of a few nanoseconds. To further extend the accessible simulation time, it is common to apply a bias potential that changes the barrier height, thereby accelerating the dynamics. This method is called hyperdynamics. [ 22 ] Properly applied, it can typically extend simulation times to microseconds.
Based on the strengthening mechanisms discussed above, researchers are also working on enhancing strength by purposely fabricating nanostructures in materials. Several representative methods are introduced here, including hierarchical nanotwinned structures, pushing the limit of grain size for strengthening, and dislocation engineering.
As mentioned above, hindering dislocation motion provides considerable strengthening to materials. Nanoscale twins – crystalline regions related by symmetry across a twin boundary – can effectively block dislocation motion due to the microstructure change at the interface. [ 23 ] The formation of hierarchical nanotwinned structures pushes this hindrance effect to the extreme through the construction of a complex 3D nanotwinned network. Thus, the careful design of hierarchical nanotwinned structures is of great importance for developing materials with superior strength. For instance, Yue et al. constructed a diamond composite with a hierarchically nanotwinned structure by manipulating the synthesis pressure. The obtained composite showed higher strength than typical engineering metals and ceramics.
The Hall-Petch effect describes how the yield strength of materials increases with decreasing grain size. However, many researchers have found that nanocrystalline materials soften when the grain size decreases below a critical point, which is called the inverse Hall-Petch effect. One interpretation of this phenomenon is that extremely small grains cannot support the dislocation pileups that provide extra stress concentration in larger grains. [ 24 ] At this point, the deformation mechanism changes from dislocation-dominated strain hardening to softening by grain growth and grain rotation. Typically, the inverse Hall-Petch effect occurs at grain sizes ranging from 10 nm to 30 nm, making it hard for nanocrystalline materials to achieve high strength. To push the limit of grain size for strengthening, the hindrance of grain rotation and growth could be achieved by grain boundary stabilization.
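For reference, the Hall-Petch relation is commonly written as σ y = σ 0 + k y d − 1 / 2 {\displaystyle \sigma _{y}=\sigma _{0}+k_{y}d^{-1/2}} , where σ 0 {\displaystyle \sigma _{0}} is a friction stress, k y {\displaystyle k_{y}} is a material-specific strengthening coefficient, and d {\displaystyle d} is the average grain diameter; the inverse Hall-Petch regime corresponds to the breakdown of this scaling below the critical grain size.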
The construction of nanolaminated structures with low-angle grain boundaries is one method to obtain ultrafine-grained materials with ultra-high strength. Lu et al. [ 25 ] applied a very high-rate shear deformation with high strain gradients to the top surface layer of a bulk Ni sample and introduced nanolaminated structures. This material exhibits an ultra-high hardness, higher than any reported ultrafine-grained nickel. The exceptional strength results from the presence of low-angle grain boundaries, which have low-energy states that enhance structural stability.
Another method to stabilize grain boundaries is the addition of nonmetallic impurities. Nonmetallic impurities often segregate to grain boundaries and can affect the strength of materials by changing the grain boundary energy. Rupert et al. [ 26 ] conducted first-principles simulations to study the impact of common nonmetallic impurities on the Σ5 (310) grain boundary energy in Cu. They claimed that a decrease in the covalent radius of the impurity and an increase in its electronegativity lead to an increase of the grain boundary energy and further strengthen the material. For instance, boron stabilizes the grain boundary by enhancing the charge density among the adjacent Cu atoms, improving the bonding between the adjoining grains.
Previous studies on the impact of dislocations on material strengthening mainly focused on high-density dislocations, which are effective for enhancing strength at the cost of reduced ductility. Engineering the structure and distribution of dislocations is a promising route to improving the overall performance of materials.
Solutes tend to aggregate at dislocations, which makes them promising for dislocation engineering. Kimura et al. [ 27 ] conducted atom probe tomography and observed the aggregation of niobium atoms at dislocations. The segregation energy was calculated to be almost the same as the grain boundary segregation energy. That is to say, the interaction between niobium atoms and dislocations hindered the recovery of dislocations and thus strengthened the material.
Introducing dislocations with heterogeneous characteristics could also be utilized for material strengthening. Lu et al. [ 28 ] introduced ordered oxygen complexes into TiZrHfNb alloy. Unlike the traditional interstitial strengthening, the introduction of the ordered oxygen complexes enhanced the strength of the alloy without the sacrifice of ductility. The mechanism was that the ordered oxygen complexes changed the dislocation motion mode from planar slip to wavy slip and promoted double cross-slip. | https://en.wikipedia.org/wiki/Strengthening_mechanisms_of_materials |
The Strep-tag system is a method which allows the purification and detection of proteins by affinity chromatography . The Strep-tag II is a synthetic peptide consisting of eight amino acids ( Trp - Ser - His - Pro - Gln - Phe - Glu - Lys ). This peptide sequence exhibits intrinsic affinity towards Strep-Tactin, a specifically engineered streptavidin , and can be N- or C- terminally fused to recombinant proteins. By exploiting the highly specific interaction, Strep -tagged proteins can be isolated in one step from crude cell lysates. Because the Strep -tag elutes under gentle, physiological conditions, it is especially suited for the generation of functional proteins. [ 1 ] [ 2 ]
Strep-tag, Twin-Strep-tag and Strep-Tactin are registered trademarks of IBA Lifesciences GmbH .
Streptavidin is a tetrameric protein expressed in Streptomyces avidinii . Because of Streptavidin's high affinity for vitamin H ( biotin ), Streptavidin is commonly used in the fields of molecular biology and biotechnology . The Strep-tag was originally selected from a genetic library to specifically bind to a proteolytically truncated "core" version of streptavidin. Over the years, the Strep-tag was systematically optimized to permit greater flexibility in the choice of attachment site. Further, its interaction partner, Streptavidin, was also optimized to increase peptide-binding capacity, which resulted in the development of Strep-Tactin. The binding affinity of Strep-tag to Strep-Tactin is nearly 100 times higher than that of Strep-tag to Streptavidin. The so-called Strep-tag system, consisting of Strep-tag and Strep-Tactin, has proven particularly useful for the functional isolation and analysis of protein complexes in proteome research. [ 3 ]
Just like other short-affinity tags ( His-tag , FLAG-tag ), the Strep-tag can be easily fused to recombinant proteins during subcloning of its cDNA or gene . For its expression, various vectors for various host organisms ( E. coli , yeast , insect , and mammalian cells) are available. [ 4 ] A particular benefit of the Strep-tag is its rather small size and the fact that it is biochemically almost inert . Therefore, protein folding or secretion is not influenced and usually it does not interfere with protein function. Strep-tag is especially suited for analysis of functional proteins, because the purification procedure can be kept under physiological conditions. This not only allows the isolation of sensitive proteins in a native state, but it is also possible to purify intact protein complexes, [ 5 ] even if just one subunit carries the tag.
In the first step of the Strep-tag purification cycle, the cell lysate containing Strep-tag fusion protein is applied to a column with immobilized Strep-Tactin (step 1). After the tagged protein has specifically bound to Strep-Tactin, a short washing step with a physiological buffer (e.g. phosphate buffered saline, PBS) removes all other host proteins (step 2). This is due to Strep-Tactin's low tendency to bind proteins nonspecifically. Then, the purified Strep-tag fusion protein is gently eluted with a low concentration of desthiobiotin , which specifically competes for the biotin binding pocket (step 3). To regenerate the column, desthiobiotin is removed by application of a HABA-containing solution (a yellow azo dye ). The removal of desthiobiotin is indicated by a color change from yellow-orange to red (step 4+5).
Finally, the HABA solution is washed out with a small volume of running buffer, thus making the column ready to use for the next purification run.
The Strep-tag system offers a selective tool to purify proteins under physiological conditions. The proteins obtained are bioactive and display a very high purity (above 95%). Also, the Strep-tag system can be used for protein detection in various assays. Depending on the experimental circumstances, either Strep-tag antibodies or Strep-Tactin conjugated to an enzymatic (e.g. horseradish peroxidase (HRP), alkaline phosphatase (AP)) or fluorescent (e.g. green fluorescent protein (GFP)) marker can be used. If high purity is required, the lysate can be purified by first using Strep-Tactin and then performing a second run using antibodies against Strep-tag. This reduces contamination with nonspecifically bound proteins, which might occur in some rare scenarios.
A range of assays can be conducted using the Strep-tag detection system.
Because the Strep-tag is capable of isolating protein complexes, strategies for the study of protein-protein interactions can also be conducted. Another option is the immobilization of Strep-tag proteins with a specific high affinity antibody on microplates or biochips.
Strep-Tag/StrepTactin system is also used in single-molecule optical tweezers and atomic force microscope experiments, showing high mechanical stability comparable to the strongest non-covalent linkages currently available. [ 6 ] | https://en.wikipedia.org/wiki/Strep-tag |
Strep Tamer is a technology which allows the reversible isolation and staining of antigen -specific T-cells. This technology combines a current T-cell isolation method with the Strep-Tag technology. In principle, the T-cells are separated by establishing a specific interaction between the T-cell of interest and a molecule , that is conjugated to a marker which enables the isolation. The reversibility of this interaction and the low temperatures at which it is performed allow for the isolation and characterization of functional T-cells. Because T-cells remain phenotypically and functionally indistinguishable from untreated cells, this method offers modern strategies in clinical and basic T-cell research. [ 1 ]
T cells play an important role in the adaptive immune system . They are capable of orchestrating, regulating and coordinating complex immune responses. A wide array of clinically relevant aspects are associated with the function or malfunction of T-cells: Autoimmune diseases , control of viral or bacterial pathogens , development of cancer or graft versus host responses.
Over the past years, various methods ( ELISpot Assay , intracellular cytokine staining , secretion assay ) have been developed for the identification of T cells, but only major histocompatibility complex (MHC) procedures allow identification and purification of antigen-specific T cells independent of their functional status.
In principle, MHC procedures use the T cell receptor (TCR) ligand , which is the MHC-peptide complex, as a staining probe. The MHC interacts with the TCR, which in turn is expressed on the T cells. Because the TCR and MHC have only a very weak affinity for each other, monomeric MHC-epitope complexes cannot provide stable binding. This problem can be solved by using multimerized MHC-epitopes, which increases the binding avidity and therefore allows stable binding. Fluorochromes conjugated to the MHC-multimers can then be used for identification of T cells by flow cytometry .
Nowadays, MHC molecules can be produced recombinantly together with the antigenic peptides which are known for a fast-growing number of diseases.
The Streptamer staining principle combines the classic method of T cell isolation by MHC-multimers with the Strep-tag / Strep-Tactin technology. The Strep -tag is a short peptide sequence that displays moderate binding affinity for the biotin -binding site of a mutated streptavidin molecule, called Strep-Tactin. For the Streptamer technology, the Strep-Tactin molecules are multimerized and form the "backbone", thus creating a platform for binding to strep-tagged proteins . Additionally, the Strep-Tactin backbone has a fluorescent label to allow flow cytometry analysis. Incubation of MHC-Strep-tag fusion proteins with the Strep-Tactin backbone results in the formation of a MHC-multimer, which is capable for antigen-specific staining of T cells.
Because the molecule d-biotin has a much higher affinity to Strep-Tactin than Strep-tag, it can effectively compete for the binding site . [ 2 ] [ 3 ] Therefore, a MHC multimer based on the interaction of Strep-tag with Strep-Tactin is easily disrupted in the presence of relatively low concentrations of d-biotin. Without the Strep-Tactin backbone, the single MHC-Strep-tag fusion proteins spontaneously detach from the TCR of the T cell, because of weak binding affinities (monomeric MHC-epitope complexes cannot provide stable binding, see above). | https://en.wikipedia.org/wiki/Streptamer |
Streptomyces isolates have yielded the majority of human, animal, and agricultural antibiotics, as well as a number of fundamental chemotherapy medicines. Streptomyces is the largest antibiotic -producing genus of Actinomycetota , producing chemotherapy, antibacterial, antifungal , antiparasitic drugs, and immunosuppressants . [ 1 ] Streptomyces isolates are typically initiated with the aerial hyphal formation from the mycelium . [ 2 ]
Streptomyces yielded the medicines doxorubicin ( Doxil ), daunorubicin ( DaunoXome ), and streptozotocin ( Zanosar ). Doxorubicin is the precursor to valrubicin ( Valstar ), myocet , and pirarubicin . Daunorubicin is the precursor to idarubicin ( Idamycin ), epirubicin ( Ellence ), and zorubicin . [ citation needed ]
Streptomyces is the original source of dactinomycin ( Cosmegen ), bleomycin ( Blenoxane ), pingyangmycin ( Bleomycin A 5 ), mitomycin C ( Mutamycin ), rebeccamycin , staurosporine (precursor to stauprimide and midostaurin ), neothramycin , aclarubicin , tomaymycin, sibiromycin , and mazethramycin. [ citation needed ]
Derivatives of Streptomycetes isolate migrastatin , including isomigrastatin , dorrigocin A & B, and the synthetic derivative macroketone , are being researched for anticancer activity. [ citation needed ]
Most clinical antibiotics were found during the "golden age of antibiotics" (1940s–1960s). Actinomycin was the first antibiotic isolated from Streptomyces in 1940, followed by streptomycin three years later. Antibiotics from Streptomyces isolates (including various aminoglycosides ) would go on to comprise over two-thirds of all marketed antibiotics. [ citation needed ]
Streptomyces -derived antibiotics include:
Clavulanic acid ( Streptomyces clavuligerus ) is used in combination with some antibiotics (such as amoxicillin ) to weaken bacterial resistance. Novel anti-infectives being developed include the guadinomines (from Streptomyces sp. K01-0509), [ 14 ] inhibitors of the type III secretion system .
Non- Streptomyces actinomycetes , filamentous fungi , and non-filamentous bacteria , have also yielded important antibiotics. [ citation needed ]
Nystatin ( Streptomyces noursei ), amphotericin B ( Streptomyces nodosus ), ossamycin ( Streptomyces hygroscopicus ), and natamycin ( Streptomyces natalensis ) are antifungals isolated from Streptomyces . [ citation needed ]
Sirolimus ( Rapamycin ), ascomycin , and tacrolimus were isolated from Streptomyces . Pimecrolimus is a derivative of ascomycin. Ubenimex is derived from S. olivoreticuli . [ 15 ]
Streptomyces avermitilis synthesizes the antiparasitic ivermectin ( Stromectol ). Other antiparasitics made by Streptomyces include milbemycin oxime , moxidectin , and milbemycin . [ citation needed ]
Traditionally, Escherichia coli is the bacterium of choice for expressing eukaryotic and recombinant genes. E. coli is well understood and has a successful track record producing insulin , the artemisinin precursor artemisinic acid, and filgrastim ( Neupogen ). [ 16 ] [ 17 ] However, use of E. coli has limitations, including misfolding of eukaryotic proteins, insolubility issues, deposition in inclusion bodies, [ 18 ] low secretion efficiency, and secretion only to the periplasmic space.
Streptomyces offers potential advantages including superior secretion mechanisms, higher yields, and a simpler end-product purification process, making it an attractive alternative to E. coli and Bacillus subtilis . [ 18 ]
Streptomyces coelicolor , Streptomyces avermitilis , Streptomyces griseus , and Saccharopolyspora erythraea are capable of secondary metabolite production. Streptomyces coelicolor has proven useful for the heterologous expression of proteins. Methods like "ribosome engineering" have been used to achieve 180-fold higher yields with S. coelicolor . [ 19 ]
StreptomeDB, a directory of Streptomyces isolates, contains over 2400 compounds isolated from more than 1900 strains. [ 20 ] [ 21 ] Streptomyces hygroscopicus and Streptomyces viridochromeogenes produce the herbicide bialaphos . Expansion of Streptomyces screenings have included endophytes , extremophiles , and marine varieties. [ citation needed ]
A recent screening of traditional Chinese medicine (TCM) extracts revealed a Streptomyces that produces a number of antitubercular pluramycins . [ 22 ] Wailupemycins are bio-active pyrones isolated from marine Streptomyces . [ 23 ]
Mayamycin has been shown to have cytotoxic properties. [ 24 ] [ 25 ]
Germicidins are a group of four compounds that act as autoregulatory inhibitors of spore germination . [ 26 ] [ 27 ] | https://en.wikipedia.org/wiki/Streptomyces_isolates
In solid mechanics , a stress concentration (also called a stress raiser or a stress riser or notch sensitivity ) is a location in an object where the stress is significantly greater than the surrounding region. Stress concentrations occur when there are irregularities in the geometry or material of a structural component that cause an interruption to the flow of stress. This arises from such details as holes , grooves , notches and fillets . Stress concentrations may also occur from accidental damage such as nicks and scratches.
The degree of concentration of a discontinuity under typically tensile loads can be expressed as a non-dimensional stress concentration factor K t {\displaystyle K_{t}} , which is the ratio of the highest stress to the nominal far field stress. For a circular hole in an infinite plate, K t = 3 {\displaystyle K_{t}=3} . [ 1 ] The stress concentration factor should not be confused with the stress intensity factor , which is used to define the effect of a crack on the stresses in the region around a crack tip. [ 2 ]
For ductile materials, large loads can cause localised plastic deformation or yielding that will typically occur first at a stress concentration allowing a redistribution of stress and enabling the component to continue to carry load. Brittle materials will typically fail at the stress concentration. However, repeated low level loading may cause a fatigue crack to initiate and slowly grow at a stress concentration leading to the failure of even ductile materials. Fatigue cracks always start at stress raisers, so removing such defects increases the fatigue strength .
Stress concentrations occur when there are irregularities in the geometry or material of a structural component that cause an interruption to the flow of stress.
Geometric discontinuities cause an object to experience a localised increase in stress. Examples of shapes that cause stress concentrations are sharp internal corners, holes, and sudden changes in the cross-sectional area of the object as well as unintentional damage such as nicks, scratches and cracks. High local stresses can cause objects to fail more quickly, so engineers typically design the geometry to minimize stress concentrations.
Material discontinuities, such as inclusions in metals, may also concentrate the stress. Inclusions on the surface of a component may be broken from machining during manufacture leading to microcracks that grow in service from cyclic loading. Internally, the failure of the interfaces around inclusions during loading may lead to static failure by microvoid coalescence .
The stress concentration factor , K t {\displaystyle K_{t}} , is the ratio of the highest stress σ max {\displaystyle \sigma _{\max }} to a nominal stress σ nom {\displaystyle \sigma _{\text{nom}}} of the gross cross-section and is defined as [ 3 ] K t = σ max / σ nom {\displaystyle K_{t}={\frac {\sigma _{\max }}{\sigma _{\text{nom}}}}}
Note that the dimensionless stress concentration factor is a function of the geometry shape and independent of its size. [ 4 ] These factors can be found in typical engineering reference materials.
E. Kirsch derived the equations for the elastic stress distribution around a hole . The maximum stress felt near a hole or notch occurs in the area of lowest radius of curvature . In an elliptical hole of length 2 a {\displaystyle 2a} and width 2 b {\displaystyle 2b} , under a far-field stress σ 0 {\displaystyle \sigma _{0}} , the stress at the ends of the major axes is given by Inglis' equation: [ 5 ] σ max = σ 0 ( 1 + 2 a / b ) = σ 0 ( 1 + 2 a / ρ ) {\displaystyle \sigma _{\max }=\sigma _{0}\left(1+2{\frac {a}{b}}\right)=\sigma _{0}\left(1+2{\sqrt {\frac {a}{\rho }}}\right)}
where ρ {\displaystyle \rho } is the radius of curvature of the elliptical hole. For circular holes in an infinite plate where a = b {\displaystyle a=b} , the stress concentration factor is K t = 3 {\displaystyle K_{t}=3} .
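As a quick numerical check of these relations (illustrative only; the expressions above apply to an elliptical hole in an infinite plate under remote tension), the following snippet evaluates the stress concentration factor for a few aspect ratios, recovering K t = 3 {\displaystyle K_{t}=3} for the circular hole:

```python
import math

def kt_elliptical_hole(a, b):
    """Stress concentration factor K_t = 1 + 2*a/b for an elliptical hole
    in an infinite plate, with the major axis 2a normal to the remote tension."""
    return 1.0 + 2.0 * a / b

def kt_from_curvature(a, rho):
    """Equivalent form K_t = 1 + 2*sqrt(a/rho), where rho = b**2/a is the
    radius of curvature at the end of the major axis."""
    return 1.0 + 2.0 * math.sqrt(a / rho)

for a, b in [(1.0, 1.0), (2.0, 1.0), (5.0, 1.0)]:
    rho = b**2 / a  # tip radius of curvature of the ellipse
    print(f"a/b = {a/b:3.1f}:  K_t = {kt_elliptical_hole(a, b):4.1f}"
          f"  (curvature form: {kt_from_curvature(a, rho):4.1f})")
```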
As the radius of curvature approaches zero, such as at the tip of a sharp crack, the maximum stress approaches infinity and a stress concentration factor cannot therefore be used for a crack. Instead, the stress intensity factor which defines the scaling of the stress field around a crack tip, is used. [ 2 ]
Stress concentration can arise due to various factors. The following are the main causes of stress concentration:
Material Defects : When designing mechanical components, it is generally presumed that the material used is consistent and homogeneous throughout. In practice, however, material inconsistencies such as internal cracks, blowholes, cavities in welds, air holes in metal parts, and non-metallic or foreign inclusions can occur. These defects act as discontinuities within the component, disrupting the uniform distribution of stress and thereby leading to stress concentration.
Contact Stress : Mechanical components are frequently subjected to forces that are concentrated at specific points or small areas. This localized application of force can result in disproportionately high pressures at these points, causing stress concentration. Typical instances include the interactions at the points of contact in meshing gear teeth, [ 6 ] the interfaces between cams and followers , and the contact zones in ball bearings .
Thermal Stress : Thermal stress occurs when different parts of a structure expand or contract at different rates due to variations in temperature. This differential in thermal expansion and contraction generates internal stresses, which can lead to areas of stress concentration within the structure.
Geometric Discontinuities : Features such as steps on a shaft, shoulders, and other abrupt changes in the cross-sectional area of components are often necessary for mounting elements like gears and bearings or for assembly considerations. While these features are essential for the functionality of the device, they introduce sharp transitions in geometry that become hotspots for stress concentration. Additionally, design elements like oil holes, grooves, keyways, splines, and screw threads also introduce discontinuities that further exacerbate stress concentration.
Rough Surface : Imperfections on the surface of components, such as machining scratches, stamp marks, or inspection marks, can interrupt the smooth flow of stress across the surface, leading to localized increases in stress. These imperfections, although often small, can significantly impact the durability and performance of mechanical components by initiating stress concentration. [ 7 ]
There are experimental methods for measuring stress concentration factors including photoelastic stress analysis , thermoelastic stress analysis, [ 8 ] brittle coatings or strain gauges .
During the design phase, there are multiple approaches to estimating stress concentration factors. Several catalogs of stress concentration factors have been published. [ 9 ] Perhaps most famous is Stress Concentration Design Factors by Peterson, first published in 1953. [ 10 ] [ 11 ] Finite element methods are commonly used in design today. Other methods include the boundary element method [ 12 ] and meshfree methods .
Stress concentrations can be mitigated through techniques that smoothen the flow of stress around a discontinuity:
Material Removal : Introducing auxiliary holes in the high stress region to create a more gradual transition. The size and position of these holes must be optimized. [ 13 ] [ 14 ] A counter-intuitive example of reducing one of the worst types of stress concentration, a crack , is to drill a large hole at the end of the crack, a technique known as crack tip blunting. The drilled hole, with its relatively large size, serves to increase the effective crack tip radius and thus reduce the stress concentration. [ 4 ]
Hole Reinforcement : Adding higher strength material around the hole, usually in the form of bonded rings or doublers. [ 15 ] Composite reinforcements can reduce the SCF.
Shape Optimization : Adjusting the hole shape, often transitioning from circular to elliptical, to minimize stress gradients. This must be checked for feasibility. One example is adding a fillet to internal corners. [ 16 ] Another example is in a threaded component, where the force flow line is bent as it passes from the shank portion to the threaded portion; as a result, stress concentration takes place. To reduce this, a small undercut is made between the shank and threaded portions.
Functionally Graded Materials : Using materials with properties that vary gradually can reduce the SCF compared to a sudden change in material.
The optimal mitigation technique depends on the specific geometry, loading scenario, and manufacturing constraints. In general, a combination of methods is required for the best result. While there is no universal solution, careful analysis of the stress flow and parameterization of the model can point designers toward an effective stress reduction strategy. | https://en.wikipedia.org/wiki/Stress_concentration |
Stress corrosion cracking ( SCC ) is the growth of crack formation in a corrosive environment. It can lead to unexpected and sudden failure of normally ductile metal alloys subjected to a tensile stress , especially at elevated temperature. SCC is highly chemically specific in that certain alloys are likely to undergo SCC only when exposed to a small number of chemical environments. The chemical environment that causes SCC for a given alloy is often one which is only mildly corrosive to the metal. Hence, metal parts with severe SCC can appear bright and shiny, while being filled with microscopic cracks. This factor makes it common for SCC to go undetected prior to failure. SCC often progresses rapidly, and is more common among alloys than pure metals. The specific environment is of crucial importance, and only very small concentrations of certain highly active chemicals are needed to produce catastrophic cracking, often leading to devastating and unexpected failure. [ 1 ]
The stresses can be the result of the crevice loads due to stress concentration , or can be caused by the type of assembly or residual stresses from fabrication (e.g. cold working); the residual stresses can be relieved by annealing or other surface treatments. Unexpected and premature failure of chemical process equipment, for example, due to stress corrosion cracking constitutes a serious hazard in terms of safety of personnel, operating facilities and the environment. By weakening the reliability of these types of equipment, such failures also adversely affect productivity and profitability.
Stress corrosion cracking mainly affects metals and metallic alloys . A comparable effect also known as environmental stress cracking also affects other materials such as polymers , ceramics and glass .
Lower pH and lower applied redox potential facilitate the evolution and the enrichment of hydrogen during the process of SCC, thus increasing the SCC intensity. [ 2 ]
With the possible exception of the latter, which is a special example of hydrogen cracking , all the others display the phenomenon of subcritical crack growth, i.e. small surface flaws propagate (usually smoothly) under conditions where fracture mechanics predicts that failure should not occur. That is, in the presence of a corrodent, cracks develop and propagate well below the critical stress intensity factor ( K I c {\displaystyle K_{\mathrm {Ic} }} ). The subcritical value of the stress intensity, designated as K I s c c {\displaystyle K_{\mathrm {Iscc} }} , may be less than 1% of K I c {\displaystyle K_{\mathrm {Ic} }} .
A similar process ( environmental stress cracking ) occurs in polymers , when products are exposed to specific solvents or aggressive chemicals such as acids and alkalis . As with metals, attack is confined to specific polymers and particular chemicals. Thus polycarbonate is sensitive to attack by alkalis, but not by acids. On the other hand, polyesters are readily degraded by acids, and SCC is a likely failure mechanism. Polymers are susceptible to environmental stress cracking where attacking agents do not necessarily degrade the materials chemically. Nylon is sensitive to degradation by acids, a process known as hydrolysis , and nylon mouldings will crack when attacked by strong acids.
For example, the fracture surface of a fuel connector showed the progressive growth of the crack from acid attack (Ch) to the final cusp (C) of polymer. In this case the failure was caused by hydrolysis of the polymer by contact with sulfuric acid leaking from a car battery . The degradation reaction is the reverse of the synthesis reaction of the polymer.
Cracks can be formed in many different elastomers by ozone attack, another form of SCC in polymers. Tiny traces of the gas in the air will attack double bonds in rubber chains, with natural rubber , styrene-butadiene rubber, and nitrile butadiene rubber being most sensitive to degradation. Ozone cracks form in products under tension, but the critical strain is very small. The cracks are always oriented at right angles to the strain axis, so will form around the circumference in a rubber tube bent over. Such cracks are dangerous when they occur in fuel pipes because the cracks will grow from the outside exposed surfaces into the bore of the pipe, so fuel leakage and fire may follow. Ozone cracking can be prevented by adding anti-ozonants to the rubber before vulcanization . Ozone cracks were commonly seen in automobile tire sidewalls, but are now seen rarely thanks to the use of these additives. On the other hand, the problem does recur in unprotected products such as rubber tubing and seals.
This effect is significantly less common in ceramics which are typically more resilient to chemical attack. Although phase changes are common in ceramics under stress these usually result in toughening rather than failure (see Zirconium dioxide ). Recent studies have shown that the same driving force for this toughening mechanism can also enhance oxidation of reduced cerium oxide, resulting in slow crack growth and spontaneous failure of dense ceramic bodies. [ 3 ]
Subcritical crack propagation in glasses falls into three regions. In region I, the velocity of crack propagation increases with ambient humidity due to stress-enhanced chemical reaction between the glass and water. In region II, crack propagation velocity is diffusion controlled and dependent on the rate at which chemical reactants can be transported to the tip of the crack. In region III, crack propagation is independent of its environment, having reached a critical stress intensity. Chemicals other than water, like ammonia, can induce subcritical crack propagation in silica glass, but they must have an electron donor site and a proton donor site. [ 4 ] | https://en.wikipedia.org/wiki/Stress_corrosion_cracking |
Stress distribution in soil is a function of the type of soil, the relative rigidity of the soil and the footing, and the depth of foundation at level of contact between footing and soil. [ 1 ] The estimation of vertical stresses at any point in a soil mass due to external loading is essential to the prediction of settlements of buildings, bridges and pressure. [ 2 ] | https://en.wikipedia.org/wiki/Stress_distribution_in_soil |
A stress field is the distribution of internal forces in a body that balance a given set of external forces. Stress fields are widely used in fluid dynamics and materials science . As an example, one can picture the stress field created by adding an extra half-plane of atoms to a crystal , i.e., an edge dislocation. The bonds are stretched around the location of the dislocation, and this stretching causes the stress field to form. Atomic bonds further and further away from the dislocation centre are less and less stretched, which is why the stress field dissipates as the distance from the dislocation centre increases. Each dislocation within the material has a stress field associated with it. The creation of these stress fields is a result of the material trying to dissipate mechanical energy that is being exerted on the material. By convention, these dislocations are labelled as either positive or negative depending on whether the stress field of the dislocation is mostly compressive or tensile.
By modelling dislocations and their stress fields as either positive ( compressive field) or negative ( tensile field) charges, we can understand how dislocations interact with each other in the lattice. If two like fields come into contact with one another, they repel each other. On the other hand, if two opposing fields come into contact with one another, they attract each other. These two interactions both strengthen the material, in different ways. If two like-charged fields come into contact and are confined to a particular region, additional force is required to overcome the repulsion and drive the dislocations past one another. If two oppositely charged fields come into contact with one another, they merge to form a jog. A jog can be modelled as a potential well that traps dislocations, so additional force is needed to pull the dislocations apart. Since dislocation motion is the primary mechanism behind plastic deformation, increasing the stress required to move dislocations directly increases the yield strength of the material.
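The charge analogy can be made quantitative with the standard isotropic-elasticity result for the glide force per unit length between two parallel edge dislocations lying on the same slip plane; the sketch below uses illustrative, aluminium-like elastic constants that are not taken from the text.

```python
import math

def glide_force_per_length(b1, b2, d, G=27e9, nu=0.35):
    """Glide force per unit length (N/m) between two parallel edge dislocations
    on the same slip plane, separated by distance d (m). Positive = repulsive.
    G (shear modulus, Pa) and nu (Poisson's ratio) are illustrative values."""
    return G * b1 * b2 / (2.0 * math.pi * (1.0 - nu) * d)

b = 2.86e-10  # Burgers vector magnitude (m), illustrative
d = 50e-9     # dislocation separation (m)

# Like-signed dislocations (same sign of Burgers vector) repel:
print("like signs:     F/L = %+.3e N/m" % glide_force_per_length(+b, +b, d))
# Opposite-signed dislocations attract (negative force) and tend to annihilate or form jogs:
print("opposite signs: F/L = %+.3e N/m" % glide_force_per_length(+b, -b, d))
```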
The theory of stress fields can be applied to various strengthening mechanisms for materials. Stress fields can be created by adding different sized atoms to the lattice (solute strengthening). If a smaller atom is added to the lattice, a tensile stress field is created. The atomic bonds are longer due to the smaller radius of the solute atom. Similarly, if a larger atom is added to the lattice, a compressive stress field is created. The atomic bonds are shorter due to the larger radius of the solute atom. The stress fields created by adding solute atoms form the basis of the material strengthening process that occurs in alloys . | https://en.wikipedia.org/wiki/Stress_field |
In cellular biology , stress granules are biomolecular condensates in the cytosol composed of proteins and RNA that assemble into 0.1–2 μm membraneless organelles when the cell is under stress . [ 1 ] [ 2 ] The mRNA molecules found in stress granules are stalled translation pre-initiation complexes associated with 40S ribosomal subunits, translation initiation factors , poly(A)+ mRNA and RNA-binding proteins (RBPs) . While they are membraneless organelles, stress granules have been proposed to be associated with the endoplasmic reticulum . [ 3 ] There are also nuclear stress granules. This article is about the cytosolic variety.
The function of stress granules remains largely unknown. Stress granules have long been proposed to function in protecting RNA from harmful conditions, hence their appearance under stress. [ 4 ] The accumulation of RNA into dense globules could keep them from reacting with harmful chemicals and safeguard the information coded in their RNA sequence .
Stress granules might also function as a decision point for untranslated mRNA. Molecules can go down one of three paths: further storage, degradation, or re-initiation of translation . [ 5 ] Conversely, it has also been argued that stress granules are not important sites for mRNA storage nor do they serve as an intermediate location for mRNA in transit between a state of storage and a state of degradation. [ 6 ]
Efforts to identify all RNA within stress granules (the stress granule transcriptome ) in an unbiased way by sequencing RNA from biochemically purified stress granule "cores" have shown that RNA are not recruited to stress granules in a sequence-specific manner, but rather generically, with longer and/or less-optimally translated transcripts being enriched. [ 7 ] These data imply that the stress granule transcriptome is influenced by the valency of RNA (for proteins or other RNA) and by the rates of RNA run-off from polysomes . The latter is further supported by recent single molecule imaging studies. [ 8 ] Furthermore, it was estimated that only about 15% of the total mRNA in the cell is localized to stress granules, [ 7 ] suggesting that stress granules only influence a minority of mRNA in the cell and may not be as important for mRNA processing as previously thought. [ 7 ] [ 9 ] That said, these studies represent only a snapshot in time, and it is likely that a larger fraction of mRNA are at one point stored in stress granules due to those RNA transiting in and out.
The stress proteins that are the main component of stress granules in plant cells are molecular chaperones that sequester, protect, and possibly repair proteins that unfold during heat and other types of stress. [ 10 ] [ 11 ] Therefore, any association of mRNA with stress granules may simply be a side effect of the association of partially unfolded RNA-binding proteins with stress granules, [ 12 ] similar to the association of mRNA with proteasomes . [ 13 ]
DHX9 stress granules are a distinct type of stress granule built around the DHX9 helicase, which acts on double-stranded RNA , but not on DNA , to promote cell survival. [ 14 ] DHX9 stress granules act as a non-membrane-bound cytoplasmic compartment to safeguard daughter cells from parental RNA damage. [ 14 ] Assembly of DHX9 stress granules appears to be a dedicated mechanism in mammalian cells for protecting against RNA crosslinking damage. [ 14 ]
Environmental stressors trigger cellular signaling, eventually leading to the formation of stress granules. In vitro , these stressors can include heat, cold, oxidative stress (sodium arsenite), endoplasmic reticulum stress ( thapsigargin ), proteasome inhibition ( MG132 ), hyperosmotic stress , ultraviolet radiation , inhibition of eIF4A ( pateamine A , hippuristanol , or RocA ), nitric oxide accumulation after treatment with 3-morpholinosydnonimine (SIN-1) , [ 15 ] perturbation of pre- mRNA splicing , [ 16 ] and other stressors, like puromycin , which result in disassembled polysomes . [ 17 ] Many of these stressors result in the activation of particular stress-associated kinases (HRI, PERK, PKR, and GCN2), translational inhibition and stress granule formation. [ 17 ] Stress granules will also form upon Gαq activation in a mechanism that involves the release of stress granule associated proteins from the cytosolic population of the Gαq effector phospholipase C β. [ 18 ]
Stress granule formation is often downstream of the stress-activated phosphorylation of eukaryotic translation initiation factor eIF2α ; this does not hold true for all types of stressors that induce stress granules, [ 17 ] for instance, eIF4A inhibition. Further downstream, prion -like aggregation of the protein TIA-1 promotes the formation of stress granules. The term prion -like is used because aggregation of TIA-1 is concentration dependent, inhibited by chaperones , and because the aggregates are resistant to proteases . [ 19 ] It has also been proposed that microtubules play a role in the formation of stress granules, perhaps by transporting granule components. This hypothesis is based on the fact that disruption of microtubules with the chemical nocodazole blocks the appearance of the granules. [ 20 ] Furthermore, many signaling molecules have been shown to regulate the formation or dynamics of stress granules; these include the "master energy sensor" AMP-activated protein kinase (AMPK) , [ 21 ] the O-GlcNAc transferase enzyme (OGT) , [ 22 ] and the pro-apoptotic kinase ROCK1 . [ 23 ]
RNA phase transitions driven in part by intermolecular RNA-RNA interactions may play a role in stress granule formation. Similar to intrinsically disordered proteins, total RNA extracts are capable of undergoing phase separation in physiological conditions in vitro . [ 24 ] RNA-seq analyses demonstrate that these assemblies share a largely overlapping transcriptome with stress granules, [ 24 ] [ 7 ] with RNA enrichment in both being predominantly based on the length of the RNA. Further, stress granules contain many RNA helicases , [ 25 ] including the DEAD/H-box helicases Ded1p / DDX3 , eIF4A1 , and RHAU . [ 26 ] In yeast, catalytic ded1 mutant alleles give rise to constitutive stress granules, [ 27 ] ATPase-deficient DDX3X (the mammalian homolog of Ded1) mutant alleles are found in pediatric medulloblastoma , [ 28 ] and these coincide with constitutive granular assemblies in patient cells. [ 29 ] These mutant DDX3 proteins promote stress granule assembly in HeLa cells. [ 29 ] In mammalian cells, RHAU mutants lead to reduced stress granule dynamics. [ 26 ] Thus, some hypothesize that RNA aggregation facilitated by intermolecular RNA-RNA interactions plays a role in stress granule formation, and that this role may be regulated by RNA helicases . [ 30 ] There is also evidence that RNA within stress granules is more compacted compared to RNA in the cytoplasm , and that the RNA is post-transcriptionally modified by N6-methyladenosine (m 6 A) on its 5' ends or by RNA acetylation (ac4C). [ 31 ] [ 32 ] [ 33 ] Recent work has shown that the highly abundant translation initiation factor and DEAD-box protein eIF4A limits stress granule formation. It does so through its ability to bind ATP and RNA, acting analogously to protein chaperones like Hsp70 . [ 34 ]
Stress granules and P-bodies (processing bodies) share RNA and protein components, both appear under stress, and can physically associate with one another. As of 2018, of the ~660 proteins identified as localizing to stress granules, ~11% also have been identified as processing body-localized proteins (see below). The protein G3BP1 is necessary for the proper docking of processing bodies and stress granules to each other, which may be important for the preservation of polyadenylated mRNA. [ 35 ]
Although some protein components are shared between stress granules and processing bodies, the majority of proteins in either structure are uniquely localized to either structure. [ 36 ] While both stress granules and P-bodies are associated with mRNA , processing bodies have been long proposed to be sites of mRNA degradation because they contain enzymes such as DCP1/2 and XRN1 that are known to degrade mRNA. [ 37 ] However, others have demonstrated that mRNA associated with processing bodies are largely translationally repressed but not degraded. [ 36 ] It has also been proposed that mRNA selected for degradation are passed from stress granules to processing bodies, [ 37 ] though there is also data suggesting that processing bodies precede and promote stress granule formation. [ 38 ]
The complete proteome of stress granules is still unknown, but efforts have been made to catalog all of the proteins that have been experimentally demonstrated to transit into stress granules. [ 39 ] [ 40 ] [ 41 ] Importantly, different stressors can result in stress granules with different protein components. [ 17 ] Many stress granule-associated proteins have been identified by transiently stressing cultured cells and utilizing microscopy to detect the localization of a protein of interest either by expressing that protein fused to a fluorescent protein (i.e. green fluorescent protein (GFP)) and/or by fixing cells and using antibodies to detect the protein of interest along with known protein markers of stress granules ( immunocytochemistry ). [ 42 ]
In 2016, stress granule "cores" were experimentally identified and then biochemically purified for the first time. Proteins in the cores were identified in an unbiased manner using mass spectrometry . This technical advance led to the identification of hundreds of new stress granule-localized proteins. [ 43 ] [ 25 ] [ 44 ]
The proteome of stress granules has also been experimentally determined by using two slightly different proximity labeling approaches. One of these proximity labeling approaches is the ascorbate peroxidase (APEX) method, in which cells are engineered to express a known stress granule protein, such as G3BP1 , fused to a modified ascorbate peroxidase enzyme called APEX. [ 39 ] [ 45 ] Upon incubating the cells in biotin and treating the cells with hydrogen peroxide , the APEX enzyme will be briefly activated to biotinylate all proteins in close proximity to the protein of interest, in this case G3BP1 within stress granules. Proteins that are biotinylated can then be isolated via streptavidin and identified using mass spectrometry . The APEX technique was used to identify ~260 stress granule-associated proteins in several cell types, including neurons , and with various stressors. Of the 260 proteins identified in this study, ~143 had not previously been demonstrated to be stress granule-associated. [ 45 ]
Another proximity labeling method used to determine the proteome of stress granules is BioID. [ 46 ] BioID is similar to the APEX approach, in that a biotinylating protein (BirA* instead of APEX) was expressed in cells as a fusion protein with several known stress granule-associated proteins. Proteins in close proximity to BirA* will be biotinylated and are then identified by mass spectrometry . Youn et al. used this method to identify/predict 138 proteins as stress granule-associated and 42 as processing body-associated. [ 46 ]
A curated database of stress granule-associated proteins is available online. [ 41 ]
The following is a list of proteins that have been demonstrated to localize to stress granules (compiled from [ 39 ] [ 40 ] [ 25 ] [ 45 ] [ 46 ] [ 47 ] ):
| https://en.wikipedia.org/wiki/Stress_granule
In fracture mechanics , the stress intensity factor ( K ) is used to predict the stress state ("stress intensity") near the tip of a crack or notch caused by a remote load or residual stresses . [ 1 ] It is a theoretical construct usually applied to a homogeneous, linear elastic material and is useful for providing a failure criterion for brittle materials, and is a critical technique in the discipline of damage tolerance . The concept can also be applied to materials that exhibit small-scale yielding at a crack tip.
The magnitude of K depends on specimen geometry, the size and location of the crack or notch, and the magnitude and the distribution of loads on the material. It can be written as: [ 2 ] [ 3 ]
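In its standard form this can be written as K = f ( a / W ) σ π a {\displaystyle K=f(a/W)\,\sigma {\sqrt {\pi a}}} ,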
where f ( a / W ) {\displaystyle f(a/W)} is a specimen geometry dependent function of the crack length, a , and the specimen width, W , and σ is the applied stress.
Linear elastic theory predicts that the stress distribution ( σ i j {\displaystyle \sigma _{ij}} ) near the crack tip, in polar coordinates ( r , θ {\displaystyle r,\theta } ) with origin at the crack tip, has the form [ 4 ]
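σ i j ( r , θ ) = K 2 π r f i j ( θ ) {\displaystyle \sigma _{ij}(r,\theta )={\frac {K}{\sqrt {2\pi r}}}\,f_{ij}(\theta )} (the standard leading-order asymptotic form),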
where K is the stress intensity factor (with units of stress × length 1/2 ) and f i j {\displaystyle f_{ij}} is a dimensionless quantity that varies with the load and geometry. Theoretically, as r goes to 0, the stress σ i j {\displaystyle \sigma _{ij}} goes to ∞ {\displaystyle \infty } resulting in a stress singularity. [ 5 ] Practically however, this relation breaks down very close to the tip (small r ) because plasticity typically occurs at stresses exceeding the material's yield strength and the linear elastic solution is no longer applicable. Nonetheless, if the crack-tip plastic zone is small in comparison to the crack length, the asymptotic stress distribution near the crack tip is still applicable.
In 1957, G. Irwin found that the stresses around a crack could be expressed in terms of a scaling factor called the stress intensity factor . He found that a crack subjected to any arbitrary loading could be resolved into three types of linearly independent cracking modes. [ 6 ] These load types are categorized as Mode I, II, or III as shown in the figure. Mode I is an opening ( tensile ) mode where the crack surfaces move directly apart. Mode II is a sliding (in-plane shear ) mode where the crack surfaces slide over one another in a direction perpendicular to the leading edge of the crack. Mode III is a tearing ( antiplane shear ) mode where the crack surfaces move relative to one another and parallel to the leading edge of the crack. Mode I is the most common load type encountered in engineering design.
Different subscripts are used to designate the stress intensity factor for the three different modes. The stress intensity factor for mode I is designated K I {\displaystyle K_{\rm {I}}} and applied to the crack opening mode. The mode II stress intensity factor, K I I {\displaystyle K_{\rm {II}}} , applies to the crack sliding mode and the mode III stress intensity factor, K I I I {\displaystyle K_{\rm {III}}} , applies to the tearing mode. These factors are formally defined as: [ 7 ]
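In the standard convention these limits are K I = lim r → 0 2 π r σ y y ( r , 0 ) {\displaystyle K_{\rm {I}}=\lim _{r\to 0}{\sqrt {2\pi r}}\,\sigma _{yy}(r,0)} , K I I = lim r → 0 2 π r σ x y ( r , 0 ) {\displaystyle K_{\rm {II}}=\lim _{r\to 0}{\sqrt {2\pi r}}\,\sigma _{xy}(r,0)} , and K I I I = lim r → 0 2 π r σ y z ( r , 0 ) {\displaystyle K_{\rm {III}}=\lim _{r\to 0}{\sqrt {2\pi r}}\,\sigma _{yz}(r,0)} , evaluated directly ahead of the crack tip ( θ = 0 {\displaystyle \theta =0} ).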
The mode I stress field expressed in terms of K I {\displaystyle K_{\rm {I}}} is [ 6 ]
and
The displacements are
Where, for plane stress conditions
and for plane strain
For mode II
and
And finally, for mode III
with σ x x = σ y y = σ r r = σ θ θ = σ z z = σ x y = σ r θ = 0 {\displaystyle \sigma _{xx}=\sigma _{yy}=\sigma _{rr}=\sigma _{\theta \theta }=\sigma _{zz}=\sigma _{xy}=\sigma _{r\theta }=0} .
In plane stress conditions, the strain energy release rate ( G {\displaystyle G} ) for a crack under pure mode I, or pure mode II loading is related to the stress intensity factor by:
where E {\displaystyle E} is the Young's modulus and ν {\displaystyle \nu } is the Poisson's ratio of the material. The material is assumed to be isotropic, homogeneous, and linear elastic. The crack is assumed to extend along the direction of the initial crack.
For plane strain conditions, the equivalent relation is a little more complicated:
For pure mode III loading,
where μ {\displaystyle \mu } is the shear modulus . For general loading in plane strain, the linear combination holds:
A similar relation is obtained for plane stress by adding the contributions for the three modes.
The above relations can also be used to connect the J-integral to the stress intensity factor because
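In their standard forms these relations read: G = K I 2 / E {\displaystyle G=K_{\rm {I}}^{2}/E} for mode I in plane stress, G = ( 1 − ν 2 ) K I 2 / E {\displaystyle G=(1-\nu ^{2})K_{\rm {I}}^{2}/E} in plane strain (with the same forms for mode II using K I I {\displaystyle K_{\rm {II}}} ), G = K I I I 2 / ( 2 μ ) {\displaystyle G=K_{\rm {III}}^{2}/(2\mu )} for mode III, and, for general loading in plane strain, G = ( 1 − ν 2 ) ( K I 2 + K I I 2 ) / E + K I I I 2 / ( 2 μ ) {\displaystyle G=(1-\nu ^{2})\left(K_{\rm {I}}^{2}+K_{\rm {II}}^{2}\right)/E+K_{\rm {III}}^{2}/(2\mu )} . For a linear elastic material the identification J = G {\displaystyle J=G} then connects the J-integral to the same combination of stress intensity factors.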
The stress intensity factor, K {\displaystyle K} , is a parameter that amplifies the magnitude of the applied stress that includes the geometrical parameter Y {\displaystyle Y} (load type). Stress intensity in any mode situation is directly proportional to the applied load on the material. If a very sharp crack, or a V- notch can be made in a material, the minimum value of K I {\displaystyle K_{\mathrm {I} }} can be empirically determined, which is the critical value of stress intensity required to propagate the crack. This critical value determined for mode I loading in plane strain is referred to as the critical fracture toughness ( K I c {\displaystyle K_{\mathrm {Ic} }} ) of the material. K I c {\displaystyle K_{\mathrm {Ic} }} has units of stress times the root of a distance (e.g. MN/m 3/2 ). The units of K I c {\displaystyle K_{\mathrm {Ic} }} imply that the fracture stress of the material must be reached over some critical distance in order for K I c {\displaystyle K_{\mathrm {Ic} }} to be reached and crack propagation to occur. The Mode I critical stress intensity factor, K I c {\displaystyle K_{\mathrm {Ic} }} , is the most often used engineering design parameter in fracture mechanics and hence must be understood if we are to design fracture tolerant materials used in bridges, buildings, aircraft, or even bells.
Polishing alone cannot be relied upon to detect a crack. Typically, by the time a crack can be seen it is already very close to the critical stress state predicted by the stress intensity factor [ citation needed ] .
The G-criterion is a fracture criterion that relates the critical stress intensity factor (or fracture toughness) to the stress intensity factors for the three modes. This failure criterion is written as [ 8 ]
where K c {\displaystyle K_{\rm {c}}} is the fracture toughness, E ′ = E / ( 1 − ν 2 ) {\displaystyle E'=E/(1-\nu ^{2})} for plane strain and E ′ = E {\displaystyle E'=E} for plane stress . The critical stress intensity factor for plane stress is often written as K c {\displaystyle K_{\rm {c}}} .
The stress intensity factor for an assumed straight crack of length 2 a {\displaystyle 2a} perpendicular to the loading direction, in an infinite plane, having a uniform stress field σ {\displaystyle \sigma } is [ 5 ] [ 7 ]
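In its standard form this is K I = σ π a {\displaystyle K_{\rm {I}}=\sigma {\sqrt {\pi a}}} . As a minimal worked example, with assumed, illustrative values for the remote stress, half crack length, and fracture toughness, it can be evaluated and compared against a toughness value:

import math

# Mode I stress intensity factor for a through crack of length 2a in an
# infinite plate under remote tension: K_I = sigma * sqrt(pi * a).
# All numerical values below are assumptions for illustration only.
sigma = 100e6   # remote tensile stress, Pa (assumed)
a = 5e-3        # half crack length, m (assumed)
K_Ic = 30e6     # fracture toughness, Pa*sqrt(m) (assumed)

K_I = sigma * math.sqrt(math.pi * a)
print(f"K_I = {K_I / 1e6:.1f} MPa*sqrt(m)")   # about 12.5 MPa*sqrt(m)
print("crack propagates" if K_I >= K_Ic else "crack stable")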
The stress intensity factor at the tip of a penny-shaped crack of radius a {\displaystyle a} in an infinite domain under uniaxial tension σ {\displaystyle \sigma } is [ 1 ]
If the crack is located centrally in a finite plate of width 2 b {\displaystyle 2b} and height 2 h {\displaystyle 2h} , an approximate relation for the stress intensity factor is [ 7 ]
If the crack is not located centrally along the width, i.e., d ≠ b {\displaystyle d\neq b} , the stress intensity factor at location A can be approximated by the series expansion [ 7 ] [ 9 ]
where the factors C n {\displaystyle C_{n}} can be found from fits to stress intensity curves [ 7 ] : 6 for various values of d {\displaystyle d} . A similar (but not identical) expression can be found for tip B of the crack. Alternative expressions for the stress intensity factors at A and B are [ 10 ] : 175
where
with
In the above expressions d {\displaystyle d} is the distance from the center of the crack to the boundary closest to point A . Note that when d = b {\displaystyle d=b} the above expressions do not simplify into the approximate expression for a centered crack.
For a plate having dimensions 2 h × b {\displaystyle 2h\times b} containing an unconstrained edge crack of length a {\displaystyle a} , if the dimensions of the plate are such that h / b ≥ 0.5 {\displaystyle h/b\geq 0.5} and a / b ≤ 0.6 {\displaystyle a/b\leq 0.6} , the stress intensity factor at the crack tip under a uniaxial stress σ {\displaystyle \sigma } is [ 5 ]
For the situation where h / b ≥ 1 {\displaystyle h/b\geq 1} and a / b ≥ 0.3 {\displaystyle a/b\geq 0.3} , the stress intensity factor can be approximated by
For a slanted crack of length 2 a {\displaystyle 2a} in a biaxial stress field with stress σ {\displaystyle \sigma } in the y {\displaystyle y} -direction and α σ {\displaystyle \alpha \sigma } in the x {\displaystyle x} -direction, the stress intensity factors are [ 7 ] [ 11 ]
where β {\displaystyle \beta } is the angle made by the crack with the x {\displaystyle x} -axis.
Consider a plate with dimensions 2 h × 2 b {\displaystyle 2h\times 2b} containing a crack of length 2 a {\displaystyle 2a} . A point force with components F x {\displaystyle F_{x}} and F y {\displaystyle F_{y}} is applied at the point ( x , y {\displaystyle x,y} ) of the plate.
For the situation where the plate is large compared to the size of the crack and the location of the force is relatively close to the crack, i.e., h ≫ a {\displaystyle h\gg a} , b ≫ a {\displaystyle b\gg a} , x ≪ b {\displaystyle x\ll b} , y ≪ h {\displaystyle y\ll h} , the plate can be considered infinite. In that case, for the stress intensity factors for F x {\displaystyle F_{x}} at crack tip B ( x = a {\displaystyle x=a} ) are [ 11 ] [ 12 ]
where
with z = x + i y {\displaystyle z=x+iy} , z ¯ = x − i y {\displaystyle {\bar {z}}=x-iy} , κ = 3 − 4 ν {\displaystyle \kappa =3-4\nu } for plane strain , κ = ( 3 − ν ) / ( 1 + ν ) {\displaystyle \kappa =(3-\nu )/(1+\nu )} for plane stress , and ν {\displaystyle \nu } is the Poisson's ratio .
The stress intensity factors for F y {\displaystyle F_{y}} at tip B are
The stress intensity factors at the tip A ( x = − a {\displaystyle x=-a} ) can be determined from the above relations. For the load F x {\displaystyle F_{x}} at location ( x , y ) {\displaystyle (x,y)} ,
Similarly for the load F y {\displaystyle F_{y}} ,
If the crack is loaded by a point force F y {\displaystyle F_{y}} located at y = 0 {\displaystyle y=0} and − a < x < a {\displaystyle -a<x<a} , the stress intensity factors at point B are [ 7 ]
If the force is distributed uniformly between − a < x < a {\displaystyle -a<x<a} , then the stress intensity factor at tip B is
If the crack spacing is much greater than the crack length (h >> a), the interaction effect between neighboring cracks can be ignored, and the stress intensity factor is equal to that of a single crack of length 2a.
Then the stress intensity factor at crack tip is
K I = σ π a {\displaystyle {\begin{aligned}K_{\rm {I}}&=\sigma {\sqrt {\pi a}}\end{aligned}}}
If the crack length is much greater than the spacing (a >> h ), the cracks can be considered as a stack of semi-infinite cracks.
Then the stress intensity factor at crack tip is
K I = σ h {\displaystyle {\begin{aligned}K_{\rm {I}}&=\sigma {\sqrt {h}}\end{aligned}}}
The stress intensity factor at the crack tip of a compact tension specimen is [ 14 ]
where P {\displaystyle P} is the applied load, B {\displaystyle B} is the thickness of the specimen, a {\displaystyle a} is the crack length, and W {\displaystyle W} is the width of the specimen.
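A minimal sketch of this calculation, assuming the widely tabulated polynomial geometry function for compact tension specimens (as given, for example, in ASTM E399); the polynomial and the numerical inputs below are stand-in assumptions rather than the exact expression cited above:

import math

def k_compact_tension(P, B, W, a):
    """Approximate K at the crack tip of a compact tension specimen.

    Assumes the commonly tabulated geometry polynomial, valid roughly for
    0.2 <= a/W <= 1. P in N; B, W, a in metres. Returns K in Pa*sqrt(m).
    """
    alpha = a / W
    f = ((2 + alpha) / (1 - alpha) ** 1.5) * (
        0.886 + 4.64 * alpha - 13.32 * alpha ** 2
        + 14.72 * alpha ** 3 - 5.6 * alpha ** 4
    )
    return P / (B * math.sqrt(W)) * f

# Illustrative, assumed inputs:
K = k_compact_tension(P=10e3, B=0.0125, W=0.05, a=0.025)
print(K / 1e6, "MPa*sqrt(m)")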
The stress intensity factor at the crack tip of a single-edge notch-bending specimen is [ 14 ]
where P {\displaystyle P} is the applied load, B {\displaystyle B} is the thickness of the specimen, a {\displaystyle a} is the crack length, and W {\displaystyle W} is the width of the specimen. | https://en.wikipedia.org/wiki/Stress_intensity_factor |
Stress majorization is an optimization strategy used in multidimensional scaling (MDS) where, for a set of n {\displaystyle n} m {\displaystyle m} -dimensional data items, a configuration X {\displaystyle X} of n {\displaystyle n} points in r {\displaystyle r} ( ≪ m ) {\displaystyle (\ll m)} -dimensional space is sought that minimizes the so-called stress function σ ( X ) {\displaystyle \sigma (X)} . Usually r {\displaystyle r} is 2 {\displaystyle 2} or 3 {\displaystyle 3} , i.e. the ( n × r ) {\displaystyle (n\times r)} matrix X {\displaystyle X} lists points in 2 − {\displaystyle 2-} or 3 − {\displaystyle 3-} dimensional Euclidean space so that the result may be visualised (i.e. an MDS plot ). The function σ {\displaystyle \sigma } is a cost or loss function that measures the squared differences between ideal ( m {\displaystyle m} -dimensional) distances and actual distances in r -dimensional space. It is defined as:
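σ ( X ) = ∑ i < j w i j ( d i j ( X ) − δ i j ) 2 {\displaystyle \sigma (X)=\sum _{i<j}w_{ij}\left(d_{ij}(X)-\delta _{ij}\right)^{2}} (the usual weighted least-squares form),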
where w i j ≥ 0 {\displaystyle w_{ij}\geq 0} is a weight for the measurement between a pair of points ( i , j ) {\displaystyle (i,j)} , d i j ( X ) {\displaystyle d_{ij}(X)} is the Euclidean distance between i {\displaystyle i} and j {\displaystyle j} and δ i j {\displaystyle \delta _{ij}} is the ideal distance between the points (their separation) in the m {\displaystyle m} -dimensional data space. Note that w i j {\displaystyle w_{ij}} can be used to specify a degree of confidence in the similarity between points (e.g. 0 can be specified if there is no information for a particular pair).
A configuration X {\displaystyle X} which minimizes σ ( X ) {\displaystyle \sigma (X)} gives a plot in which points that are close together correspond to points that are also close together in the original m {\displaystyle m} -dimensional data space.
There are many ways that σ ( X ) {\displaystyle \sigma (X)} could be minimized. For example, Kruskal [ 1 ] recommended an iterative steepest descent approach. However, a significantly better (in terms of guarantees on, and rate of, convergence) method for minimizing stress was introduced by Jan de Leeuw . [ 2 ] De Leeuw's iterative majorization method at each step minimizes a simple convex function which both bounds σ {\displaystyle \sigma } from above and touches the surface of σ {\displaystyle \sigma } at a point Z {\displaystyle Z} , called the supporting point . In convex analysis such a function is called a majorizing function. This iterative majorization process is also referred to as the SMACOF algorithm ("Scaling by MAjorizing a COmplicated Function").
The stress function σ {\displaystyle \sigma } can be expanded as follows:
Note that the first term is a constant C {\displaystyle C} and the second term is quadratic in X {\displaystyle X} (i.e. for the Hessian matrix V {\displaystyle V} the second term is equivalent to tr X ′ V X {\displaystyle X'VX} ) and therefore relatively easily solved. The third term is bounded by:
where B ( Z ) {\displaystyle B(Z)} has:
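b i j = − w i j δ i j d i j ( Z ) {\displaystyle b_{ij}=-{\frac {w_{ij}\delta _{ij}}{d_{ij}(Z)}}} for d i j ( Z ) ≠ 0 , i ≠ j {\displaystyle d_{ij}(Z)\neq 0,\ i\neq j} (the standard off-diagonal entries of the SMACOF majorizing matrix),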
and b i j = 0 {\displaystyle b_{ij}=0} for d i j ( Z ) = 0 , i ≠ j {\displaystyle d_{ij}(Z)=0,i\neq j}
and b i i = − ∑ j = 1 , j ≠ i n b i j {\displaystyle b_{ii}=-\sum _{j=1,j\neq i}^{n}b_{ij}} .
Proof of this inequality is by the Cauchy-Schwarz inequality, see Borg [ 3 ] (pp. 152–153).
Thus, we have a simple quadratic function τ ( X , Z ) {\displaystyle \tau (X,Z)} that majorizes stress:
The iterative minimization procedure is then:
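X ( k ) = V + B ( X ( k − 1 ) ) X ( k − 1 ) {\displaystyle X^{(k)}=V^{+}B\left(X^{(k-1)}\right)X^{(k-1)}} , the Guttman transform, where V + {\displaystyle V^{+}} is the Moore–Penrose pseudoinverse of V {\displaystyle V} ; the update is repeated until the decrease in stress falls below a chosen tolerance.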
This algorithm has been shown to decrease stress monotonically (see de Leeuw [ 2 ] ).
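A minimal sketch of the unweighted SMACOF iteration (all w i j = 1 {\displaystyle w_{ij}=1} ), for which the Guttman transform simplifies to B ( Z ) Z / n {\displaystyle B(Z)Z/n} ; function names, the random initialization, and the convergence settings are illustrative choices:

import numpy as np

def smacof(delta, r=2, n_iter=100, tol=1e-6, seed=0):
    """Unweighted stress majorization: delta is an (n, n) matrix of ideal
    distances; returns an (n, r) configuration X."""
    n = delta.shape[0]
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, r))

    def distances(X):
        diff = X[:, None, :] - X[None, :, :]
        return np.sqrt((diff ** 2).sum(-1))

    def stress(X):
        d = distances(X)
        return ((delta - d)[np.triu_indices(n, 1)] ** 2).sum()

    prev = stress(X)
    for _ in range(n_iter):
        d = distances(X)
        # Build B(Z): b_ij = -delta_ij / d_ij for d_ij > 0, 0 otherwise,
        # and b_ii = -sum of the off-diagonal entries in row i.
        with np.errstate(divide="ignore", invalid="ignore"):
            B = np.where(d > 0, -delta / d, 0.0)
        np.fill_diagonal(B, 0.0)
        np.fill_diagonal(B, -B.sum(axis=1))
        # Guttman transform; with w_ij = 1, V^+ B(Z) Z reduces to B(Z) Z / n.
        X = B @ X / n
        cur = stress(X)
        if prev - cur < tol:
            break
        prev = cur
    return X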
Stress majorization and algorithms similar to SMACOF also have application in the field of graph drawing . [ 4 ] [ 5 ] That is, one can find a reasonably aesthetically appealing layout for a network or graph by minimizing a stress function over the positions of the nodes in the graph. In this case, the δ i j {\displaystyle \delta _{ij}} are usually set to the graph-theoretic distances between nodes i {\displaystyle i} and j {\displaystyle j} and the weights w i j {\displaystyle w_{ij}} are taken to be δ i j − α {\displaystyle \delta _{ij}^{-\alpha }} . Here, α {\displaystyle \alpha } is chosen as a trade-off between preserving long- or short-range ideal distances. Good results have been shown for α = 2 {\displaystyle \alpha =2} . [ 6 ] | https://en.wikipedia.org/wiki/Stress_majorization |
In materials science , stress relaxation is the observed decrease in stress in response to a constant strain imposed on a structure. It occurs primarily because holding the structure in a strained condition for some finite interval of time causes some amount of plastic strain. This should not be confused with creep , which is a constant state of stress with an increasing amount of strain.
Since relaxation relieves the state of stress, it has the effect of also relieving the equipment reactions. Thus, relaxation has the same effect as cold springing, except it occurs over a longer period of time.
The amount of relaxation which takes place is a function of time, temperature and stress level; thus the actual effect it has on the system is not precisely known, but it can be bounded.
Stress relaxation describes how polymers relieve stress under constant strain. Because they are viscoelastic, polymers behave in a nonlinear , non-Hookean fashion. [ 1 ] This nonlinearity is described by both stress relaxation and a phenomenon known as creep , which describes how polymers strain under constant stress. Experimentally, stress relaxation is determined by step strain experiments, i.e. by applying a sudden one-time strain and measuring the build-up and subsequent relaxation of stress in the material (see figure), in either extensional or shear rheology .
Viscoelastic materials have the properties of both viscous and elastic materials and can be modeled by combining elements that represent these characteristics. One viscoelastic model, called the Maxwell model predicts behavior akin to a spring (elastic element) being in series with a dashpot (viscous element), while the Voigt model places these elements in parallel. Although the Maxwell model is good at predicting stress relaxation, it is fairly poor at predicting creep. On the other hand, the Voigt model is good at predicting creep but rather poor at predicting stress relaxation (see viscoelasticity ).
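A minimal numerical sketch of Maxwell-model stress relaxation after a step strain, σ ( t ) = E ϵ 0 e − t / τ {\displaystyle \sigma (t)=E\epsilon _{0}e^{-t/\tau }} with relaxation time τ = η / E {\displaystyle \tau =\eta /E} ; all parameter values are assumptions for illustration:

import numpy as np

# Spring of modulus E in series with a dashpot of viscosity eta, subjected to a
# step strain eps0 at t = 0. All values are illustrative assumptions.
E = 1.0e6       # elastic modulus, Pa
eta = 1.0e8     # viscosity, Pa*s
eps0 = 0.02     # applied step strain
tau = eta / E   # relaxation time, s

t = np.linspace(0.0, 5 * tau, 6)
sigma = E * eps0 * np.exp(-t / tau)
for ti, si in zip(t, sigma):
    print(f"t = {ti:8.1f} s   sigma = {si:10.1f} Pa")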
The extracellular matrix and most tissues are stress relaxing, and the kinetics of stress relaxation have been recognized as an important mechanical cue that affects the migration, proliferation , and differentiation of embedded cells . [ 2 ]
Stress relaxation calculations can differ for different materials:
To generalize, Obukhov uses power dependencies: [ 3 ]
where σ 0 {\displaystyle \sigma _{0}} is the maximum stress at the time the loading was removed ( t* ), and n is a material parameter.
Vegener et al. use a power series to describe stress relaxation in polyamides: [ 3 ]
σ ( t ) = ∑ m , n A m n [ ln ( 1 + t ) ] m ( ϵ 0 ′ ) n {\displaystyle \sigma (t)=\sum _{m,n}^{}{A_{mn}[\ln(1+t)]^{m}(\epsilon '_{0})^{n}}}
To model stress relaxation in glass materials Dowvalter uses the following: [ 3 ]
σ ( t ) = 1 b ⋅ log 10 α ( t − t n ) + 1 10 α ( t − t n ) − 1 {\displaystyle \sigma (t)={\frac {1}{b}}\cdot \log {\frac {10^{\alpha }(t-t_{n})+1}{10^{\alpha }(t-t_{n})-1}}} where α {\displaystyle \alpha } is a material constant and b and t n {\displaystyle t_{n}} depend on processing conditions.
The following non-material parameters all affect stress relaxation in polymers : [ 3 ] | https://en.wikipedia.org/wiki/Stress_relaxation |
Stress resultants are simplified representations of the stress state in structural elements such as beams , plates , or shells . [ 1 ] The geometry of typical structural elements allows the internal stress state to be simplified because of the existence of a "thickness" direction in which the size of the element is much smaller than in other directions. As a consequence, the three traction components that vary from point to point in a cross-section can be replaced with a set of resultant forces and resultant moments. These are the stress resultants (also called membrane forces , shear forces , and bending moment ) that may be used to determine the detailed stress state in the structural element. A three-dimensional problem can then be reduced to a one-dimensional problem (for beams) or a two-dimensional problem (for plates and shells).
Stress resultants are defined as integrals of stress over the thickness of a structural element. The integrals are weighted by integer powers of the thickness coordinate z (or x 3 ). Stress resultants are defined in this way so as to represent the effect of stress as a membrane force N (zeroth power of z ) and a bending moment M (first power of z ) on a beam or shell (structure) . Stress resultants are necessary to eliminate the z dependency of the stress from the equations of the theory of plates and shells.
Consider the element shown in the adjacent figure. Assume that the thickness direction is x 3 . If the element has been extracted from a beam, the width and thickness are comparable in size. Let x 2 be the width direction. Then x 1 is the length direction.
The resultant force vector due to the traction in the cross-section ( A ) perpendicular to the x 1 axis is
where e 1 , e 2 , e 3 are the unit vectors along x 1 , x 2 , and x 3 , respectively. We define the stress resultants such that
where N 11 is the membrane force and V 2 , V 3 are the shear forces. More explicitly, for a beam of height t and width b ,
Similarly the shear force resultants are
The bending moment vector due to stresses in the cross-section A perpendicular to the x 1 -axis is given by
Expanding this expression we have,
We can write the bending moment resultant components as
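A minimal numerical sketch of these resultants for a rectangular beam cross-section: an assumed linear axial stress profile (uniform axial stress plus pure bending) is integrated through the thickness, and the results are checked against the familiar closed-form values. The values and the sign convention are illustrative assumptions.

import numpy as np

b, t = 0.05, 0.02                       # width and height of the section, m (assumed)
x3 = np.linspace(-t / 2, t / 2, 2001)   # thickness coordinate
sigma_axial = 10e6                      # uniform axial stress, Pa (assumed)
sigma_bend = 50e6                       # bending stress at the outer fibre, Pa (assumed)
sigma11 = sigma_axial + sigma_bend * (x3 / (t / 2))

N11 = b * np.trapz(sigma11, x3)         # membrane force: integral of sigma_11 over the area
M = b * np.trapz(x3 * sigma11, x3)      # bending moment resultant: integral of x3 * sigma_11

# Analytical checks: N = sigma_axial*b*t and M = sigma_bend*b*t**2/6
print(N11, sigma_axial * b * t)
print(M, sigma_bend * b * t ** 2 / 6)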
For plates and shells, the x 1 and x 2 dimensions are much larger than the size in the x 3 direction. Integration over the area of cross-section would have to include one of the larger dimensions and would lead to a model that is too simple for practical calculations. For this reason the stresses are only integrated through the thickness and the stress resultants are typically expressed in units of force per unit length (or moment per unit length ) instead of the true force and moment as is the case for beams.
For plates and shells we have to consider two cross-sections. The first is perpendicular to the x 1 axis and the second is perpendicular to the x 2 axis. Following the same procedure as for beams, and keeping in mind that the resultants are now per unit length, we have
We can write the above as
where the membrane forces are defined as
and the shear forces are defined as
For the bending moment resultants, we have
where r = x 3 e 3 .
Expanding these expressions we have,
Define the bending moment resultants such that
Then, the bending moment resultants are given by
These are the resultants that are often found in the literature but care has to be taken to make sure that the signs are correctly interpreted. | https://en.wikipedia.org/wiki/Stress_resultants |
Acoustic or stress wave tomography is a non-destructive measurement method for the visualization of the structural integrity of a solid object. It is being used to test the preservation of wood or concrete , for example. The term acoustic tomography refers to the perceptible sounds that are caused by the mechanical impulses used for measuring. The term stress wave tomography describes the measurement method more accurately.
The method is based on multiple measurements of the time of flight of stress waves between sensors which are connected to a two- or three-dimensional sampling grid. In the acoustic stress wave tomography of trees ( see also: tree diagnosis), concussion sensors are attached in one or several planes around a trunk or a branch and their positions are measured. Impulses are induced through strokes of a hammer and the arrival times at the sensors are recorded.
The propagation speed of impulses in solid objects correlates with the density and the elastic modulus of the material ( see also: speed of sound ). Internal damage, like rot or cracks, slows down the impulses or forms barriers that render transition of impulses more difficult. This leads to longer propagation times and gets interpreted as reduced speed. Apparent velocity of sound is calculated by dividing the smallest distance between sensors by the time of flight between them.
Special mathematical algorithms turn the matrix of velocities into a color or greyscale image (tomogram) which enables an assessment of the extent of damage. The precision of the method is limited by the number of sensors used. Image resolution is inferior to X-ray computed tomography due to the longer wavelength of the signals, but avoids issues with high energy radiation.
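A minimal sketch of the apparent-velocity calculation described above; the sensor positions and times of flight are assumed, illustrative values:

import numpy as np

# Four sensors around a trunk cross-section (coordinates in m) and a symmetric
# matrix of measured times of flight between them (s). Values are illustrative.
pos = np.array([[0.00, 0.15], [0.15, 0.00], [0.00, -0.15], [-0.15, 0.00]])
tof = np.array([[0.0,    95e-6, 180e-6,  95e-6],
                [95e-6,  0.0,    95e-6, 180e-6],
                [180e-6, 95e-6,  0.0,    95e-6],
                [95e-6, 180e-6,  95e-6,  0.0]])

dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
with np.errstate(divide="ignore", invalid="ignore"):
    velocity = np.where(tof > 0, dist / tof, 0.0)   # smallest distance / time of flight
print(np.round(velocity))   # unusually low apparent velocities hint at internal damage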
Devices of this kind are the Arbotom, the PiCUS acoustic tomograph and the Arborsonic 3D. | https://en.wikipedia.org/wiki/Stress_wave_tomography |
A stressed member engine is a vehicle engine used as an active structural element of the chassis to transmit forces and torques, rather than being passively contained by the chassis with anti-vibration mounts . Automotive engineers use the method for weight reduction and mass centralization in vehicles . Applications are found in several vehicles where mass reduction is critical for performance reasons, usually after several iterations of conventional frame/chassis designs have been employed.
The stressed member engine was patented in 1900 by Joah ("John") Carver Phelon and his nephew Harry Rayner, [ 1 ] and the approach was pioneered at least as early as the 1916 Harley-Davidson 8-valve racer and incorporated in the production Harley-Davidson Model W by 1919. [ 2 ] The technique was developed over the 20th century by Vincent and others, and by the end of the century it was a common feature of chassis built by Ducati , BMW and others. As of 2019, the KTM Duke 790 also uses its engine as a stressed member.
Many mid-engine sport cars [ example needed ] have used stressed engine design.
The 1967 Lotus 49 is credited with establishing a solution copied by "everyone" in Formula One . [ 3 ] The requirement that the engine serve as a stressed chassis member is cited as a reason the rules committee changed from an inline-four to a V-6 configuration for the 2014 Formula One season. [ 4 ]
The limited-production De Tomaso Vallelunga mid-engine car prototyped in 1963 used the engine as a stressed member. [ 5 ]
In GM's Chevrolet Bolt and Tesla Motors Model S and Roadster electric cars, the battery pack is a stressed member to increase rigidity. [ 6 ] [ 7 ]
The Fordson tractor Model F, designed during World War I, eliminated the frame to reduce cost of materials and assembly, and was probably influenced by the similar design of the 1913 Wallis Cub. [ 8 ] | https://en.wikipedia.org/wiki/Stressed_member_engine |
In mechanical engineering , stressed skin is a rigid construction in which the skin or covering takes a portion of the structural load, intermediate between monocoque , in which the skin assumes all or most of the load, and a rigid frame, which has a non-loaded covering. Typically, the main frame has a rectangular structure and is triangulated by the covering; a stressed skin structure has localized compression -taking elements (rectangular frame) and distributed tension -taking elements (skin).
A simple framework box with four discrete members is not inherently rigid as it will distort from being square under relatively light loads; however, adding one or more diagonal element(s) that take either tension or compression makes it rigid, because the box cannot deviate from right angles without also altering the diagonals. Sometimes the diagonal elements are flexible like wires, which are used to provide tension, or the elements can be rigid to resist compression, as with a Warren or Pratt truss ; in either case, adding discrete diagonal members results in full frame structures in which the skin contributes very little or nothing to the structural rigidity.
In a stressed-skin design, the skin or outer covering is bonded or pinned to the frame, adding structural rigidity by serving as the triangulating member which resists distortion of the rectangular structure. [ 1 ] The skin provides a significant portion of the overall structural rigidity by taking the in-plane shear stress; however, the skin provides very little resistance to out-of-plane loads. [ 2 ] : 1
These types of structures may also be called semi-monocoque to distinguish them from monocoque designs. There is some overlap between monocoque, semi-monocoque (stressed skin), and rigid frame structures, depending on the proportion of the structural rigidity contributed by the skin. In a monocoque design, the skin assumes all or most of the stress and the structure has fewer discrete framing elements, sometimes including only longitudinal or lateral members. [ 3 ] : 175 In contrast, a rigid frame structure derives only a minor portion of the overall stiffness from the skin, and the discrete framing elements provide the majority.
This stressed skin method of construction is lighter than a full frame structure and not as complex to design as a full monocoque.
William Fairbairn documented the development of the Britannia and Conwy tubular bridges for the Chester and Holyhead Railway in 1849; [ 4 ] in it, Fairbairn describes how Robert Stephenson enlisted his aid to revise Stephenson's original concepts, which would route rail traffic inside riveted steel tubes, supported by chains, with a circular- or egg-shaped cross-section. [ 4 ] : 2 Experiments with scale models led Fairbairn to suggest a hollow rectangular beam instead, with longitudinal stringers on top and bottom fixed firmly to structural coverings: "two longitudinal plates, divided by vertical plates so as to form squares, calculated to resist the crushing strain in the first instance, and the lower parts [...], also longitudinal plates, well-connected with riveted joints, and of considerable thickness to resist the tensile strain in the second". [ 4 ] : 16 This has been credited as the first instance of stressed skin design, also known as sandwich or double hull . [ 5 ]
The first aircraft from the early 1900s were constructed with full frames consisting of wood or steel tube frame members, covered with varnished fabric or plywood, although some companies began developing monocoque structures which were built by bending and laminating thin layers of tulipwood . [ 6 ] : 8 Oswald Short patented an all-metal, stressed-skin wing in the early 1920s. [ 7 ] : 97 [ 8 ] Dr.-Ing Adolf Rohrbach is credited with coining the term "stressed skin" in 1923. [ 7 ] : 169 By 1940, duralumin sheets had replaced wood and nearly all new designs used monocoque construction. [ 6 ] : 8
The adoption of stressed-skin construction resulted in improved aircraft speed and range, accomplished by reduced drag through smoother surfaces, elimination of external bracing, and providing internal space for retractable landing gear. [ 9 ] : 25
Examples include nearly all modern all-metal airplanes , as well as some railway vehicles, buses and motorhomes . The London Transport AEC Routemaster incorporated internal panels riveted to the frames which took most of the structure's shear load. Automobile unibodies are a form of stressed skin as well, as are some framed buildings which lack diagonal bracing. | https://en.wikipedia.org/wiki/Stressed_skin |
A stressor is a chemical or biological agent , environmental condition, external stimulus or an event seen as causing stress to an organism . [ 1 ] Psychologically speaking, a stressor can be events or environments that individuals might consider demanding, challenging, and/or threatening individual safety. [ 2 ]
Events or objects that may trigger a stress response may include:
Stressors can cause physical, chemical and mental responses internally. Physical stressors produce mechanical stresses on skin, bones, ligaments, tendons, muscles and nerves that cause tissue deformation and (in extreme cases) tissue failure. Chemical stresses also produce biomechanical responses associated with metabolism and tissue repair. Physical stressors may produce pain and impair work performance. Chronic pain and impairment requiring medical attention may result from extreme physical stressors or if there is not sufficient recovery time between successive exposures. [ 4 ] [ 5 ] Stressors may also affect mental function and performance. Mental and social stressors may affect behavior and how individuals respond to physical and chemical stressors. [ 6 ]
Social and environmental stressors and the events associated with them can range from minor to traumatic. Traumatic events involve very debilitating stressors, and oftentimes these stressors are uncontrollable. Traumatic events can deplete an individual's coping resources to an extent where the individual may develop acute stress disorder or even post-traumatic stress disorder . People who have been abused, victimized, or terrorized are often more susceptible to stress disorders. [ 7 ] [ 8 ] Most stressor-stress relationships can be evaluated and determined - either by the individual or by a psychologist. Therapeutic measures are often taken to help replenish and rebuild the individual's coping resources while simultaneously aiding the individual in dealing with current stress.
Stressors occur when an individual is unable to cope with the demands of their environment (such as crippling debt with no clear path to resolving it). [ 2 ] Generally, stressors take many forms, such as: traumatic events, life demands, sudden medical emergencies, and daily inconveniences, to name a few. There are also a variety of characteristics that a stressor may possess (different durations, intensity, predictability, and controllability). [ 2 ]
Due to the wide impact and the far-reaching consequences of psychological stressors (especially their profound effects on mental well-being), it is particularly important to devise tools to measure such stressors. Two common psychological stress tests include the Perceived Stress Scale (PSS) [ 9 ] devised by American psychologist Sheldon Cohen , and the Social Readjustment Rating Scale (SRRS) [ 10 ] or the Holmes-Rahe Stress Scale . While the PSS is a traditional Likert scale , the SRRS assigns specific predefined numerical values to stressors.
Traumatic events or any type of shock to the body can cause an acute stress response disorder (ASD). The extent to which one experiences ASD depends on the extent of the shock. If the shock is pushed past a certain extreme, after a particular period of time ASD can develop into what is commonly known as post-traumatic stress disorder (PTSD). [ 11 ] There are two ways that the body responds biologically in order to reduce the amount of stress an individual is experiencing. One is to create stress hormones, which in turn build up energy reservoirs that are available in case a stressful event occurs. The second response occurs at the level of an individual's cells: depending on the situation, cells obtain more energy in order to combat the negative stressor, and other activities those cells are involved in cease. [ 12 ]
One possible mechanism by which stressors influence biological pathways involves stimulation of the hypothalamus , whose release of CRF ( corticotropin release factor ) causes the pituitary gland to release ACTH ( adrenocorticotropic hormone ), which in turn causes the adrenal cortex to secrete various stress hormones (e.g., cortisol ). Stress hormones travel in the blood stream to relevant organs , e.g., glands , heart , intestines , triggering a fight-or-flight response . Alongside this pathway, an alternate path can be taken after the stressor is transferred to the hypothalamus : it leads to the sympathetic nervous system , after which the adrenal medulla secretes epinephrine . [ 6 ]
When individuals are informed about events before they occur, the magnitude of the stressor is less than when compared to individuals who were not informed of the stressor. [ 13 ] For example, an individual would prefer to know when they have a deadline ahead of time in order to prepare for it in advance, rather than find out about the deadline the day of. In knowing that there is a deadline ahead of time, the intensity of the stressor is smaller for the individual, as opposed to the magnitude of intensity for the other unfortunate individual who found out about the deadline the day of. When this was tested, psychologists found that when given the choice, individuals had a preference for the predictable stressors, rather than the unpredictable stressors. [ 14 ] The pathologies caused by the lack of predictability are experienced by some individuals working in fields of emergency medicine , military defense , disaster response and others.
Additionally, the degree to which the stressor can be controlled plays a role in how the individual perceives stress. [ 2 ] Research has found that if an individual is able to take some control over the stressor, then the level of stress will be decreased. In this research, individuals became increasingly anxious and distressed if they were unable to control their environment. [ 15 ] As an example, imagine an individual in the Middle Ages, who detests baths, being made to take a bath. If the individual was forced to take the bath with no control over the temperature of the bath (one of the variables), then their anxiety and stress levels would be higher than if the individual was given some control over the environment (such as being able to control the temperature of the water).
Based on these two principles (predictability and control), two hypotheses attempt to account for these preferences: the preparatory response hypothesis and the safety hypothesis.
The idea behind this hypothesis is that an organism can better prepare for an event if they are informed beforehand, as this allows them to prepare for it (biologically). [ 2 ] In biologically preparing for this event beforehand, the individual is able to better decrease the event's aversiveness. [ 16 ] In knowing when a potential stressor will occur (such as an exam), the individual could, in theory, prepare for it in advance, thus decreasing the stress that may result from that event.
In this hypothesis, there are two time periods, one in which is deemed safe (where there is no stressor), and one which is deemed unsafe (in which the stressor is present). [ 17 ] This is similar to procrastination and cramming; during the safe intervals (weeks before an exam) the individual is relaxed and not anxious, and during the unsafe intervals (the day or night before the exam) the individual most likely experiences anxiety. [ 2 ] | https://en.wikipedia.org/wiki/Stressor |
In the theory of general relativity , a stress–energy–momentum pseudotensor , such as the Landau–Lifshitz pseudotensor , is an extension of the non-gravitational stress–energy tensor that incorporates the energy–momentum of gravity. It allows the energy–momentum of a system of gravitating matter to be defined. In particular it allows the total of matter plus the gravitating energy–momentum to form a conserved current within the framework of general relativity , so that the total energy–momentum crossing the hypersurface (3-dimensional boundary) of any compact space–time hypervolume (4-dimensional submanifold) vanishes.
Some people (such as Erwin Schrödinger [ citation needed ] ) have objected to this derivation on the grounds that pseudotensors are inappropriate objects in general relativity, but the conservation law only requires the use of the 4- divergence of a pseudotensor which is, in this case, a tensor (which also vanishes). Mathematical developments in the 1980s have allowed pseudotensors to be understood as sections of jet bundles , thus providing a firm theoretical foundation for the concept of pseudotensors in general relativity. [ citation needed ]
The Landau–Lifshitz pseudotensor , a stress–energy–momentum pseudotensor for gravity, [ 1 ] when combined with terms for matter (including photons and neutrinos), allows the energy–momentum conservation laws to be extended into general relativity .
Landau and Lifshitz were led by four requirements in their search for a gravitational energy momentum pseudotensor, t LL μ ν {\displaystyle t_{\text{LL}}^{\mu \nu }} : [ 1 ]
Landau and Lifshitz showed that there is a unique construction that satisfies these requirements, namely t LL μ ν = − 1 κ G μ ν + 1 2 κ ( − g ) ( ( − g ) ( g μ ν g α β − g μ α g ν β ) ) , α β {\displaystyle t_{\text{LL}}^{\mu \nu }=-{\frac {1}{\kappa }}G^{\mu \nu }+{\frac {1}{2\kappa (-g)}}\left((-g)\left(g^{\mu \nu }g^{\alpha \beta }-g^{\mu \alpha }g^{\nu \beta }\right)\right)_{,\alpha \beta }} where:
Examining the 4 requirement conditions we can see that the first 3 are relatively easy to demonstrate:
When the Landau–Lifshitz pseudotensor was formulated it was commonly assumed that the cosmological constant , Λ {\displaystyle \Lambda } , was zero. Nowadays, that assumption is suspect , and the expression frequently gains a Λ {\displaystyle \Lambda } term, giving: t LL μ ν = − 1 κ ( G μ ν + Λ g μ ν ) + 1 2 κ ( − g ) ( ( − g ) ( g μ ν g α β − g μ α g ν β ) ) , α β {\displaystyle t_{\text{LL}}^{\mu \nu }=-{\frac {1}{\kappa }}\left(G^{\mu \nu }+\Lambda g^{\mu \nu }\right)+{\frac {1}{2\kappa (-g)}}\left(\left(-g\right)\left(g^{\mu \nu }g^{\alpha \beta }-g^{\mu \alpha }g^{\nu \beta }\right)\right)_{,\alpha \beta }}
This is necessary for consistency with the Einstein field equations .
Landau and Lifshitz also provide two equivalent but longer expressions for the Landau–Lifshitz pseudotensor:
This definition of energy–momentum is covariantly applicable not just under Lorentz transformations, but also under general coordinate transformations.
This pseudotensor was originally developed by Albert Einstein . [ 4 ] [ 5 ]
Paul Dirac showed [ 6 ] that the mixed Einstein pseudotensor t μ ν = 1 2 κ − g ( ( g α β − g ) , μ ( Γ α β ν − δ β ν Γ α σ σ ) − δ μ ν g α β ( Γ α β σ Γ σ ρ ρ − Γ α σ ρ Γ β ρ σ ) − g ) {\displaystyle {t_{\mu }}^{\nu }={\frac {1}{2\kappa {\sqrt {-g}}}}\left(\left(g^{\alpha \beta }{\sqrt {-g}}\right)_{,\mu }\left(\Gamma _{\alpha \beta }^{\nu }-\delta _{\beta }^{\nu }\Gamma _{\alpha \sigma }^{\sigma }\right)-\delta _{\mu }^{\nu }g^{\alpha \beta }\left(\Gamma _{\alpha \beta }^{\sigma }\Gamma _{\sigma \rho }^{\rho }-\Gamma _{\alpha \sigma }^{\rho }\Gamma _{\beta \rho }^{\sigma }\right){\sqrt {-g}}\right)} satisfies a conservation law ( ( T μ ν + t μ ν ) − g ) , ν = 0. {\displaystyle \left(\left({T_{\mu }}^{\nu }+{t_{\mu }}^{\nu }\right){\sqrt {-g}}\right)_{,\nu }=0.}
Clearly this pseudotensor for gravitational stress–energy is constructed exclusively from the metric tensor and its first derivatives. Consequently, it vanishes at any event when the coordinate system is chosen to make the first derivatives of the metric vanish because each term in the pseudotensor is quadratic in the first derivatives of the metric tensor field. However it is not symmetric, and is therefore not suitable as a basis for defining the angular momentum. | https://en.wikipedia.org/wiki/Stress–energy–momentum_pseudotensor |
Stress–strength analysis is the analysis of the strength of the materials and the interference of the stresses placed on the materials, where "materials" is not necessarily the raw goods or parts, but can be an entire system. Stress–strength analysis is a tool used in reliability engineering .
Environmental stresses have a distribution with a mean ( μ x ) {\displaystyle \left(\mu _{x}\right)} and a standard deviation ( s x ) {\displaystyle \left(s_{x}\right)} and component strengths have a distribution with a mean ( μ y ) {\displaystyle \left(\mu _{y}\right)} and a standard deviation ( s y ) {\displaystyle \left(s_{y}\right)} . The overlap of these distributions is the probability of failure ( Z ) {\displaystyle \left(Z\right)} . This overlap is also referred to as stress–strength interference.
If the distributions for both the stress and the strength both follow a Normal distribution, then the reliability (R) of a component can be determined by the following equation: [ 1 ] R = 1 − P ( Z ) {\displaystyle R=1-P(Z)} , where
Z = − μ y − μ x s x 2 + s y 2 {\displaystyle Z=-{\frac {\mu _{y}-\mu _{x}}{\sqrt {s_{x}^{2}+s_{y}^{2}}}}}
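A minimal sketch of this calculation, taking P ( Z ) {\displaystyle P(Z)} as the standard normal cumulative distribution evaluated at Z {\displaystyle Z} ; the numerical values are assumed for illustration:

from scipy.stats import norm

# Stress and strength both assumed normally distributed (values illustrative).
mu_x, s_x = 300.0, 30.0    # stress mean and standard deviation (e.g. MPa)
mu_y, s_y = 400.0, 40.0    # strength mean and standard deviation (e.g. MPa)

z = -(mu_y - mu_x) / (s_x ** 2 + s_y ** 2) ** 0.5
R = 1 - norm.cdf(z)        # reliability R = 1 - P(Z)
print(f"z = {z:.2f}, reliability R = {R:.4f}")   # here z = -2.00, R ~ 0.977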
P(Z) can be determined from a Z table or a statistical software package. | https://en.wikipedia.org/wiki/Stress–strength_analysis |
In classical mechanics , the stretch rule (sometimes referred to as Routh 's rule ) states that the moment of inertia of a rigid object is unchanged when the object is stretched parallel to an axis of rotation that is a principal axis , provided that the distribution of mass remains unchanged except in the direction parallel to the axis. [ 1 ] This operation leaves cylinders oriented parallel to the axis unchanged in radius.
This rule can be applied with the parallel axis theorem and the perpendicular axis theorem to find moments of inertia for a variety of shapes.
The (scalar) moment of inertia of a rigid body around the z-axis is given by:
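I z = ∫ ρ ( x , y , z ) r 2 d V {\displaystyle I_{z}=\int \rho (x,y,z)\,r^{2}\,dV} (the standard volume-integral definition),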
Where r {\displaystyle r} is the distance of a point from the z-axis. We can expand as follows, since we are dealing with stretching over the z -axis only:
Here, L {\displaystyle L} is the body's height. Stretching the object by a factor of a {\displaystyle a} along the z-axis is equivalent to dividing the mass density by a {\displaystyle a} (meaning ρ ′ ( x , y , z ) = ρ ( x , y , z / a ) / a {\displaystyle \rho '(x,y,z)=\rho (x,y,z/a)/a} ), as well as integrating over new limits 0 {\displaystyle 0} and a L {\displaystyle aL} (the new height of the object), thus leaving the total mass unchanged. This means the new moment of inertia will be: | https://en.wikipedia.org/wiki/Stretch_rule |
A stretch sensor is a sensor which can be used to measure deformation and stretching forces such as tension or bending . They are usually made from a material that is itself soft and stretchable.
Most stretch sensors fall into one of three categories. The first type consists of an electrical conductor for which the electrical resistance changes (usually increases) substantially when the sensor is deformed. [ 1 ]
The second type consists of a capacitor for which the capacitance changes under deformation. [ 2 ] [ 3 ] Known properties of the sensor can then be used to deduce the deformation from the resistance/capacitance. Both the rheostatic and capacitive types often take the form of a cord, tape, or mesh.
The third type of sensor uses high performance piezoelectric systems in soft, flexible/stretchable formats for measuring signals using the capability of piezoelectric materials to interconvert mechanical and electrical forms of energy. [ 4 ]
Wearable stretch sensors can be used for tasks such as measuring body posture or movement. [ 5 ] [ 6 ] In 2018, the New Zealand-based company StretchSense began making a motion capture glove ( data glove ) using stretch sensors. [ 7 ] Unlike gloves that use inertial or optical sensors, stretchable sensors do not suffer from drift or occlusion.
They can also be used in robotics , particularly in soft robots .
Stretch sensors are now widely used in medical fields for analysis and measuring the human dielectric properties w.r.t skin. [ 8 ] | https://en.wikipedia.org/wiki/Stretch_sensor |
Stretchable electronics , also known as elastic electronics or elastic circuits, is a group of technologies for building electronic circuits by depositing or embedding electronic devices and circuits onto stretchable substrates such as silicones or polyurethanes , to make a completed circuit that can experience large strains without failure. In the simplest case, stretchable electronics can be made by using the same components used for rigid printed circuit boards, with the rigid substrate cut (typically in a serpentine pattern) to enable in-plane stretchability. [ 1 ] However, many researchers have also sought intrinsically stretchable conductors, such as liquid metals . [ 2 ]
One of the major challenges in this domain is designing the substrate and the interconnections to be stretchable , rather than flexible (see Flexible electronics ) or rigid ( Printed Circuit Boards ). Typically, polymers are chosen as substrates or material to embed. [ 3 ] When bending the substrate, the outermost radius of the bend will stretch (see Strain in an Euler–Bernoulli beam ), subjecting the interconnects to high mechanical strain . Stretchable electronics often attempts biomimicry of human skin and flesh , in being stretchable, whilst retaining full functionality. The design space for products is opened up with stretchable electronics, including sensitive electronic skin for robotic devices [ 4 ] and in vivo implantable sponge-like electronics.
Skin is composed of collagen, keratin, and elastin fibers, which provide robust mechanical strength, low modulus, tear resistance, and softness. The skin can be considered as a bilayer of epidermis and dermis. The epidermal layer has a modulus of about 140-600 kPa and a thickness of 0.05-1.5 mm. Dermis has a modulus of 2-80 kPa and a thickness of 0.3–3 mm. [ 5 ] This bilayer skin exhibits an elastic linear response for strains less than 15% and a non linear response at larger strains. To achieve conformability, it is preferable for devices to match the mechanical properties of the epidermis layer when designing skin-based stretchy electronics.
Conventional high performance electronic devices are made of inorganic materials such as silicon, which is rigid and brittle in nature and exhibits poor biocompatibility due to mechanical mismatch between the skin and the device, making skin integrated electronics applications difficult. To solve this challenge, researchers employed the method of constructing flexible electronics in the form of ultrathin layers. The resistance to bending of a material object (Flexural rigidity) is related to the third power of the thickness, according to the Euler-Bernoulli equation for a beam. [ 6 ] It implies that objects with less thickness can bend and stretch more easily. As a result, even though the material has a relatively high Young's modulus, devices manufactured on ultrathin substrates exhibit a decrease in bending stiffness and allow bending to a small radius of curvature without fracturing. Thin devices have been developed as a result of significant advancements in the field of nanotechnology, fabrication, and manufacturing. The aforementioned approach was used to create devices composed of 100-200 nm thick silicon (Si) nano membranes deposited on thin flexible polymeric substrates. [ 6 ]
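A minimal illustration of the cubic dependence of bending stiffness on thickness, using the plate flexural rigidity D = E t 3 / ( 12 ( 1 − ν 2 ) ) {\displaystyle D=Et^{3}/(12(1-\nu ^{2}))} ; the silicon material constants are approximate and the two thicknesses are assumed examples:

# Compare the bending stiffness of a 200 nm silicon nanomembrane with a
# 200 um silicon chip. Material values are approximate.
E = 170e9      # Young's modulus of Si, Pa (approximate)
nu = 0.28      # Poisson's ratio (approximate)

def rigidity(t):
    return E * t ** 3 / (12 * (1 - nu ** 2))

t_membrane, t_chip = 200e-9, 200e-6
print(rigidity(t_chip) / rigidity(t_membrane))   # ratio = 1000**3 = 1e9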
Furthermore, structural design considerations can be used to tune the mechanical stability of the devices. Engineering the original surface structure allows us to soften the stiff electronics. Buckling, island connection, and the Kirigami concept have all been employed successfully to make the entire system stretchy. [ 7 ] [ 8 ]
Mechanical buckling can be used to create wavy structures on elastomeric thin substrates. This feature improves the device's stretchability. The buckling approach was used to create Si nanoribbons from single crystal Si on an elastomeric substrate. The study demonstrated the device could bear a maximum strain of 10% when compressed and stretched. [ 9 ]
In the case of island interconnect, the rigid material connects with flexible bridges made from different geometries, such as zig-zag, serpentine-shaped structures, etc., to reduce the effective stiffness, tune the stretchability of the system, and elastically deform under applied strains in specific directions. It has been demonstrated that serpentine-shaped structures have no significant effect on the electrical characteristics of epidermal electronics. It has also been shown that the entanglement of the interconnects, which oppose the movement of the device above the substrate, causes the spiral interconnects to stretch and deform significantly more than the serpentine structures. [ 7 ] CMOS inverters constructed on a polydimethylsiloxane (PDMS) substrate employing 3D island interconnect technologies demonstrated 140% strain at stretching. [ 9 ]
Kirigami is built around the concept of folding and cutting in 2D membranes. This contributes to an increase in the tensile strength of the substrate, as well as its out-of-plane deformation and stretchability. These 2D structures can subsequently be turned to 3D structures with varied topography, shape, and size controllability via the Buckling process, resulting in interesting properties and applications. [ 7 ] [ 9 ]
Several stretchable energy storage devices and supercapacitors are made using carbon-based materials such as single-walled carbon nanotubes (SWCNTs). A study by Li et al. showed a stretchable supercapacitor (composed of buckled SWCNTs macrofilm and elastomeric separators on an elastic PDMS substrate), that performed dynamic charging and discharging. [ 10 ] The key drawback of this stretchable energy storage technology is the low specific capacitance and energy density, although this can potentially be improved by the incorporation of redox materials, for example the SWNT/MnO2 electrode. [ 11 ] Another approach to creating a stretchable energy storage device is the use of origami folding principles. [ 12 ] The resulting origami battery achieved significant linear and areal deformability, large twistability and bendability.
Stretchable electronics could be integrated into smart garments to interact seamlessly with the human body and detect diseases or collect patient data in a non-invasive manner. For example, researchers from Seoul National University and MC10 (a flexible-electronics company) have developed a patch that is able to detect glucose levels in sweat and can deliver the medicine needed on demand (insulin or metformin). The patch consists of graphene riddled with gold particles and contains sensors that are able to detect temperature, pH level, glucose, and humidity. [ 13 ] Stretchable electronics also permit developers to create soft robots for minimally invasive surgery in hospitals. Especially in brain surgery, where every millimetre matters, such robots may offer a more precise range of action than a human.
Rigid electronics do not typically conform well to soft biological organisms and tissue. Because stretchable electronics are not limited in this way, some researchers have applied them as sensors for touch, or tactile sensing. One way of achieving this is to build an array of organic field-effect transistors (OFETs) forming a network that detects local changes in capacitance, which tells the user where contact occurred. [ 14 ] This could have potential use in robotics and virtual-reality applications. [ 6 ] [ 7 ] [ 5 ] [ 8 ] [ 9 ] | https://en.wikipedia.org/wiki/Stretchable_electronics
In applied mathematics , stretching fields provide the local deformation of an infinitesimal circular fluid element over a finite time interval ∆ t . The logarithm of the stretching (after first dividing by ∆ t ) gives the finite-time Lyapunov exponent λ for separation of nearby fluid elements at each point in a flow. For periodic two-dimensional flows, stretching fields have been shown to be closely related to the mixing of a passive scalar concentration field. Until recently, however, the extension of these ideas to systems that are non-periodic or weakly turbulent has been possible only in numerical simulations.
| https://en.wikipedia.org/wiki/Stretching_field
The Stretford process was developed during the late 1950s to remove hydrogen sulfide (H 2 S) from town gas . It was the first liquid-phase oxidation process for converting H 2 S into sulfur to gain widespread commercial acceptance. [ 1 ] Developed by Tom Nicklin [ a ] of the North Western Gas Board (NWGB) and the Clayton Aniline Company , in Manchester , England, the name of the process was derived from the location of the NWGB's laboratories, in Stretford .
The process uses reduction-oxidation ( redox ) chemistry to oxidise the H 2 S into elemental sulfur, in an alkaline solution containing vanadium as an oxygen carrier. [ 2 ]
The process earned the NWGB a Queen's Award to Industry in 1968. Although it was used in the gas industry for only a relatively short time, the process was licensed by the NWGB and used successfully in a variety of industries worldwide. [ 3 ] [ b ] At the height of its popularity during the 1970s, there were more than a dozen companies offering the Stretford technology. By 1987, about 170 Stretford plants had been built worldwide, and more than 100 were still operating in 1992, capable of removing 400,000 tons of sulfur per year. [ 4 ] The first US plant was commissioned in 1971 at Long Beach, California , to process the gas from offshore oil wells. [ citation needed ] | https://en.wikipedia.org/wiki/Stretford_process |
Striations are marks produced on the fracture surface that show the incremental growth of a fatigue crack. A striation marks the position of the crack tip at the time it was made. The term striation generally refers to ductile striations which are rounded bands on the fracture surface separated by depressions or fissures and can have the same appearance on both sides of the mating surfaces of the fatigue crack. Although some research has suggested that many loading cycles are required to form a single striation, it is now generally thought that each striation is the result of a single loading cycle. [ 1 ]
The presence of striations is used in failure analysis as an indication that a fatigue crack has been growing. Striations are generally not seen when a crack is small even though it is growing by fatigue, but will begin to appear as the crack becomes larger. Not all periodic marks on the fracture surface are striations. The size of a striation for a particular material is typically related to the magnitude of the loading characterised by stress intensity factor range, the mean stress and the environment. The width of a striation is indicative of the overall crack growth rate but can be locally faster or slower on the fracture surface.
The study of the fracture surface is known as fractography . Images of the crack can be used to reveal features and understand the mechanisms of crack growth. While striations are fairly straight, they tend to curve at the ends allowing the direction of crack growth to be determined from an image. Striations generally form at different levels in metals and are separated by a tear band between them. Tear bands are approximately parallel to the direction of crack growth and produce what is known as a river pattern , so called, because it looks like the diverging pattern seen with river flows. The source of the river pattern converges to a single point that is typically the origin of the fatigue failure. [ 2 ]
Striations can appear on both sides of the mating fracture surface. There is some dispute as to whether striations produced on both sides of the fracture surface match peak-to-peak or peak-to-valley. The shape of striations may also be different on each side of the fracture surface. [ 3 ] Striations do not occur uniformly over all of the fracture surface and many areas of a fatigue crack may be devoid of striations. Striations are most often observed in metals but also occur in plastics such as poly(methyl methacrylate) . [ 4 ]
Small striations can be seen with the aid of a scanning electron microscope . [ 5 ] Once striations are larger than about 500 nm (the resolving wavelength of visible light), they can be seen with an optical microscope . The first image of striations was taken by Zapffe and Worden in 1951 using an optical microscope. [ 1 ]
The width of a striation indicates the local rate of crack growth and is typical of the overall rate of growth over the fracture surface. The rate of growth can be predicted with a crack growth equation such as the Paris-Erdogan equation . Defects such as inclusions and grain boundaries may locally slow down the rate of growth.
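Since striation width is indicative of the growth per cycle, it can be compared against a growth-rate prediction. The sketch below evaluates the Paris–Erdogan relation da/dN = C·(ΔK)^m; the constants C and m are purely illustrative placeholders, as real values must be measured for the specific material and environment.

```python
# Paris-Erdogan crack growth law: da/dN = C * (delta_K)**m
# C and m below are illustrative placeholders, not material data.
C = 1.0e-11   # growth coefficient, m/cycle when delta_K is in MPa*sqrt(m)
m = 3.0       # Paris exponent (dimensionless)

def growth_rate_per_cycle(delta_K):
    """Predicted crack extension per cycle (metres) for a stress intensity range delta_K (MPa*sqrt(m))."""
    return C * delta_K**m

for delta_K in (5.0, 10.0, 20.0):
    da_dN = growth_rate_per_cycle(delta_K)
    # Under the single-cycle interpretation, da/dN approximates the striation spacing.
    print(f"delta_K = {delta_K:5.1f} MPa*sqrt(m)  ->  da/dN ~ {da_dN:.2e} m/cycle")
```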
Variable amplitude loads produce striations of different widths and the study of these striation patterns has been used to understand fatigue. [ 6 ] [ 7 ] Although various cycle counting methods can be used to extract the equivalent constant amplitude cycles from a variable amplitude sequence, the striation pattern differs from the cycles extracted using the rainflow counting method .
The height of a striation has been related to the stress ratio R {\displaystyle R} of the applied loading cycle, where R = K min / K max {\displaystyle R=K_{\text{min}}/K_{\text{max}}} and is thus a function of the minimum K min {\displaystyle K_{\text{min}}} and maximum K max {\displaystyle K_{\text{max}}} stress intensity of the applied loading cycle. [ 8 ]
The striation profile depends on the degree of loading and unloading in each cycle. The unloading part of the cycle causes plastic deformation on the surface of the striation, while crack extension occurs only during the rising part of the load cycle. [ 9 ]
Other periodic marks on the fracture surface can be mistaken for striations.
Variable amplitude loading causes cracks to change the plane of growth and this effect can be used to create marker bands on the fracture surface. When a number of constant amplitude cycles are applied they may produce a plateau of growth on the fracture surface. Marker bands (also known as progression marks or beach marks ) may be produced and readily identified on the fracture surface even though the magnitude of the loads may be too small to produce individual striations. [ 10 ]
In addition, marker bands may also be produced by large loads (also known as overloads) producing a region of fast fracture on the crack surface. Fast fracture can produce a region of rapid extension before blunting of the crack tip stops the growth and further growth occurs during fatigue. Fast fracture occurs through a process of microvoid coalescence where failures initiate around inter-metallic particles. The F111 aircraft was subjected to periodic proof testing to ensure any cracks present were smaller than a certain critical size. These loads left marks on the fracture surface that could be identified, allowing the rate of intermediate growth occurring in service to be measured. [ 11 ]
Marks can also result from a change in the environment: oil or corrosive media may deposit on the surface, or excessive heat exposure may discolour the fracture surface up to the position of the crack tip at that time. [ 10 ]
Marker bands may be used to measure the instantaneous rate of growth produced by the applied loading cycles. By applying a repeated sequence separated by loads that produce a distinctive pattern, the growth from each segment of loading can be measured under a microscope in a technique called quantitative fractography . In this way, the rate of growth for constant-amplitude or variable-amplitude loading segments can be measured directly from the fracture surface. [ 12 ]
Tyre tracks are the marks on the fracture surface produced by something making an impression onto the surface from the repeated opening and closing of the crack faces. This can be produced by either a particle that becomes trapped between the crack faces or the faces themselves shifting and directly contacting the opposite surface. [ 13 ]
Coarse striations are a general rumpling of the fracture surface; they do not correspond to a single loading cycle and are therefore not considered true striations. They are produced instead of regular striations in aluminium alloys when there is insufficient atmospheric moisture to form hydrogen at the crack tip surface, thereby preventing activation of the slip planes. The wrinkles in the surface cross over one another and so do not represent the position of the crack tip.
Striations are often produced in high-strength aluminium alloys. In these alloys, the presence of water vapour is necessary to produce ductile striations, although too much water vapour will produce brittle striations, also known as cleavage striations . Brittle striations are flatter and larger than ductile striations produced with the same load. The atmosphere normally contains sufficient water vapour to generate ductile striations, whereas cracks growing internally are isolated from the atmosphere and grow in a vacuum . [ 14 ] When water vapour deposits onto the freshly exposed aluminium fracture surface, it dissociates into hydroxides and atomic hydrogen . Hydrogen interacts with the crack tip, affecting the appearance and size of the striations, and the growth rate typically increases by an order of magnitude in the presence of water vapour. [ 15 ] The mechanism is thought to be hydrogen embrittlement as a result of hydrogen being absorbed into the plastic zone at the crack tip. [ 16 ]
When an internal crack breaks through to the surface, the rate of crack growth and the fracture surface appearance will change due to the presence of water vapour. Coarse striations occur when a fatigue crack grows in a vacuum such as when growing from an internal flaw. [ 15 ]
In aluminium (a face-centred cubic material), cracks grow close to low index planes such as the {100} and the {110} planes (see Miller Index ). [ 3 ] Both of these planes bisect a pair of slip planes . Crack growth involving a single slip plane is termed Stage I growth and crack growth involving two slip planes is termed Stage II growth. [ 17 ] Striations are typically only observed in Stage II growth.
Brittle striations are typically formed on {100} planes. [ 17 ]
There have been many models developed to explain the process of how a striation is formed and their resultant shape. Some of the significant models are: | https://en.wikipedia.org/wiki/Striation_(fatigue) |
The Stribeck curve is a fundamental concept in the field of tribology . It shows that friction in fluid-lubricated contacts is a non-linear function of the contact load, the lubricant viscosity and the lubricant entrainment speed. The discovery and underlying research is usually attributed to Richard Stribeck [ 1 ] [ 2 ] [ 3 ] and Mayo D. Hersey , [ 4 ] [ 5 ] who studied friction in journal bearings for railway wagon applications during the first half of the 20th century; however, other researchers have arrived at similar conclusions before. The mechanisms along the Stribeck curve have been in parts also understood today on the atomistic level. [ 6 ]
For a contact of two fluid -lubricated surfaces, the Stribeck curve shows the relationship between the so-called Hersey number , a dimensionless lubrication parameter, and the friction coefficient. The Hersey number is defined as:
Hersey number = η ⋅ N P , {\displaystyle {\begin{aligned}{\text{Hersey number}}={\frac {\eta \cdot N}{P}},\end{aligned}}}
where η is the dynamic viscosity of the fluid, N is the entrainment speed of the fluid and P is the normal load per length of the tribological contact.
Hersey's original formula uses the rotational speed (revolutions per unit time) for N and the load per projected area (i.e. the product of a journal bearing's length and diameter) for P .
Alternatively, the Hersey number is the dimensionless number obtained from the velocity (m/s) times the dynamic viscosity (Pa·s = N·s/m²), divided by the load per unit length of bearing (N/m).
Thus, for a given viscosity and load, the Stribeck curve shows how friction changes with increasing velocity. Based on the typical progression of the Stribeck curve (see right), three lubrication regimes can be identified.
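As a small worked example, the sketch below computes the Hersey number from the definition given above; the viscosity, speed and load values are illustrative assumptions, not data for any particular bearing.

```python
# Hersey number = (dynamic viscosity * entrainment speed) / (load per unit length)
def hersey_number(eta_pa_s, speed_m_s, load_per_length_n_m):
    """Dimensionless Hersey number for a fluid-lubricated contact."""
    return eta_pa_s * speed_m_s / load_per_length_n_m

# Illustrative values only (a light mineral oil in a lightly loaded journal bearing).
eta = 0.05            # dynamic viscosity, Pa*s
speed = 2.0           # entrainment speed, m/s
load_per_len = 1.0e5  # normal load per unit length of contact, N/m

H = hersey_number(eta, speed, load_per_len)
print(f"Hersey number = {H:.2e}")
# For a given bearing, sweeping the speed (or viscosity) and plotting the measured
# friction coefficient against H traces out the Stribeck curve and its three regimes
# (boundary, mixed and hydrodynamic lubrication).
```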
Richard Stribeck 's research was performed in Berlin at the Royal Prussian Technical Testing Institute (MPA, now BAM), and his results were presented on 5 December 1901 during a public session of the railway society and published on 6 September 1902. Similar work was previously performed around 1885 by Adolf Martens at the same institute, [ 7 ] and also in the mid-1870s by Robert Henry Thurston [ 8 ] [ 9 ] at the Stevens Institute of Technology in the U.S. The reason why the form of the friction curve for liquid lubricated surfaces was later attributed to Stribeck – although both Thurston and Martens achieved their results considerably earlier – may be because Stribeck published his findings in the most important technical journal in Germany at that time, Zeitschrift des Vereins Deutscher Ingenieure (VDI, Journal of German Mechanical Engineers). Martens published his results in the official journal of the Royal Prussian Technical Testing Institute, which has now become BAM. The VDI journal provided wide access to Stribeck's data and later colleagues rationalized the results into the three classical friction regimes. Thurston did not have the experimental means to record a continuous graph of the coefficient of friction but only measured it at discrete points. This may be the reason why the minimum in the coefficient of friction for a liquid-lubricated journal bearing was not discovered by him, but was demonstrated by the graphs of Martens and Stribeck.
The graphs plotted by Martens show the coefficient of friction either as a function of pressure, speed or temperature (i.e. viscosity), but not of their combination into the Hersey number. Schmidt [ 10 ] attempts to do this using Martens's data. The curves' characteristic minima seem to correspond to very low Hersey numbers in the range 0.00005–0.00015.
In general, there are two approaches for the calculation of Stribeck curve in all lubrication regimes. [ 11 ] In the first approach, the governing flow and surface deformation equations (the system of the Elastohydrodynamic Lubrication equations [ 12 ] ) are solved numerically. Although the numerical solutions can be relatively accurate, this approach is computationally expensive and requires substantial computational resources. The second approach relies on the load-sharing concept [ 13 ] that can be used to solve the problem approximately but at a significantly less computational cost.
In the second approach, the general problem is split up into two sub-problems: 1) lubrication problem assuming smooth surfaces and 2) a “dry” rough contact problem. The two sub-problems are coupled through the load carried by the lubricant and by the “dry” contact. In its simplest approximation, the lubrication sub-problem can be represented via a central film thickness fit [ 14 ] to calculate the film thickness and the Greenwood-Williamson model [ 15 ] for the “dry” contact sub-problem. This approach can give a reasonable qualitative prediction of the friction evolution; however, it is likely to overestimate friction due to the simplification assumptions used in central film thickness calculations and Greenwood-Williamson model.
An online calculator is available on www.tribonet.org that allows the Stribeck curve to be calculated for line [ 16 ] and point [ 17 ] contacts. These tools are based on the load-sharing concept.
Molecular simulation based on classical force fields can also be used to predict the Stribeck curve, [ 18 ] and thereby the underlying molecular mechanisms can be elucidated. | https://en.wikipedia.org/wiki/Stribeck_curve
In mathematical analysis , Strichartz estimates are a family of inequalities for linear dispersive partial differential equations . These inequalities establish size and decay of solutions in mixed norm Lebesgue spaces . They were first noted by Robert Strichartz and arose out of connections to the Fourier restriction problem. [ 1 ]
Consider the linear Schrödinger equation in R d {\displaystyle \mathbb {R} ^{d}} with ℏ = m = 1. Then the solution for initial data u 0 {\displaystyle u_{0}} is given by e i t Δ / 2 u 0 {\displaystyle e^{it\Delta /2}u_{0}} . Let q and r be real numbers satisfying 2 ≤ q , r ≤ ∞ {\displaystyle 2\leq q,r\leq \infty } ; 2 q + d r = d 2 {\displaystyle {\frac {2}{q}}+{\frac {d}{r}}={\frac {d}{2}}} ; and ( q , r , d ) ≠ ( 2 , ∞ , 2 ) {\displaystyle (q,r,d)\neq (2,\infty ,2)} .
In this case the homogeneous Strichartz estimates take the form: [ 2 ]
Further suppose that q ~ , r ~ {\displaystyle {\tilde {q}},{\tilde {r}}} satisfy the same restrictions as q , r {\displaystyle q,r} and q ~ ′ , r ~ ′ {\displaystyle {\tilde {q}}',{\tilde {r}}'} are their dual exponents, then the dual homogeneous Strichartz estimates take the form: [ 2 ]
The inhomogeneous Strichartz estimates are: [ 2 ]
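For reference, these three estimates are commonly stated in the literature in the following form (the implicit constants depend only on d {\displaystyle d} and the exponents). Homogeneous estimate: ‖ e i t Δ / 2 u 0 ‖ L t q L x r ≲ ‖ u 0 ‖ L x 2 {\displaystyle \|e^{it\Delta /2}u_{0}\|_{L_{t}^{q}L_{x}^{r}(\mathbb {R} \times \mathbb {R} ^{d})}\lesssim \|u_{0}\|_{L_{x}^{2}(\mathbb {R} ^{d})}} . Dual homogeneous estimate: ‖ ∫ e − i s Δ / 2 F ( s ) d s ‖ L x 2 ≲ ‖ F ‖ L t q ~ ′ L x r ~ ′ {\displaystyle {\Big \|}\int _{\mathbb {R} }e^{-is\Delta /2}F(s)\,ds{\Big \|}_{L_{x}^{2}(\mathbb {R} ^{d})}\lesssim \|F\|_{L_{t}^{{\tilde {q}}'}L_{x}^{{\tilde {r}}'}(\mathbb {R} \times \mathbb {R} ^{d})}} . Inhomogeneous (retarded) estimate: ‖ ∫ s < t e i ( t − s ) Δ / 2 F ( s ) d s ‖ L t q L x r ≲ ‖ F ‖ L t q ~ ′ L x r ~ ′ {\displaystyle {\Big \|}\int _{s<t}e^{i(t-s)\Delta /2}F(s)\,ds{\Big \|}_{L_{t}^{q}L_{x}^{r}(\mathbb {R} \times \mathbb {R} ^{d})}\lesssim \|F\|_{L_{t}^{{\tilde {q}}'}L_{x}^{{\tilde {r}}'}(\mathbb {R} \times \mathbb {R} ^{d})}} .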
| https://en.wikipedia.org/wiki/Strichartz_estimate
In mathematical writing, the term strict refers to the property of excluding equality and equivalence [ 1 ] and often occurs in the context of inequality and monotonic functions . [ 2 ] It is often attached to a technical term to indicate that the exclusive meaning of the term is to be understood. The opposite is non-strict , which is often understood to be the case but can be put explicitly for clarity. In some contexts, the word "proper" can also be used as a mathematical synonym for "strict".
This term is commonly used in the context of inequalities — the phrase "strictly less than" means "less than and not equal to" (likewise "strictly greater than" means "greater than and not equal to"). More generally, a strict partial order , strict total order , and strict weak order exclude equality and equivalence.
When comparing numbers to zero, the phrases "strictly positive" and "strictly negative" mean "positive and not equal to zero" and "negative and not equal to zero", respectively. In the context of functions, the adverb "strictly" is used to modify the terms "monotonic", "increasing", and "decreasing".
On the other hand, sometimes one wants to specify the inclusive meanings of terms. In the context of comparisons, one can use the phrases "non-negative", "non-positive", "non-increasing", and "non-decreasing" to make it clear that the inclusive sense of the terms is being used.
The use of such terms and phrases helps avoid possible ambiguity and confusion. For instance, when reading the phrase " x is positive", it is not immediately clear whether x = 0 is possible, since some authors might use the term positive loosely to mean that x is not less than zero. Such an ambiguity can be mitigated by writing " x is strictly positive" for x > 0, and " x is non-negative" for x ≥ 0. (A precise term like non-negative is never used with the word negative in the wider sense that includes zero.)
The word "proper" is often used in the same way as "strict". For example, a " proper subset " of a set S is a subset that is not equal to S itself, and a " proper class " is a class which is not also a set.
This article incorporates material from strict on PlanetMath , which is licensed under the Creative Commons Attribution/Share-Alike License . | https://en.wikipedia.org/wiki/Strict |
In mathematics , strict differentiability is a modification of the usual notion of differentiability of functions that is particularly suited to p-adic analysis . In short, the definition is made more restrictive by allowing both points used in the difference quotient to "move".
The simplest setting in which strict differentiability can be considered, is that of a real-valued function defined on an interval I of the real line.
The function f : I → R is said to be strictly differentiable at a point a ∈ I if the limit of the difference quotients ( f ( x ) − f ( y ) ) / ( x − y ) {\displaystyle (f(x)-f(y))/(x-y)}
exists, where ( x , y ) → ( a , a ) {\displaystyle (x,y)\to (a,a)} is to be considered as limit in R 2 {\displaystyle \mathbb {R} ^{2}} , and of course requiring x ≠ y {\displaystyle x\neq y} .
A strictly differentiable function is obviously differentiable, but the converse is false, as can be seen from a counter-example such as the following.
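A standard choice, given here for concreteness, is the function f ( x ) = x 2 sin ( 1 / x ) {\displaystyle f(x)=x^{2}\sin(1/x)} for x ≠ 0 {\displaystyle x\neq 0} with f ( 0 ) = 0 {\displaystyle f(0)=0} : it is differentiable at 0 {\displaystyle 0} with f ′ ( 0 ) = 0 {\displaystyle f'(0)=0} , but along the pairs x k = 1 / ( 2 k π ) {\displaystyle x_{k}=1/(2k\pi )} , y k = 1 / ( 2 k π + π / 2 ) {\displaystyle y_{k}=1/(2k\pi +\pi /2)} the difference quotients tend to − 2 / π ≠ 0 {\displaystyle -2/\pi \neq 0} , so the two-variable limit does not exist and f {\displaystyle f} is not strictly differentiable at 0 {\displaystyle 0} .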
However, strict differentiability on an interval I is equivalent to being of differentiability class C 1 ( I ) {\displaystyle C^{1}(I)} (i.e. continuously differentiable).
In analogy with the Fréchet derivative , the previous definition can be generalized to the case where R is replaced by a Banach space E (such as R n {\displaystyle \mathbb {R} ^{n}} ), and requiring existence of a continuous linear map L such that
where o ( ⋅ ) {\displaystyle o(\cdot )} is defined in a natural way on E × E .
In the p -adic setting, the usual definition of the derivative fails to have certain desirable properties. For instance, it is possible for a function that is not locally constant to have zero derivative everywhere. An example of this is furnished by the function F : Z p → Z p , where Z p is the ring of p-adic integers , defined by
One checks that the derivative of F , according to usual definition of the derivative, exists and is zero everywhere, including at x = 0. That is, for any x in Z p ,
Nevertheless F fails to be locally constant at the origin.
The problem with this function is that the difference quotients
do not approach zero for x and y close to zero. For example, taking x = p n − p 2 n and y = p n , we have
which does not approach zero. The definition of strict differentiability avoids this problem by imposing a condition directly on the difference quotients.
Let K be a complete extension of Q p (for example K = C p ), and let X be a subset of K with no isolated points. Then a function F : X → K is said to be strictly differentiable at x = a if the limit of the difference quotients ( F ( x ) − F ( y ) ) / ( x − y ) {\displaystyle (F(x)-F(y))/(x-y)} as ( x , y ) → ( a , a ) {\displaystyle (x,y)\to (a,a)} with x ≠ y {\displaystyle x\neq y}
exists. | https://en.wikipedia.org/wiki/Strict_differentiability |
Stride was a cloud -based team business communication and collaboration tool , launched by Atlassian on 7 September 2017 to replace the cloud-based version of HipChat . [ 1 ] Stride software was available to download onto computers running Windows , Mac or Linux , as well as Android , iOS smartphones , and tablets . [ 2 ] Stride was bought by Atlassian's competitor Slack Technologies and was discontinued on February 15, 2019. [ 3 ] [ 4 ]
The features of Stride include chat rooms, one-on-one messaging, file sharing, 5 GB of file storage, group voice and video calling, built-in collaboration tools , and a searchable message history of up to 25,000 messages. Premium features include unlimited file storage, users, group chat rooms, file sharing and storage, apps, and history retention. The premium version, priced at $3/user/month, also includes advanced meeting functionality like group screen sharing , remote desktop control, and dial-in/dial-out capabilities. Stride offered integrations with Atlassian's other products as well as other third-party applications listed in the Atlassian Marketplace, such as GitHub , Giphy , Stand-Bot and Google Calendar . [ 5 ]
Stride offered additional features beyond messaging to improve efficiency and productivity. It aimed to reduce collaboration noise by introducing a "focus" mode, and eliminates the divisions between text chat, voice meetings, and videoconferencing , by simplifying transitioning between these modes in the same channel. [ 1 ] [ 6 ]
On July 26, 2018, Atlassian announced that HipChat and Stride would be discontinued February 15, 2019, and that it had reached a deal to sell their intellectual property to Slack . [ 3 ] Slack will pay an undisclosed amount over three years to assume the user bases of the services, and Atlassian will take a minority investment in Slack. The companies also announced a commitment to work on integration of Slack with Atlassian services. [ 7 ] [ 8 ] | https://en.wikipedia.org/wiki/Stride_(software) |
In geology , strike and dip is a measurement convention used to describe the plane orientation or attitude of a planar geologic feature . A feature's strike is the azimuth of an imagined horizontal line across the plane, and its dip is the angle of inclination (or depression angle ) measured downward from horizontal. [ 1 ] They are used together to measure and document a structure's characteristics for study or for use on a geological map . [ 2 ] A feature's orientation can also be represented by dip and dip direction , using the azimuth of the dip rather than the strike value. Linear features are similarly measured with trend and plunge , where "trend" is analogous to dip direction and "plunge" is the dip angle. [ 3 ]
Strike and dip are measured using a compass and a clinometer . A compass is used to measure the feature's strike by holding the compass horizontally against the feature. A clinometer measures the feature's dip by recording the inclination perpendicular to the strike. [ 1 ] These can be done separately, or together using a tool such as a Brunton transit or a Silva compass .
Any planar feature can be described by strike and dip, including sedimentary bedding , fractures , faults , joints , cuestas , igneous dikes and sills , metamorphic foliation and fabric , etc. Observations about a structure's orientation can lead to inferences about certain parts of an area's history, such as movement, deformation, or tectonic activity . [ 3 ]
When measuring or describing the attitude of an inclined feature, two quantities are needed. The angle the slope descends, or dip, and the direction of descent, which can be represented by strike or dip direction. [ 4 ]
Dip is the inclination of a given feature, measured as the steepest angle of descent of a tilted bed or feature relative to a horizontal plane. [ 5 ] [ 6 ] True dip is always perpendicular to the strike. It is written as a number (between 0° and 90°) indicating the angle in degrees below horizontal. It can be accompanied by the approximate direction of dip (N, SE, etc.) to avoid ambiguity. The direction can sometimes be omitted, as long as the convention used (such as the right-hand rule) is known. [ 3 ]
A feature that is completely flat will have the same dip value over the entire surface. The dip of a curved feature, such as an anticline or syncline , will change at different points along the feature and be flat on any fold axis . [ 1 ]
Strike is a representation of the orientation of a tilted feature. The strike line of a bed , fault, or other planar feature, is a line representing the intersection of that feature with a horizontal plane. The strike of the feature is the azimuth (compass direction) of the strike line. [ 5 ] This can be represented by either a quadrant compass bearing (such as N25°E), or as a single three-digit number in terms of the angle from true north (for example, N25°E would simply become 025 or 025°). [ 3 ] [ 1 ]
A feature's orientation can also be represented by its dip direction. Rather than the azimuth of a horizontal line on the plane, the azimuth of the steepest line on the plane is used. [ 3 ] The direction of dip can be visualized as the direction water would flow if poured onto a plane. [ 7 ]
While true dip is measured perpendicular to the strike, apparent dip refers to an observed dip which is not perpendicular to the strike line. This can be seen in outcroppings or cross-sections which do not run parallel to the dip direction. [ 7 ] Apparent dip is always shallower than the true dip. [ 1 ] If the strike is known, the apparent dip or true dip can be calculated using trigonometry:
α = arctan ( sin β × tan δ ) {\displaystyle \alpha =\arctan(\sin \beta \times \tan \delta )} δ = arctan ( tan α ÷ sin β ) {\displaystyle \delta =\arctan(\tan \alpha \div \sin \beta )}
where δ is the true dip, α is the apparent dip, and β is the angle between the strike direction and the apparent dip direction, all in degrees. [ 8 ]
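The conversion between true and apparent dip given above is easy to script. The sketch below implements both directions of the formula; the function names and example angles are arbitrary illustrations.

```python
import math

def apparent_dip(true_dip_deg, angle_from_strike_deg):
    """Apparent dip (degrees) seen in a section at angle beta from the strike line."""
    delta = math.radians(true_dip_deg)
    beta = math.radians(angle_from_strike_deg)
    return math.degrees(math.atan(math.sin(beta) * math.tan(delta)))

def true_dip(apparent_dip_deg, angle_from_strike_deg):
    """True dip (degrees) recovered from an apparent dip measured at angle beta from strike."""
    alpha = math.radians(apparent_dip_deg)
    beta = math.radians(angle_from_strike_deg)
    return math.degrees(math.atan(math.tan(alpha) / math.sin(beta)))

# Example: a bed with 30 degree true dip seen in a section 40 degrees from strike.
a = apparent_dip(30.0, 40.0)
print(f"apparent dip = {a:.1f} deg")                     # shallower than the true dip
print(f"recovered true dip = {true_dip(a, 40.0):.1f} deg")
```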
The measurement of a linear feature's orientation is similar to strike and dip, though the terminology differs because "strike" and "dip" are reserved for planes. Linear features use trend and plunge instead. Plunge, or angle of plunge, is the inclination of the feature measured downward relative to horizontal. Trend is the feature's azimuth, measured in the direction of plunge. A horizontal line would have a plunge of 0°, and a vertical line would have a plunge of 90°. [ 3 ] [ 7 ] A linear feature which lies within a plane can also be measured by its rake (or pitch). Unlike plunge, which is measured downward from the horizontal in a vertical plane, the rake is the angle measured within the inclined plane from the strike line. [ 3 ]
On geological maps, strike and dip can be represented by a T symbol with a number next to it. The longer line represents strike, and is in the same orientation as the strike angle. Dip is represented by the shorter line, which is perpendicular to the strike line in the downhill direction. The number gives the dip angle, in degrees, below horizontal, and often does not have the degree symbol. Vertical and horizontal features are not marked with numbers, and instead use their own symbols. Beds dipping vertically have the dip line on both sides of the strike, and horizontal bedding is denoted by a cross within a circle. [ 2 ] [ 9 ]
Interpretation of strike and dip is a part of creating a cross-section of an area. Strike and dip information recorded on a map can be used to reconstruct various structures, determine the orientation of subsurface features, or detect the presence of anticline or syncline folds. [ 1 ] [ 2 ]
There are a few conventions geologists use when measuring a feature's azimuth. When using the strike, two directions can be measured at 180° apart, at either clockwise or counterclockwise of north. One common convention is to use the "right-hand rule" (RHR) where the plane dips down towards the right when facing the strike direction, or that the dip direction should be 90° clockwise of the strike direction. However, in the UK, the right-hand rule has sometimes been specified so that the dip direction is instead counterclockwise from the strike. Some geologists prefer to use whichever strike direction is less than 180°. Others prefer to use the "dip-direction, dip" (DDD) convention instead of using the strike direction. Strike and dip are generally written as 'strike/dip' or 'dip direction,dip', with the degree symbol typically omitted. The general alphabetical dip direction (N, SE, etc) can be added to reduce ambiguity. For a feature with a dip of 45° and a dip direction of 75°, the strike and dip can be written as 345/45 NE, 165/45 NE, or 075,45. The compass quadrant direction for the strike can also be used in place of the azimuth, written as S15E or N15W. [ 1 ] [ 3 ]
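Conversions between these conventions are simple modular arithmetic. The sketch below converts a right-hand-rule strike (dip direction 90° clockwise of strike) to dip direction and back, and formats a reading in both notations; the function names are ad hoc and the sample reading is taken from the example above.

```python
def strike_to_dip_direction(strike_deg):
    """Dip direction under the right-hand rule: 90 degrees clockwise of the strike azimuth."""
    return (strike_deg + 90.0) % 360.0

def dip_direction_to_strike(dip_direction_deg):
    """Right-hand-rule strike corresponding to a given dip direction."""
    return (dip_direction_deg - 90.0) % 360.0

strike, dip = 345.0, 45.0   # example reading from the text above
dip_dir = strike_to_dip_direction(strike)

print(f"strike/dip         : {strike:03.0f}/{dip:.0f}")    # 345/45 (RHR)
print(f"dip direction, dip : {dip_dir:03.0f},{dip:.0f}")   # 075,45
print(f"strike recovered   : {dip_direction_to_strike(dip_dir):03.0f}")
```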
Strike and dip are measured in the field using a compass and with a clinometer . A compass is used to measure the azimuth of the strike, and the clinometer measures inclination of the dip. [ 2 ] Dr. E. Clar first described the modern compass-clinometer in 1954, and some continue to be referred to as Clar compasses. [ 10 ] Compasses in use today include the Brunton compass and the Silva compass .
Smartphone apps which can make strike and dip measurements are also available, including apps such as GeoTools . These apps can make use of the phone's internal accelerometer to provide orientation measurements. Combined with the GPS functionality of such devices, this allows readings to be recorded and later downloaded onto a map. [ 11 ]
When studying subsurface features, a dipmeter can be used. A dipmeter is a tool that is lowered into a borehole , and has arms radially attached which can detect the microresistivity of the rock. By recording the times at which the rock's properties change across each of the sensors, the strike and dip of subsurface features can be worked out. [ 12 ] | https://en.wikipedia.org/wiki/Strike_and_dip |
Autologous CD34+ enriched cell fraction that contains CD34+ cells transduced with retroviral vector that encodes for the human ADA cDNA sequence , sold under the brand name Strimvelis , is a medication used to treat severe combined immunodeficiency due to adenosine deaminase deficiency (ADA-SCID). [ 1 ]
ADA-SCID is a rare inherited condition in which there is a change (mutation) in the gene needed to make an enzyme called adenosine deaminase (ADA). [ 1 ] As a result, people lack the ADA enzyme. [ 1 ] Because ADA is essential for maintaining healthy lymphocytes (white blood cells that fight off infections), the immune system of people with ADA-SCID does not work properly and without effective treatment they rarely survive more than two years. [ 1 ]
Strimvelis is the first ex vivo autologous gene therapy approved by the European Medicines Agency (EMA). [ 2 ]
Strimvelis is indicated for the treatment of people with severe combined immunodeficiency due to adenosine deaminase deficiency (ADA-SCID), for whom no suitable human leukocyte antigen (HLA)-matched related stem cell donor is available. [ 1 ]
The treatment is personalized for each person; hematopoietic stem cell (HSCs) are extracted from the person and purified so that only CD34 -expressing cells remain. Those cells are cultured with cytokines and growth factors and then transduced with a gammaretrovirus containing the human adenosine deaminase gene and then reinfused into the person. These cells take root in the person's bone marrow , replicating and creating cells that mature and create normally functioning adenosine deaminase protein, resolving the problem. [ 3 ] [ 4 ] [ 5 ] As of April 2016, the transduced cells had a shelf life of about six hours. [ 6 ]
Prior to extraction, the person is treated with granulocyte colony-stimulating factor in order to increase the number of stem cells and improve the harvest; after that but prior to reinfusion, the person is treated with busulfan or melphalan to kill as many of the person's existing HSCs to increase the chances of the new cells' survival. [ 4 ] [ 5 ]
The most common side effect is pyrexia (fever). [ 1 ]
Serious side effects may include effects linked to autoimmunity (when the immune system attacks the body's own cells) such as hemolytic anemia (low red blood cell counts due to their too rapid breakdown), aplastic anemia (low blood cell counts due to damaged bone marrow), hepatitis (liver inflammation), thrombocytopenia (low blood platelet count) and Guillain-Barré syndrome (damage to nerves that can result in pain, numbness, muscle weakness and difficulty walking). [ 1 ]
Leukemia is a risk of treatment with Strimvelis. [ 7 ]
The treatment was developed at San Raffaele Telethon Institute for Gene Therapy and developed by GlaxoSmithKline (GSK) through a 2010 collaboration with Fondazione Telethon and Ospedale San Raffaele. GSK, working with the biotechnology company MolMed S.p.A., developed a manufacturing process that was previously only suitable for clinical trials into one demonstrated to be robust and suitable for commercial supply. [ 8 ] [ 9 ]
Strimvelis has demonstrated efficacy and safety in clinical trials for the treatment of ADA-SCID. [ 10 ]
In April 2016, a committee at the European Medicines Agency (EMA) recommended marketing approval for its use in children with adenosine deaminase deficiency , for whom no matched HSC donor is available, on the basis of a clinical trial that produced a 100% survival rate; the median follow-up time was 7 years after the treatment was administered. [ 3 ] 75% of people who received the treatment needed no further enzyme replacement therapy . [ 11 ] Development efforts had begun 14 years earlier. The total number of children treated was variously reported as 22 [ 12 ] or 18. [ 13 ] Around 80% of patients have no matched donor. [ 14 ] Strimvelis was approved [ 15 ] by the European Commission on 27 May 2016.
As of 2016, the only site approved to manufacture the treatment was MolMed. [ 6 ]
In 2016, Strimvelis obtained Marketing Authorization in Europe while under GSK holding. [ 16 ]
In 2017, GSK announced it was looking to sell off Strimvelis, [ 17 ] and in March 2018, GSK sold Strimvelis to Orchard Therapeutics Ltd.; as of that time there had been only five sales of the product. [ 18 ]
As of 2023, the product has been licensed in Iceland, Norway, Liechtenstein, and the UK. [ 16 ]
The condition affects about 14 people per year in Europe and 12 in the U.S. [ 19 ]
The price for the treatment was set at €594,000 , twice the annual cost of enzyme replacement therapy injections. [ 20 ] Enzyme replacement therapy for ADA requires weekly injections and costs about US$4.25 million for one patient over ten years. [ 14 ]
Strimvelis is the brand name. [ 3 ] The common name is autologous CD34+ enriched cell fraction that contains CD34+ cells transduced with retroviral vector that encodes for the human ADA cDNA sequence . [ 3 ] | https://en.wikipedia.org/wiki/Strimvelis |
In condensed matter physics , a string-net is an extended object whose collective behavior has been proposed as a physical mechanism for topological order by Michael A. Levin and Xiao-Gang Wen . A particular string-net model may involve only closed loops; or networks of oriented, labeled strings obeying branching rules given by some gauge group ; or still more general networks. [ 1 ]
The string-net model is claimed to derive photons, electrons, and U(1) gauge charge, to yield small (relative to the Planck mass ) but nonzero masses, and to suggest that the leptons , quarks , and gluons can be modeled in the same way. In other words, string-net condensation provides a unified origin for photons and electrons (or gauge bosons and fermions ). It can be viewed as an origin of light and electron (or gauge interactions and Fermi statistics ).
However, their model does not account for the chiral coupling between the fermions and the SU(2) gauge bosons in the standard model .
For strings labeled by the positive integers, string-nets are the spin networks studied in loop quantum gravity . This has led to the proposal by Levin and Wen, [ 2 ] and Smolin, Markopoulou and Konopka [ 3 ] that loop quantum gravity's spin networks can give rise to the standard model of particle physics through this mechanism, along with fermi statistics and gauge interactions . To date, a rigorous derivation from LQG's spin networks to Levin and Wen's spin lattice has yet to be done, but the project to do so is called quantum graphity , and in a more recent paper, Tomasz Konopka, Fotini Markopoulou , Simone Severini argued that there are some similarities to spin networks (but not necessarily an exact equivalence) that gives rise to U(1) gauge charge and electrons in the string net mechanism. [ 4 ]
Herbertsmithite may be an example of string-net matter. [ 5 ] [ 6 ]
Z2 spin liquid obtained using slave-particle approach may be the first theoretical example of string-net liquid. [ 7 ] [ 8 ]
The toric code is a two-dimensional spin-lattice that acts as a quantum error-correcting code. It is defined on a two-dimensional lattice with toric boundary conditions with a spin-1/2 on each link. It can be shown that the ground-state of the standard toric code Hamiltonian is an equal-weight superposition of closed-string states. [ 9 ] Such a ground-state is an example of a string-net condensate [ 10 ] which has the same topological order as the Z2 spin liquid above. | https://en.wikipedia.org/wiki/String-net_liquid |
A string bog or string mire is a bog consisting of slightly elevated ridges and islands, with woody plants, alternating with flat, wet sedge mat areas. String bogs occur on slightly sloping surfaces, with the ridges at right angles to the direction of water flow. They are an example of patterned vegetation .
String bogs are also known as aapa moors or aapa mires (from Finnish aapasuo ) or Strangmoor (from the German ). [ 1 ]
A string bog has a pattern of narrow (2–3m wide), low (less than 1m high) ridges oriented at right angles to the direction of drainage with wet depressions or pools occurring between the ridges. The water and peat are very low in nutrients because the water has been derived from other ombrotrophic wetlands, which receive all of their water and nutrients from precipitation rather than from streams or springs. [ clarification needed ] The peat thickness is greater than 1m.
String bogs are features associated with periglacial climates, which are characterized by long periods of subzero temperatures. The active layer remains frozen for long periods and melts in the spring thaw. Slow melting produces characteristic mass-movement processes and features associated with specific periglacial environments.
| https://en.wikipedia.org/wiki/String_bog
String girdling Earth is a mathematical puzzle with a counterintuitive solution. In a version of this puzzle, string is tightly wrapped around the equator of a perfectly spherical Earth. If the string should be raised 1 metre (3 ft 3 in) off the ground, all the way along the equator, how much longer would the string be?
Alternatively, 1 metre (3 ft 3 in) of string is spliced into the original string, and the extended string rearranged so that it is at a uniform height above the equator. The question that is then posed is whether the gap between string and Earth will allow the passage of a car, a cat or a thin knife blade.
As the string must be raised all along the entire 40,000 km (25,000 mi) circumference, one might expect several kilometres of additional string. Surprisingly, the answer is 2 π m or around 6.3 metres (21 ft).
In the second phrasing, considering that 1 metre (3 ft 3 in) is almost negligible compared with the 40,000 km (25,000 mi) circumference, the first response may be that the new position of the string will be no different from the original surface-hugging position. The answer is that a cat will easily pass through the gap, the size of which will be 1 / ( 2 π ) metres or about 16 cm (6.3 in).
Even more surprising is that the size of the sphere or circle around which the string is spanned is irrelevant, and may be anything from the size of an atom to the Milky Way — the result depends only on the amount it is raised. Moreover, as in the coin-rolling problem , the shape the string girdles need not be a circle: 2 π times the offset is added when it is any simple polygon or closed curve which does not intersect itself. If the shape is complex , 2 π times the offset, times the absolute value of its turning number must be added. [ 1 ]
This diagram gives a visual analogue using a square: regardless of the size of the square, the added perimeter is the sum of the four blue arcs, a circle with the same radius as the offset.
More formally, let c be the Earth's circumference, r its radius, Δc the added string length and Δr the added radius. As a circle of radius R has a circumference of 2 π R ,
c + Δ c = 2 π ( r + Δ r ) = 2 π r + 2 π Δ r = c + 2 π Δ r , {\displaystyle c+\Delta c=2\pi (r+\Delta r)=2\pi r+2\pi \,\Delta r=c+2\pi \,\Delta r,} so that Δ c = 2 π Δ r {\displaystyle \Delta c=2\pi \,\Delta r} ,
regardless of the value of c .
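The result is easy to verify numerically. The short sketch below checks the two cases discussed earlier: the extra string needed for a 1 m raise, and the gap produced by splicing in 1 m of string.

```python
import math

# Case 1: raise the string 1 m everywhere around the Earth.
# The extra length needed is 2*pi*(delta r), independent of the circumference.
extra_length = 2 * math.pi * 1.0
print(f"extra string for a 1 m raise: {extra_length:.2f} m")   # ~6.28 m

# Case 2: splice 1 m of string into the original loop.
# The uniform gap is (delta c)/(2*pi) metres, again independent of the circumference.
gap = 1.0 / (2 * math.pi)
print(f"gap from 1 m of extra string: {gap * 100:.1f} cm")     # ~15.9 cm, enough for a cat
```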
This observation means that an athletics track has the same offset between starting lines on each lane, equal to 2 π times the width of the lane, whether the circumference of the inside lane is the standard 400 m (1,300 ft) or the size of a galaxy.
As aircraft fly at high altitude to save fuel due to lower drag , the formula shows that a Δa rise in altitude lengthens a flight along an entire great circle by 2 π Δa . For example, each additional 1000 feet adds 2 × π × 1000 ≈ 6283 feet (about 1 nautical mile ) around the whole Earth. In SI units , each kilometre of altitude increases the distance flown by about 15.7 cm per kilometre travelled. The gain in efficiency far outweighs the negligible extra distance. [ 2 ] | https://en.wikipedia.org/wiki/String_girdling_Earth
In machine learning and data mining , a string kernel is a kernel function that operates on strings , i.e. finite sequences of symbols that need not be of the same length. String kernels can be intuitively understood as functions measuring the similarity of pairs of strings: the more similar two strings a and b are, the higher the value of a string kernel K ( a , b ) will be.
Using string kernels with kernelized learning algorithms such as support vector machines allow such algorithms to work with strings, without having to translate these to fixed-length, real-valued feature vectors . [ 1 ] String kernels are used in domains where sequence data are to be clustered or classified , e.g. in text mining and gene analysis . [ 2 ]
Suppose one wants to compare some text passages automatically and indicate their relative similarity.
For many applications, it might be sufficient to find some keywords which match exactly.
One example where exact matching is not always enough is found in spam detection . [ 3 ] Another would be in computational gene analysis, where homologous genes have mutated , resulting in common subsequences along with deleted, inserted or replaced symbols.
Since several well-proven data clustering, classification and information retrieval methods (for example support vector machines) are designed to work on vectors (i.e. data are elements of a vector space), using a string kernel allows the extension of these methods to handle sequence data.
The string kernel method is to be contrasted with earlier approaches for text classification, where feature vectors only indicated the presence or absence of a word. Not only does it improve on these approaches, but it is an example of a whole class of kernels adapted to data structures, which began to appear at the turn of the 21st century. A survey of such methods has been compiled by Gärtner. [ 4 ]
In bioinformatics string kernels are used especially to transform biological sequences such as proteins or DNA into vectors for further use in machine learning models. An example of a string kernel used for that purpose is the profile kernel. [ 5 ]
A kernel on a domain D {\displaystyle D} is a function K : D × D → R {\displaystyle K:D\times D\rightarrow \mathbb {R} } satisfying some conditions (being symmetric in the arguments, continuous and positive semidefinite in a certain sense).
Mercer's theorem asserts that K {\displaystyle K} can then be expressed as K ( x , y ) = φ ( x ) ⋅ φ ( y ) {\displaystyle K(x,y)=\varphi (x)\cdot \varphi (y)} with φ {\displaystyle \varphi } mapping the arguments into an inner product space .
We can now reproduce the definition of a string subsequence kernel [ 1 ] on strings over an alphabet Σ {\displaystyle \Sigma } . Coordinate-wise, the mapping is defined as follows:
The i {\displaystyle \mathbf {i} } are multiindices and u {\displaystyle u} is a string of length n {\displaystyle n} :
subsequences can occur in a non-contiguous manner, but gaps are penalized.
The multiindex i {\displaystyle \mathbf {i} } gives the positions of the characters matching u {\displaystyle u} in s {\displaystyle s} . l ( i ) {\displaystyle l(\mathbf {i} )} is the difference between the first and last entry in i {\displaystyle \mathbf {i} } , that is: how far apart in s {\displaystyle s} the subsequence matching u {\displaystyle u} is.
The parameter λ {\displaystyle \lambda } may be set to any value between 0 {\displaystyle 0} (gaps are not allowed, as only 0 0 {\displaystyle 0^{0}} is not 0 {\displaystyle 0} but 1 {\displaystyle 1} ) and 1 {\displaystyle 1} (even widely-spread "occurrences" are weighted the same as appearances as a contiguous substring, as 1 l ( i ) = 1 {\displaystyle 1^{l(\mathbf {i} )}=1} ).
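To make the definition concrete, the sketch below is a deliberately naive Python implementation of this gap-weighted subsequence kernel: it enumerates all index tuples matching each subsequence u of length n and weights them by λ raised to l(i), following the definition of l(i) given above. It is exponential in n and meant only to illustrate the definition; practical implementations use dynamic programming, and the helper names are ad hoc.

```python
from itertools import combinations

def phi(s, u, lam):
    """Coordinate phi_u(s): sum of lam**l(i) over index tuples i with s[i] == u.
    l(i) is taken, as in the text above, to be the difference between the last
    and first index of the match (other formulations use the span length, last - first + 1)."""
    n = len(u)
    total = 0.0
    for idx in combinations(range(len(s)), n):
        if all(s[i] == c for i, c in zip(idx, u)):
            total += lam ** (idx[-1] - idx[0])
    return total

def subsequence_kernel(s, t, n, lam=0.5):
    """K(s, t) = sum over all length-n strings u of phi_u(s) * phi_u(t).
    Only subsequences that actually occur in s contribute, so u ranges over those."""
    us = {''.join(s[i] for i in idx) for idx in combinations(range(len(s)), n)}
    return sum(phi(s, u, lam) * phi(t, u, lam) for u in us)

# Toy example: similarity of two short strings using subsequences of length 2.
print(subsequence_kernel("cat", "cart", n=2, lam=0.5))
```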
For several relevant algorithms, data enters into the algorithm only in expressions involving an inner product of feature vectors, hence the name kernel methods . A desirable consequence of this is that one does not need to explicitly calculate the transformation ϕ ( x ) {\displaystyle \phi (x)} , only the inner product via the kernel, which may be a lot quicker, especially when approximated . [ 1 ] | https://en.wikipedia.org/wiki/String_kernel
A string potentiometer is a transducer used to detect and measure linear position and velocity using a flexible cable and spring-loaded spool. Other common names include string pot , cable-extension transducer , draw wire sensor , and yo-yo sensor .
String potentiometers are composed of four main parts: a measuring cable, spool, spring, and rotational sensor. Inside the transducer's housing, a stainless steel cable is wound on a precisely machined constant diameter cylindrical spool that turns as the measuring cable reels and unreels.
To maintain cable tension, a torsion spring is coupled to the spool. The spool is coupled to the shaft of a rotational sensor (a potentiometer or rotary encoder ). As the transducer's cable extends along with the movable object, it causes the spool and sensor shafts to rotate. The rotating shaft creates an electrical signal proportional to the cable's linear extension or velocity.
String potentiometers are used to measure the position of a moving object. The measurement cable can be connected directly to the moving part, giving a constant measurement of its linear position. This simple type of measurement device has been in use by engineers and designers for about 40 years. [ citation needed ] String potentiometers are generally durable, simple to use, and inexpensive.
The original application for string pots in the 1960s was aerospace cyclic fatigue testing . [ citation needed ] Engineers designed and built these units originally to measure movement of airplane parts as they were cycled during testing. [ citation needed ] Today, the string pot is used both for testing and as a component of equipment. [ citation needed ] Common applications include:
Hydraulic cylinders are used in many industries such as forklifts , cranes and aerials, material handling, die-casting , oil and gas, robotics and automation. [ citation needed ] Measurement of the extension of the cylinder requires knowledge of its current position, and often a string potentiometer is used.
The string potentiometer may be connected as a three-wire tapped resistor (voltage divider), in a control circuit, or may be packaged with electronics to produce a measurement signal in a useful form, such as a variable voltage 0-10 VDC, variable current 4-20mA , pulse encoder, Bus ( DeviceNet and Canbus ) and RS-232 communications.
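Regardless of the output form, converting the raw signal back to a cable extension is a simple linear scaling of the sensor's span. The sketch below shows this for a generic 4-20 mA output; the measurement range and the example reading are illustrative values, not the specification of any particular product.

```python
def position_from_current(current_ma, full_range_mm, zero_ma=4.0, span_ma=16.0):
    """Convert a 4-20 mA loop reading to cable extension in millimetres.
    4 mA corresponds to the fully retracted cable, 20 mA to full extension."""
    fraction = (current_ma - zero_ma) / span_ma
    # Clamp to the valid range to guard against small out-of-range readings.
    fraction = max(0.0, min(1.0, fraction))
    return fraction * full_range_mm

# Example: a transducer with a 1250 mm measurement range reading 12.8 mA.
print(f"{position_from_current(12.8, full_range_mm=1250.0):.1f} mm extended")
```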
Measurement ranges vary from about 1 inch to over 100 feet and are available in many appropriate package sizes.
Since the measuring cable may sag or be deflected by wind or gravity, the overall precision of a string potentiometer measurement is limited. The cable mechanism limits the speed at which the measured object can move. Changing temperatures affect both the length of the cable and the resistance value of the potentiometer. Where multiple objects, such as articles on an assembly line, or objects that are hot or coated with wet paint are to be measured, a non-contacting method is required.
Other linear position measurements methods include LVDTs , capacitive and inductive sensors, and rack-and-pinion transducers that convert linear motion into rotary motion. Optical (time-of-flight), ultrasonic, and radar transducers exist and find specialized applications. | https://en.wikipedia.org/wiki/String_potentiometer |
The stringent response , also called stringent control , is a stress response of bacteria and plant chloroplasts in reaction to amino-acid starvation, [ 1 ] fatty acid limitation, [ 2 ] iron limitation, [ 3 ] heat shock [ 4 ] and other stress conditions and growth processes. [ 5 ] The stringent response is signaled by the alarmone (p)ppGpp , and modulates transcription of up to 1/3 of all genes in the cell. This in turn causes the cell to divert resources away from growth and division and toward amino acid synthesis in order to promote survival until nutrient conditions improve.
In Escherichia coli , (p)ppGpp production is mediated by the ribosomal protein L11 ( rplK resp. relC ) and the ribosome-associated (p)ppGpp synthetase I, RelA; deacylated tRNA bound in the ribosomal A-site is the primary induction signal. [ 1 ] RelA converts GTP and ATP into pppGpp by adding the pyrophosphate from ATP onto the 3' carbon of the ribose in GTP, releasing AMP . pppGpp is converted to ppGpp by the gpp gene product, releasing Pi . ppGpp is converted to GDP by the spoT gene product, releasing pyrophosphate ( PPi ).
GDP is converted to GTP by the ndk gene product. Nucleoside triphosphate (NTP) provides the Pi, and is converted to Nucleoside diphosphate (NDP).
In other bacteria, the stringent response is mediated by a variety of RelA/SpoT Homologue (RSH) proteins, [ 6 ] with some having only synthetic, or hydrolytic or both (Rel) activities. [ 7 ]
During the stringent response, (p)ppGpp accumulation affects the resource-consuming cell processes replication , transcription , and translation . (p)ppGpp is thought to bind RNA polymerase and alter the transcriptional profile, decreasing the synthesis of translational machinery (such as rRNA and tRNA ), and increasing the transcription of biosynthetic genes. [ 8 ] Additionally, the initiation of new rounds of replication is inhibited and the cell cycle arrests until nutrient conditions improve. [ 9 ] Translational GTPases involved in protein biosynthesis are also affected by ppGpp, with Initiation Factor 2 (IF2) being the main target. [ 10 ]
Chemical reaction catalyzed by RelA: ATP + GTP → AMP + pppGpp
Chemical reaction catalyzed by SpoT: ppGpp + H 2 O → GDP + PPi,
or, for the pentaphosphate form: pppGpp + H 2 O → GTP + PPi | https://en.wikipedia.org/wiki/Stringent_response
Stringers are filaments of slag left in wrought iron after the production process. In their correct proportions their presence is beneficial, as they help to control the ductility of the finished product, but when the proportion of slag is too high, or when the filaments run at right angles to the direction of tension , they can cause weakness.
Wrought iron is no longer made. [ 1 ] The particles of slag present in the iron after preparation by puddling were drawn into long fibres during the forging or rolling process. The proportion of slag was intended to be about 3%, but the process was difficult to control and examples with up to 10% slag were produced. [ 2 ]
Stays made from puddled iron bar were used as a cheaper alternative to copper for joining the inner and outer firebox plates of steam locomotives. The incorporated stringers gave flexibility akin to stranded wire rope and stays made of the material were therefore resistant to snapping in service. [ 3 ] Wrought iron rivets made from iron bar typically contained stringer filaments running the length of the rivet, but filaments at right angles to the tension, particularly beneath the head, caused weakness. [ 2 ]
Anisotropy | https://en.wikipedia.org/wiki/Stringer_(slag) |
The strip packing problem is a 2-dimensional geometric minimization problem.
Given a set of axis-aligned rectangles and a strip of bounded width and infinite height, determine an overlapping-free packing of the rectangles into the strip, minimizing its height.
This problem is a cutting and packing problem and is classified as an Open Dimension Problem according to Wäscher et al. [ 1 ]
This problem arises in the area of scheduling, where it models jobs that require a contiguous portion of the memory over a given time period. Another example is the area of industrial manufacturing, where rectangular pieces need to be cut out of a sheet of material (e.g., cloth or paper) that has a fixed width but infinite length, and one wants to minimize the wasted material.
This problem was first studied in 1980. [ 2 ] It is strongly NP-hard and there exists no polynomial-time approximation algorithm with a ratio smaller than 3 / 2 {\displaystyle 3/2} unless P = N P {\displaystyle P=NP} . However, the best approximation ratio achieved so far (by a polynomial-time algorithm by Harren et al. [ 3 ] ) is ( 5 / 3 + ε ) {\displaystyle (5/3+\varepsilon )} , leaving open the question of whether there is an algorithm with approximation ratio 3 / 2 {\displaystyle 3/2} .
An instance I = ( I , W ) {\displaystyle I=({\mathcal {I}},W)} of the strip packing problem consists of a strip with width W = 1 {\displaystyle W=1} and infinite height, as well as a set I {\displaystyle {\mathcal {I}}} of rectangular items.
Each item i ∈ I {\displaystyle i\in {\mathcal {I}}} has a width w i ∈ ( 0 , 1 ] ∩ Q {\displaystyle w_{i}\in (0,1]\cap \mathbb {Q} } and a height h i ∈ ( 0 , 1 ] ∩ Q {\displaystyle h_{i}\in (0,1]\cap \mathbb {Q} } .
A packing of the items is a mapping that maps each lower-left corner of an item i ∈ I {\displaystyle i\in {\mathcal {I}}} to a position ( x i , y i ) ∈ ( [ 0 , 1 − w i ] ∩ Q ) × Q ≥ 0 {\displaystyle (x_{i},y_{i})\in ([0,1-w_{i}]\cap \mathbb {Q} )\times \mathbb {Q} _{\geq 0}} inside the strip.
An inner point of a placed item i ∈ I {\displaystyle i\in {\mathcal {I}}} is a point from the set i n n ( i ) = { ( x , y ) ∈ Q × Q | x i < x < x i + w i , y i < y < y i + h i } {\displaystyle \mathrm {inn} (i)=\{(x,y)\in \mathbb {Q} \times \mathbb {Q} |x_{i}<x<x_{i}+w_{i},y_{i}<y<y_{i}+h_{i}\}} .
Two (placed) items overlap if they share an inner point.
The height of the packing is defined as max { y i + h i | i ∈ I } {\displaystyle \max\{y_{i}+h_{i}|i\in {\mathcal {I}}\}} .
The objective is to find an overlapping-free packing of the items inside the strip while minimizing the height of the packing.
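A minimal Python sketch of these definitions may make them more concrete; the Item class and all function names are illustrative choices, not part of any standard library:

```python
from dataclasses import dataclass

@dataclass
class Item:
    width: float   # w_i, formally a rational in (0, 1]
    height: float  # h_i, formally a rational in (0, 1]

def overlap(a, pos_a, b, pos_b):
    """Two placed items overlap iff they share an inner point (touching edges is allowed)."""
    ax, ay = pos_a
    bx, by = pos_b
    return (ax < bx + b.width and bx < ax + a.width and
            ay < by + b.height and by < ay + a.height)

def packing_height(items, positions):
    """Height of a packing: the maximum of y_i + h_i over all placed items."""
    return max(pos[1] + item.height for item, pos in zip(items, positions))
```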
This definition is used for all polynomial-time algorithms. For pseudo-polynomial time and FPT -algorithms, the definition is changed slightly to simplify notation: all appearing sizes are integral, and in particular the width of the strip is given by an arbitrary integer larger than 1. Note that these two definitions are equivalent.
There are several variants of the strip packing problem that have been studied. These variants concern the objects' geometry, the problem's dimension, the rotateability of the items, and the structure of the packing. [ 4 ]
Geometry: In the standard variant of this problem, the set of given items consists of rectangles.
In an often considered subcase, all the items have to be squares. This variant was already considered in the first paper about strip packing. [ 2 ] Additionally, variants where the shapes are circular or even irregular have been studied. In the latter case, it is referred to as irregular strip packing .
Dimension: When not mentioned differently, the strip packing problem is a 2-dimensional problem. However, it also has been studied in three or even more dimensions. In this case, the objects are hyperrectangles , and the strip is open-ended in one dimension and bounded in the residual ones.
Rotation: In the classical strip packing problem, the items are not allowed to be rotated. However, variants have been studied where rotating by 90 degrees or even an arbitrary angle is allowed.
Structure: In the general strip packing problem, the structure of the packing is irrelevant.
However, there are applications that have explicit requirements on the structure of the packing. One of these requirements is to be able to cut the items from the strip by horizontal or vertical edge-to-edge cuts.
Packings that allow this kind of cutting are called guillotine packing .
The strip packing problem contains the bin packing problem as a special case when all the items have the same height 1.
For this reason, it is strongly NP-hard, and there can be no polynomial time approximation algorithm that has an approximation ratio smaller than 3 / 2 {\displaystyle 3/2} unless P = N P {\displaystyle P=NP} .
Furthermore, unless P = N P {\displaystyle P=NP} , there cannot be a pseudo-polynomial time algorithm that has an approximation ratio smaller than 5 / 4 {\displaystyle 5/4} , [ 5 ] which can be proven by a reduction from the strongly NP-complete 3-partition problem .
Note that both lower bounds 3 / 2 {\displaystyle 3/2} and 5 / 4 {\displaystyle 5/4} also hold for the case that a rotation of the items by 90 degrees is allowed.
Additionally, it was proven by Ashok et al. [ 6 ] that strip packing is W[1]-hard when parameterized by the height of the optimal packing.
There are two trivial lower bounds on optimal solutions.
The first is the height of the largest item.
Define h max ( I ) := max { h ( i ) | i ∈ I } {\displaystyle h_{\max }(I):=\max\{h(i)|i\in {\mathcal {I}}\}} .
Then it holds that
O P T ( I ) ≥ h max ( I ) {\displaystyle OPT(I)\geq h_{\max }(I)} .
Another lower bound is given by the total area of the items.
Define A R E A ( I ) := ∑ i ∈ I h ( i ) w ( i ) {\displaystyle \mathrm {AREA} ({\mathcal {I}}):=\sum _{i\in {\mathcal {I}}}h(i)w(i)} then it holds that
O P T ( I ) ≥ A R E A ( I ) / W {\displaystyle OPT(I)\geq \mathrm {AREA} ({\mathcal {I}})/W} .
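A short sketch of these two trivial bounds, reusing the hypothetical Item class from above:

```python
def trivial_lower_bounds(items, W=1.0):
    """Maximum of the two trivial lower bounds: the tallest item and the total area divided by W."""
    h_max = max(item.height for item in items)
    area = sum(item.width * item.height for item in items)
    return max(h_max, area / W)
```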
The following two lower bounds take notice of the fact that certain items cannot be placed next to each other in the strip, and can be computed in O ( n log ( n ) ) {\displaystyle {\mathcal {O}}(n\log(n))} . [ 7 ] For the first lower bound, assume that the items are sorted by non-increasing height. Define k := max { i : ∑ j = 1 i w ( j ) ≤ W } {\displaystyle k:=\max\{i:\sum _{j=1}^{i}w(j)\leq W\}} . For each l > k {\displaystyle l>k} , define i ( l ) ≤ k {\displaystyle i(l)\leq k} as the first index such that w ( l ) + ∑ j = 1 i ( l ) w ( j ) > W {\displaystyle w(l)+\sum _{j=1}^{i(l)}w(j)>W} . Then it holds that
O P T ( I ) ≥ max { h ( l ) + h ( i ( l ) ) | l > k ∧ w ( l ) + ∑ j = 1 i ( l ) w ( j ) > W } {\displaystyle OPT(I)\geq \max\{h(l)+h(i(l))|l>k\wedge w(l)+\sum _{j=1}^{i(l)}w(j)>W\}} . [ 7 ]
For the second lower bound, partition the set of items into three sets. Let α ∈ [ 1 , W / 2 ] ∩ N {\displaystyle \alpha \in [1,W/2]\cap \mathbb {N} } and define I 1 ( α ) := { i ∈ I | w ( i ) > W − α } {\displaystyle {\mathcal {I}}_{1}(\alpha ):=\{i\in {\mathcal {I}}|w(i)>W-\alpha \}} , I 2 ( α ) := { i ∈ I | W − α ≥ w ( i ) > W / 2 } {\displaystyle {\mathcal {I}}_{2}(\alpha ):=\{i\in {\mathcal {I}}|W-\alpha \geq w(i)>W/2\}} , and I 3 ( α ) := { i ∈ I | W / 2 ≥ w ( i ) > α } {\displaystyle {\mathcal {I}}_{3}(\alpha ):=\{i\in {\mathcal {I}}|W/2\geq w(i)>\alpha \}} . Then it holds that
O P T ( I ) ≥ max α ∈ [ 1 , W / 2 ] ∩ N { ∑ i ∈ I 1 ( α ) ∪ I 2 ( α ) h ( i ) + ( ∑ i ∈ I 3 ( α ) h ( i ) w ( i ) − ∑ i ∈ I 2 ( α ) ( W − w ( i ) ) h ( i ) W ) + } {\displaystyle OPT(I)\geq \max _{\alpha \in [1,W/2]\cap \mathbb {N} }{\Bigg \{}\sum _{i\in {\mathcal {I}}_{1}(\alpha )\cup {\mathcal {I}}_{2}(\alpha )}h(i)+\left({\frac {\sum _{i\in {\mathcal {I}}_{3}(\alpha )}h(i)w(i)-\sum _{i\in {\mathcal {I}}_{2}(\alpha )}(W-w(i))h(i)}{W}}\right)_{+}{\Bigg \}}} , [ 7 ] where ( x ) + := max { x , 0 } {\displaystyle (x)_{+}:=\max\{x,0\}} for each x ∈ R {\displaystyle x\in \mathbb {R} } .
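The first of these two bounds can be computed directly from its definition. The following sketch is a straightforward (not asymptotically optimized) version, again assuming the hypothetical Item class from above:

```python
def pairwise_lower_bound(items, W=1.0):
    """First of the two bounds above: an item l that cannot stand next to the first
    i(l) (tallest) items forces the height h(l) + h(i(l))."""
    items = sorted(items, key=lambda it: it.height, reverse=True)
    n = len(items)
    prefix = [0.0]
    for it in items:                      # prefix[i] = w(1) + ... + w(i), 1-based
        prefix.append(prefix[-1] + it.width)
    k = max((i for i in range(1, n + 1) if prefix[i] <= W), default=0)
    bound = 0.0
    for l in range(k + 1, n + 1):
        w_l, h_l = items[l - 1].width, items[l - 1].height
        # i(l): first index <= k whose width prefix leaves no room for item l
        i_l = next((i for i in range(1, k + 1) if w_l + prefix[i] > W), None)
        if i_l is not None:
            bound = max(bound, h_l + items[i_l - 1].height)
    return bound
```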
On the other hand, Steinberg [ 8 ] has shown that the height of an optimal solution can be upper bounded by
O P T ( I ) ≤ 2 max { h max ( I ) , A R E A ( I ) / W } . {\displaystyle OPT(I)\leq 2\max\{h_{\max }(I),\mathrm {AREA} ({\mathcal {I}})/W\}.}
More precisely, he showed that, given W ≥ w max ( I ) {\displaystyle W\geq w_{\max }({\mathcal {I}})} and H ≥ h max ( I ) {\displaystyle H\geq h_{\max }(I)} , the items I {\displaystyle {\mathcal {I}}} can be placed inside a box with width W {\displaystyle W} and height H {\displaystyle H} if
W H ≥ 2 A R E A ( I ) + ( 2 w max ( I ) − W ) + ( 2 h max ( I ) − H ) + {\displaystyle WH\geq 2\mathrm {AREA} ({\mathcal {I}})+(2w_{\max }({\mathcal {I}})-W)_{+}(2h_{\max }(I)-H)_{+}} , where ( x ) + := max { x , 0 } {\displaystyle (x)_{+}:=\max\{x,0\}} .
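As a sketch, this sufficient condition can be checked directly; the function name is illustrative:

```python
def steinberg_condition(items, W, H):
    """Returns True if Steinberg's sufficient condition guarantees that all items
    fit into a box of width W and height H."""
    w_max = max(item.width for item in items)
    h_max = max(item.height for item in items)
    area = sum(item.width * item.height for item in items)
    plus = lambda x: max(x, 0)
    return (w_max <= W and h_max <= H and
            W * H >= 2 * area + plus(2 * w_max - W) * plus(2 * h_max - H))
```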
Since this problem is NP-hard, approximation algorithms have been studied for this problem.
Most of the heuristic approaches have an approximation ratio between 2 {\displaystyle 2} and 3 {\displaystyle 3} .
Finding an algorithm with a ratio below 2 {\displaystyle 2} is considerably harder, and the corresponding algorithms become more complex in both their running time and their description.
The smallest approximation ratio achieved so far is ( 5 / 3 + ε ) {\displaystyle (5/3+\varepsilon )} .
This algorithm was first described by Baker et al. [ 2 ] It works as follows:
Let L {\displaystyle L} be a sequence of rectangular items.
The algorithm iterates the sequence in the given order.
For each considered item r ∈ L {\displaystyle r\in L} , it searches for the bottom-most position to place it and then shifts it as far to the left as possible.
Hence, it places r {\displaystyle r} at the bottom-most left-most possible coordinate ( x , y ) {\displaystyle (x,y)} in the strip.
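A minimal sketch of this rule, assuming the hypothetical Item class from above; restricting the candidate coordinates to corners of already placed items is a common simplification and not part of the original description:

```python
def bottom_left(items, W=1.0):
    """Bottom-up left-justified heuristic (sketch): each item is put at the lowest
    feasible position, ties broken by the left-most x coordinate."""
    placed = []  # list of (item, x, y)

    def fits(item, x, y):
        if x < 0 or x + item.width > W:
            return False
        return not any(x < ox + o.width and ox < x + item.width and
                       y < oy + o.height and oy < y + item.height
                       for o, ox, oy in placed)

    for item in items:
        xs = [0.0] + [ox + o.width for o, ox, oy in placed]
        ys = [0.0] + [oy + o.height for o, ox, oy in placed]
        y, x = min((y, x) for x in xs for y in ys if fits(item, x, y))
        placed.append((item, x, y))
    return placed
```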
This algorithm has the following properties:
This algorithm was first described by Coffman et al. [ 9 ] in 1980 and works as follows:
Let I {\displaystyle {\mathcal {I}}} be the given set of rectangular items.
First, the algorithm sorts the items by order of nonincreasing height.
Then, starting at position ( 0 , 0 ) {\displaystyle (0,0)} , the algorithm places the items next to each other in the strip until the next item would overlap the right border of the strip.
At this point, the algorithm defines a new level at the top of the tallest item in the current level and places the items next to each other in this new level.
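A minimal sketch of NFDH, assuming the Item class from above:

```python
def next_fit_decreasing_height(items, W=1.0):
    """Next-Fit Decreasing-Height (sketch): sort by non-increasing height and fill
    shelves from left to right; a new shelf is opened on top of the current one as
    soon as the next item would cross the right border."""
    items = sorted(items, key=lambda it: it.height, reverse=True)
    positions = []
    shelf_y, shelf_height, x = 0.0, 0.0, 0.0
    for item in items:
        if x + item.width > W:        # next item would overlap the right border
            shelf_y += shelf_height   # open a new shelf above the current one
            shelf_height = item.height
            x = 0.0
        elif shelf_height == 0.0:     # first item defines the height of the first shelf
            shelf_height = item.height
        positions.append((item, x, shelf_y))
        x += item.width
    return positions, shelf_y + shelf_height
```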
This algorithm has the following properties: it can be implemented to run in time O ( n log n ) {\displaystyle {\mathcal {O}}(n\log n)} , and the height of the produced packing is at most 2 O P T ( I ) + h max ( I ) {\displaystyle 2OPT(I)+h_{\max }(I)} , which yields an (absolute) approximation ratio of at most 3.
This algorithm, first described by Coffman et al. [ 9 ] in 1980, works similar to the NFDH algorithm.
However, when placing the next item, the algorithm scans the levels from bottom to top and places the item in the first level on which it will fit.
A new level is only opened if the item does not fit in any previous ones.
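A minimal sketch of FFDH under the same assumptions as the previous sketches:

```python
def first_fit_decreasing_height(items, W=1.0):
    """First-Fit Decreasing-Height (sketch): like NFDH, but each item goes onto the
    lowest already opened shelf with enough horizontal room; a new shelf is opened
    only if no existing shelf fits."""
    items = sorted(items, key=lambda it: it.height, reverse=True)
    shelves = []      # each shelf is [bottom y, shelf height, used width]
    positions = []
    for item in items:
        for shelf in shelves:                       # scan shelves from bottom to top
            if shelf[2] + item.width <= W:          # fits: heights are non-increasing
                positions.append((item, shelf[2], shelf[0]))
                shelf[2] += item.width
                break
        else:                                       # no shelf fits: open a new one on top
            y = shelves[-1][0] + shelves[-1][1] if shelves else 0.0
            shelves.append([y, item.height, item.width])
            positions.append((item, 0.0, y))
    return positions, (shelves[-1][0] + shelves[-1][1]) if shelves else 0.0
```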
This algorithm has the following properties: it can be implemented to run in time O ( n log n ) {\displaystyle {\mathcal {O}}(n\log n)} , and the height of the produced packing is at most 1.7 O P T ( I ) + h max ( I ) {\displaystyle 1.7OPT(I)+h_{\max }(I)} .
This algorithm was first described by Coffman et al. [ 9 ] For a given set of items I {\displaystyle {\mathcal {I}}} and strip with width W {\displaystyle W} , it works as follows:
This algorithm has the following properties:
For a given set of items I {\displaystyle {\mathcal {I}}} and strip with width W {\displaystyle W} , it works as follows:
This algorithm has the following properties:
This algorithm is an extension of Sleator's approach and was first described by Golan. [ 11 ] It places the items in nonincreasing order of width.
The intuitive idea is to split the strip into sub-strips while placing some items.
Whenever possible, the algorithm places the current item i {\displaystyle i} side by side with an already placed item j {\displaystyle j} .
In this case, it splits the corresponding sub-strip into two pieces: one containing the first item j {\displaystyle j} and the other containing the current item i {\displaystyle i} .
If this is not possible, it places i {\displaystyle i} on top of an already placed item and does not split the sub-strip.
This algorithm creates a set S of sub-strips. For each sub-strip s ∈ S we know its lower left corner s.xposition and s.yposition , its width s.width , the horizontal lines parallel to the upper and lower border of the item placed last inside this sub-strip s.upper and s.lower , as well as the width of that last-placed item s.itemWidth .
This algorithm has the following properties:
This algorithm was first described by Schiermeyer. [ 13 ] The description of this algorithm needs some additional notation.
For a placed item i ∈ I {\displaystyle i\in {\mathcal {I}}} , its lower left corner is denoted by ( a i , c i ) {\displaystyle (a_{i},c_{i})} and its upper right corner by ( b i , d i ) {\displaystyle (b_{i},d_{i})} .
Given a set of items I {\displaystyle {\mathcal {I}}} and a strip of width W {\displaystyle W} , it works as follows:
This algorithm has the following properties:
Steinberg's algorithm is recursive. Given a set of rectangular items I {\displaystyle {\mathcal {I}}} and a rectangular target region with width W {\displaystyle W} and height H {\displaystyle H} , it applies one of four reduction rules, each of which places some of the items and leaves a smaller rectangular region with the same properties as before with respect to the residual items.
Consider the following notations: Given a set of items I {\displaystyle {\mathcal {I}}} we denote by h max ( I ) {\displaystyle h_{\max }({\mathcal {I}})} the tallest item height in I {\displaystyle {\mathcal {I}}} , w max ( I ) {\displaystyle w_{\max }({\mathcal {I}})} the largest item width appearing in I {\displaystyle {\mathcal {I}}} and by A R E A ( I ) := ∑ i ∈ I w ( i ) h ( i ) {\displaystyle \mathrm {AREA} ({\mathcal {I}}):=\sum _{i\in {\mathcal {I}}}w(i)h(i)} the total area of these items.
Steinberg shows that if
h max ( I ) ≤ H {\displaystyle h_{\max }({\mathcal {I}})\leq H} , w max ( I ) ≤ W {\displaystyle w_{\max }({\mathcal {I}})\leq W} , and 2 A R E A ( I ) ≤ W ⋅ H − ( 2 h max ( I ) − H ) + ( 2 w max ( I ) − W ) + {\displaystyle 2\,\mathrm {AREA} ({\mathcal {I}})\leq W\cdot H-(2h_{\max }({\mathcal {I}})-H)_{+}(2w_{\max }({\mathcal {I}})-W)_{+}} , where ( a ) + := max { 0 , a } {\displaystyle (a)_{+}:=\max\{0,a\}} ,
then all the items can be placed inside the target region of size W × H {\displaystyle W\times H} .
Each reduction rule produces a smaller target area and a subset of items that still have to be placed. If the condition above holds before a procedure is applied, then the created subproblem has this property as well.
Procedure 1 : It can be applied if w max ( I ′ ) ≥ W / 2 {\displaystyle w_{\max }({\mathcal {I}}')\geq W/2} .
Procedure 2 : It can be applied if the following conditions hold: w max ( I ) ≤ W / 2 {\displaystyle w_{\max }({\mathcal {I}})\leq W/2} , h max ( I ) ≤ H / 2 {\displaystyle h_{\max }({\mathcal {I}})\leq H/2} , and there exist two different items i , i ′ ∈ I {\displaystyle i,i'\in {\mathcal {I}}} with w ( i ) ≥ W / 4 {\displaystyle w(i)\geq W/4} , w ( i ′ ) ≥ W / 4 {\displaystyle w(i')\geq W/4} , h ( i ) ≥ H / 4 {\displaystyle h(i)\geq H/4} , h ( i ′ ) ≥ H / 4 {\displaystyle h(i')\geq H/4} and 2 ( A R E A ( I ) − w ( i ) h ( i ) − w ( i ′ ) h ( i ′ ) ) ≤ ( W − max { w ( i ) , w ( i ′ ) } ) H {\displaystyle 2(\mathrm {AREA} ({\mathcal {I}})-w(i)h(i)-w(i')h(i'))\leq (W-\max\{w(i),w(i')\})H} .
Procedure 3 : It can be applied if the following conditions hold: w max ( I ) ≤ W / 2 {\displaystyle w_{\max }({\mathcal {I}})\leq W/2} , h max ( I ) ≤ H / 2 {\displaystyle h_{\max }({\mathcal {I}})\leq H/2} , | I | > 1 {\displaystyle |{\mathcal {I}}|>1} , and when sorting the items by decreasing width there exists an index m {\displaystyle m} such that, when defining I ′ {\displaystyle {\mathcal {I'}}} as the first m {\displaystyle m} items, it holds that A R E A ( I ) − W H / 4 ≤ A R E A ( I ′ ) ≤ 3 W H / 8 {\displaystyle \mathrm {AREA} ({\mathcal {I}})-WH/4\leq \mathrm {AREA} ({\mathcal {I'}})\leq 3WH/8} as well as w ( i m + 1 ) ≤ W / 4 {\displaystyle w(i_{m+1})\leq W/4} .
Note that procedures 1 to 3 have a symmetric version when swapping the height and the width of the items and the target region.
Procedure 4 : It can be applied if the following conditions hold: w max ( I ) ≤ W / 2 {\displaystyle w_{\max }({\mathcal {I}})\leq W/2} , h max ( I ) ≤ H / 2 {\displaystyle h_{\max }({\mathcal {I}})\leq H/2} , and there exists an item i ∈ I {\displaystyle i\in {\mathcal {I}}} such that w ( i ) h ( i ) ≥ A R E A ( I ) − W H / 4 {\displaystyle w(i)h(i)\geq \mathrm {AREA} ({\mathcal {I}})-WH/4} .
This algorithm has the following properties: the height of the produced packing is at most 2 max { h max ( I ) , A R E A ( I ) / W } {\displaystyle 2\max\{h_{\max }(I),\mathrm {AREA} ({\mathcal {I}})/W\}} and hence at most 2 O P T ( I ) {\displaystyle 2OPT(I)} , i.e., Steinberg's algorithm is an (absolute) 2-approximation.
To improve upon the lower bound of 3 / 2 {\displaystyle 3/2} for polynomial-time algorithms, pseudo-polynomial time algorithms for the strip packing problem have been considered.
When considering this type of algorithms, all the sizes of the items and the strip are given as integrals. Furthermore, the width of the strip W {\displaystyle W} is allowed to appear polynomially in the running time.
Note that this is no longer considered a polynomial running time since, in the given instance, the width of the strip only needs an encoding size of log ( W ) {\displaystyle \log(W)} .
The pseudo-polynomial time algorithms that have been developed mostly use the same approach. It is shown that each optimal solution can be simplified and transformed into one that has one of a constant number of structures. The algorithm then iterates over all these structures and places the items inside using linear and dynamic programming. The best ratio accomplished so far is ( 5 / 4 + ε ) O P T ( I ) {\displaystyle (5/4+\varepsilon )OPT(I)} , [ 20 ] while there cannot be a pseudo-polynomial time algorithm with a ratio better than 5 / 4 {\displaystyle 5/4} unless P = N P {\displaystyle P=NP} . [ 5 ]
In the online variant of strip packing, the items arrive over time. When an item arrives, it has to be placed immediately before the next item is known. There are two types of online algorithms that have been considered. In the first variant, it is not allowed to alter the packing once an item is placed. In the second, items may be repacked when another item arrives. This variant is called the migration model.
The quality of an online algorithm is measured by the (absolute) competitive ratio
s u p I A ( I ) / O P T ( I ) {\displaystyle \mathrm {sup} _{I}A(I)/OPT(I)} ,
where A ( I ) {\displaystyle A(I)} corresponds to the solution generated by the online algorithm and O P T ( I ) {\displaystyle OPT(I)} corresponds to the size of the optimal solution.
In addition to the absolute competitive ratio, the asymptotic competitive ratio of online algorithms has been studied. For instances I {\displaystyle I} with h max ( I ) ≤ 1 {\displaystyle h_{\max }(I)\leq 1} it is defined as
lim s u p O P T ( I ) → ∞ A ( I ) / O P T ( I ) {\displaystyle \lim \mathrm {sup} _{OPT(I)\rightarrow \infty }A(I)/OPT(I)} .
Note that all the instances can be scaled such that h max ( I ) ≤ 1 {\displaystyle h_{\max }(I)\leq 1} .
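To illustrate the online model (this is a simple shelf-style heuristic, not the Super Harmonic framework discussed below), the following sketch places every arriving item immediately and never repacks it; the parameter r and all names are illustrative assumptions:

```python
import math

def online_shelf_packing(item_stream, W=1.0, r=0.5):
    """Illustrative shelf-style online heuristic: every arriving item is rounded up
    to a shelf class whose height is a power of 1/r, placed immediately on the open
    shelf of that class, and never moved again."""
    open_shelves = {}   # shelf class k -> [bottom y, used width]
    top = 0.0           # current top of the packing
    positions = []
    for item in item_stream:
        k = math.ceil(math.log(item.height, 1 / r))  # smallest k with (1/r)**k >= height
        shelf_height = (1 / r) ** k
        shelf = open_shelves.get(k)
        if shelf is None or shelf[1] + item.width > W:
            shelf = [top, 0.0]        # open a fresh shelf of this class at the current top
            open_shelves[k] = shelf
            top += shelf_height
        positions.append((item, shelf[1], shelf[0]))
        shelf[1] += item.width
    return positions, top
```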
The framework of Han et al. [ 29 ] is applicable in the online setting if the online bin packing algorithm belongs to the class Super Harmonic. Thus, Seiden's online bin packing algorithm Harmonic++ [ 30 ] implies an algorithm for online strip packing with asymptotic ratio 1.58889. | https://en.wikipedia.org/wiki/Strip_packing_problem