In information theory and communication, Slepian–Wolf coding, also known as the Slepian–Wolf bound, is a result in distributed source coding discovered by David Slepian and Jack Wolf in 1973. It concerns the theoretical limits of losslessly compressing two correlated sources. [1]
Distributed coding is the coding of two (in this case) or more dependent sources with separate encoders and a joint decoder. Given two statistically dependent, independent and identically distributed finite-alphabet random sequences X^n and Y^n, the Slepian–Wolf theorem gives a theoretical bound on the lossless coding rate for distributed coding of the two sources.
The bounds for the lossless coding rates are as follows: [1]
If the two sources are encoded and decoded independently, the lowest achievable rates for lossless compression are H(X) and H(Y) for X and Y respectively, where H(X) and H(Y) are the entropies of X and Y. However, with joint decoding, if a vanishing error probability for long sequences is accepted, the Slepian–Wolf theorem shows that a much better compression rate can be achieved. As long as the total rate of X and Y is larger than their joint entropy H(X,Y) and neither source is encoded at a rate smaller than its conditional entropy given the other (that is, R_X ≥ H(X|Y) and R_Y ≥ H(Y|X)), distributed coding can achieve an arbitrarily small error probability for long sequences. [1]
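The rate region can be illustrated numerically. The following Python sketch is an illustrative example, not taken from the original paper; the joint distribution and all function names are hypothetical. It computes the entropies of a pair of correlated binary sources and tests whether a rate pair is achievable under the Slepian–Wolf conditions:

```python
from math import log2

# Hypothetical joint distribution of two correlated binary sources:
# X is uniform and Y equals X flipped with probability 0.1.
joint = {(0, 0): 0.45, (0, 1): 0.05,
         (1, 0): 0.05, (1, 1): 0.45}

def H(probs):
    """Shannon entropy in bits, ignoring zero-probability entries."""
    return -sum(p * log2(p) for p in probs if p > 0)

H_XY = H(joint.values())                                   # joint entropy H(X, Y)
H_X = H([joint[(x, 0)] + joint[(x, 1)] for x in (0, 1)])   # marginal H(X)
H_Y = H([joint[(0, y)] + joint[(1, y)] for y in (0, 1)])   # marginal H(Y)

def in_slepian_wolf_region(Rx, Ry):
    """Achievability test for lossless distributed coding of (X, Y):
    Rx >= H(X|Y), Ry >= H(Y|X), and Rx + Ry >= H(X, Y)."""
    return (Rx >= H_XY - H_Y and
            Ry >= H_XY - H_X and
            Rx + Ry >= H_XY)

# Separate lossless coding at (H(X), H(Y)) is always admissible...
assert in_slepian_wolf_region(H_X, H_Y)
# ...but joint decoding also admits the corner point (H(X|Y), H(Y)),
# even though H(X|Y) is well below H(X).
assert in_slepian_wolf_region(H_XY - H_Y, H_Y)
```

For this source pair, H(X) = H(Y) = 1 bit while H(X|Y) ≈ 0.47 bits, so joint decoding allows one encoder to run well below its marginal entropy.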
A special case of distributed coding is compression with decoder side information, where source Y is available at the decoder but not accessible at the encoder. This can be treated as the condition that a rate R_Y = H(Y) has already been used to encode Y, while we intend to use H(X|Y) to encode X. In other words, two isolated sources can compress data as efficiently as if they were communicating with each other. The whole system operates in an asymmetric way (the compression rates for the two sources are asymmetric). [1]
This bound has been extended to the case of more than two correlated sources by Thomas M. Cover in 1975, [ 2 ] and similar results were obtained in 1976 by Aaron D. Wyner and Jacob Ziv with regard to lossy coding of joint Gaussian sources. [ 3 ]
| https://en.wikipedia.org/wiki/Slepian–Wolf_coding |
Slewing is the rotation of an object around an axis, usually the z axis . An example is a radar scanning 360 degrees by slewing around the z axis. This is also common terminology in astronomy. The process of rotating a telescope to observe a different region of the sky is referred to as slewing.
The term slewing is also found in motion control applications. Often the slew axis is combined with another axis to form a motion profile.
In crane terminology, slewing is the angular movement of a crane boom or crane jib in a horizontal plane.
The term is also used in the computer game Microsoft Flight Simulator, wherein the user presses a key and can then rotate and move the virtual aircraft along all three spatial axes.
In the modern-day use of CNC programs, slewing is a vital part of the process.
| https://en.wikipedia.org/wiki/Slewing |
Slice is a 2020 oil painting by the American artist Jasper Johns .
The work is a horizontal, mostly black oil painting that contains references to two outside sources: an anatomical diagram of a knee drawn by a Cameroonian émigré student Jéan-Marc Togodgue; and a map of the distribution of galaxies in a slice of the universe by Valérie de Lapparent, Margaret Geller , and John Huchra with graphics by Michael J. Kurtz . [ 1 ] [ 2 ] The painting was shown publicly for the first time in September 2021 at the Whitney Museum of American Art in Johns's double museum retrospective Mind/Mirror , held simultaneously at the Whitney and the Philadelphia Museum of Art . The painting is currently a promised gift to the Museum of Modern Art in New York. [ 1 ]
Much controversy has ensued over the fact that Johns initially used Togodgue's anatomical drawing of a knee without his knowledge. The artist informed the young student, who attended and played basketball at the Salisbury School near Johns's estate in Sharon, Connecticut, only after Slice was completed. Johns originally saw the drawing in his orthopedist's office; Togodgue had given the drawing to the same doctor as a thank-you for his own surgery. In August 2021, Johns and Togodgue reached an undisclosed settlement for a licensing agreement. [3]
Johns received the image of the galaxies from astrophysicist Margaret Geller prior to executing the painting. The title Slice is taken from the concept that the map represents a slice of the universe. [ 4 ] [ 3 ] [ 5 ] | https://en.wikipedia.org/wiki/Slice_(painting) |
The slice preparation or brain slice is a laboratory technique in electrophysiology that allows the study of neurons from various brain regions in isolation from the rest of the brain, in an ex vivo condition. Brain tissue is initially sliced with a tissue slicer, then immersed in artificial cerebrospinal fluid (aCSF) for stimulation and/or recording. [1] The technique allows for greater experimental control, from the elimination of the effects of the rest of the brain on the circuit of interest, through careful control of the physiological conditions by perfusion of substrates in the incubation fluid, to precise manipulation of neurotransmitter activity through perfusion of agonists and antagonists. However, the increase in control comes with a decrease in the ease with which the results can be applied to the whole neural system. [2]
Free-hand sectioning is a preparation technique in which a skilled operator slices the tissue with a razor blade. The blade is wetted with an isotonic solution before cutting to avoid smudging the tissue. This method has several drawbacks, such as limits on sample size and difficulty in observing progress. Modern microtome devices, such as Compresstome microtomes, are preferred for preparing slices because they have fewer limitations. [3]
When investigating mammalian CNS activity, slice preparation has several advantages and disadvantages when compared to in vivo study.
Slice preparation is both faster and cheaper than in vivo preparation, and does not require anaesthesia beyond the initial sacrifice. The removal of the brain tissue from the body removes the mechanical effects of heartbeat and respiration , which allows for extended intracellular recording . The physiological conditions of the sample, such as oxygen and carbon dioxide levels, or pH of the extracellular fluid can be carefully adjusted and maintained. Slice work under a microscope also allows for careful placement of the recording electrode, which would not be possible in the closed in vivo system. Removing the brain tissue means that there is no blood–brain barrier , which allows drugs, neurotransmitters or their modulators , or ions to be perfused throughout the neural tissue. Furthermore, the slice preparation method can also be used as a brain-injury model. [ 4 ] Finally, whilst the circuit isolated in a brain slice represents a simplified model of the circuit in situ , it maintains structural connections that are lost in cell cultures, or homogenised tissue .
Slice preparation also has some drawbacks. Most obviously, an isolated slice lacks the usual input and output connections present in the whole brain. Further, the slicing process may itself compromise the tissue. To minimize complications in the slicing process, a more sophisticated tissue slicer may be used, such as the Compresstome, a type of vibrating microtome that maximizes the amount of viable tissue. Additionally, slicing the brain can damage the top and bottom of the section, and beyond that, the process of decapitation and extraction of the brain before the slice is placed in solution may have effects on the tissue which are not yet understood. The slice preparation procedure itself induces a rapid and robust phenotype change in microglia, the consequences of which need to be taken into consideration when interpreting results. [4] During recording, the tissue also "ages", degrading at a faster rate than in the intact animal. Finally, because of the artificial composition of the bathing solution, the necessary compounds may not be present in their usual relative concentrations. [5] | https://en.wikipedia.org/wiki/Slice_preparation |
A slicer is an effects unit which is similar to a tremolo , vibrato , phaser , or autopan . It combines a modulation sequence with a noise gate or envelope filter to create a percussive and rhythmic effect like a helicopter, with rapid cutting out and coming in—on and off. [ 1 ] Most have variable speeds and depths, creating different sounds. It may be implemented through an effects unit or a VST . The Boss SL-20 is an example of a slicer effect in a guitar pedal.
| https://en.wikipedia.org/wiki/Slicer_(guitar_effect) |
Slicing the Truth: On the Computability Theoretic and Reverse Mathematical Analysis of Combinatorial Principles is a book on reverse mathematics in combinatorics , the study of the axioms needed to prove combinatorial theorems. It was written by Denis R. Hirschfeldt, based on a course given by Hirschfeldt at the National University of Singapore in 2010, [ 1 ] and published in 2014 by World Scientific , as volume 28 of the Lecture Notes Series of the Institute for Mathematical Sciences, National University of Singapore.
The book begins with five chapters that discuss the field of reverse mathematics , which has the goal of classifying mathematical theorems by the axiom schemes needed to prove them, and the big five subsystems of second-order arithmetic into which many theorems of mathematics have been classified. [ 2 ] [ 3 ] These chapters also review some of the tools needed in this study, including computability theory , forcing , and the low basis theorem . [ 4 ]
Chapter six, "the real heart of the book", [ 2 ] applies this method to an infinitary form of Ramsey's theorem : every edge coloring of a countably infinite complete graph or complete uniform hypergraph , using finitely many colors, contains a monochromatic infinite induced subgraph . The standard proof of this theorem uses the arithmetical comprehension axiom , falling into one of the big five subsystems, ACA 0 . However, as David Seetapun originally proved, the version of the theorem for graphs is weaker than ACA 0 , and it turns out to be inequivalent to any one of the big five subsystems. The version for uniform hypergraphs of fixed order greater than two is equivalent to ACA 0 , and the version of the theorem stated for all numbers of colors and all orders of hypergraphs simultaneously is stronger than ACA 0 . [ 2 ]
Chapter seven discusses conservative extensions of theories, in which the statements of a powerful theory (such as one of the forms of second-order arithmetic) that are both provable in that theory and expressible in a weaker theory (such as Peano arithmetic) are only the ones that are already provable in the weaker theory. Chapter eight summarizes the results so far in diagrammatic form. Chapter nine discusses ways to weaken Ramsey's theorem, [2] and the final chapter discusses stronger theorems in combinatorics, including the Dushnik–Miller theorem on self-embedding of infinite linear orderings, Kruskal's tree theorem, Laver's theorem on order embedding of countable linear orders, and Hindman's theorem on IP sets. [3] An appendix provides a proof of a theorem of Jiayi Liu, part of the collection of results showing that the graph Ramsey theorem does not fall into the big five subsystems. [1] [3] [4]
This is a technical monograph, requiring its readers to have some familiarity with computability theory and Ramsey theory. Prior knowledge of reverse mathematics is not required. [ 2 ] It is written in a somewhat informal style, and includes many exercises, making it usable as a graduate textbook or beginning work in reverse mathematics; [ 3 ] [ 4 ] reviewer François Dorais writes that it is an "excellent introduction to reverse mathematics and the computability theory of combinatorial principles" as well as a case study in the methods available for proving results in reverse mathematics. [ 3 ]
Reviewer William Gasarch complains about two missing topics, the work of Joe Mileti on the reverse mathematics of canonical versions of Ramsey's theorem, and the work of James Schmerl on the reverse mathematics of graph coloring . Nevertheless he recommends this book to anyone interested in reverse mathematics and Ramsey theory. [ 2 ] And reviewer Benedict Eastaugh calls it "a welcome addition ... providing a fresh and accessible look at a central aspect of contemporary reverse mathematical research." [ 4 ]
A "classic reference" in reverse mathematics is the book Subsystems of Second Order Arithmetic (2009) by Stephen Simpson ; [ 4 ] it is centered around the big five subsystems and contains many more examples of results equivalent in strength to one of these five. [ 2 ] Dorais suggests using the two books together as companion volumes. [ 3 ]
Reviewer Jeffry Hirst suggests Computability Theory by Rebecca Weber as a good source for the background needed to read this book. [ 1 ] | https://en.wikipedia.org/wiki/Slicing_the_Truth |
Slickline refers to a single-strand wire used to run a variety of tools down into the wellbore for several purposes. It is used during well drilling operations in the oil and gas industry. In general, it can also describe a niche of the industry that involves using a slickline truck or doing a slickline job. Slickline looks like a long, smooth, unbraided wire, often shiny and silver/chrome in appearance. It comes in varying lengths, according to the depth of the wells in the area where it is used (it can be ordered to specification), up to 35,000 feet. It is used to lower and raise downhole tools used in oil and gas well maintenance to the appropriate depth of the drilled well.
In use and appearance it is connected by a drum as it is spooled off the back of the slickline truck to the wireline sheave , a round wheel grooved and sized to accept a specified line and positioned to redirect the line to another sheave that will allow the slickline to enter the wellbore. [ 1 ] Slickline is used to lower downhole tools into an oil or gas well to perform a specified maintenance job downhole. Downhole refers to the area in the pipe below surface, the pipe being either the casing cemented in the hole by the drilling rig (which keeps the drilled hole from caving in and pressure from the various oil or gas zones downhole from feeding into one another) or the tubing, a smaller diameter pipe hung inside the casing.
Slickline is more commonly used in production tubing. The wireline operator monitors the slickline tension at surface via a weight indicator gauge, and the depth via a depth counter 'zeroed' at surface; the operator lowers the downhole tool to the proper depth, completes the job by manipulating the downhole tool mechanically, checks that it worked if possible, and pulls the tool back out by winding the slickline back onto the drum it was spooled from. The slickline drum is controlled by a hydraulic pump, which in turn is controlled by the slickline operator.
Slickline comes in different sizes and grades. Generally, the larger the size and the higher the grade, the higher the line tension that can be pulled before the line snaps at its weakest spot and causes a costly 'fishing' job. Downhole tools can become stuck because of malfunctions or 'downhole conditions', including sand, scale, salt, asphaltenes, and other well byproducts settling on or loosening off the pipe walls because of agitation, either by the downhole tools or by a change in downhole inflow; sometimes it is then necessary to pull hard on the tools to bring them back uphole to surface. If the tools are stuck and the operator pulls too hard, the line will snap or pull apart at its weakest spot, which is generally closer to surface: the further uphole the weak point in the line is, the more weight it has to support (the weight of the line below it).
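The reasoning above can be sketched as a simple calculation. The Python snippet below is an illustrative simplification with assumed numbers (the line weight per thousand feet, toolstring weight, and overpull figures are hypothetical); it ignores buoyancy, wellbore friction, and deviation:

```python
def line_tension_lbs(point_depth_ft, total_depth_ft,
                     line_wt_lbs_per_kft, toolstring_lbs, overpull_lbs=0.0):
    """Approximate tension at a point in the line: the weight of the
    line hanging below that point, plus the toolstring, plus any
    overpull applied against stuck tools."""
    line_below_ft = total_depth_ft - point_depth_ft
    return (line_below_ft / 1000.0 * line_wt_lbs_per_kft
            + toolstring_lbs + overpull_lbs)

# Hypothetical job: tools stuck at 10,000 ft, line weighing an assumed
# 25 lbs per 1,000 ft, a 150 lb toolstring, and 300 lbs of overpull.
at_surface = line_tension_lbs(0, 10_000, 25.0, 150.0, 300.0)
near_tools = line_tension_lbs(9_900, 10_000, 25.0, 150.0, 300.0)

# Tension is highest at surface, so a weak spot near surface carries
# the greatest load -- which is why the line tends to part there.
assert at_surface > near_tools
```

Under these assumed figures the surface tension is 700 lbs versus about 450 lbs just above the tools, showing why a weak spot high in the line is the likely parting point.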
Weak spots in the line can be caused by making the circle around the counter wheel, making a bend around a sheave, a kink in a line from normal use (when rigging up the equipment extra line must be pulled out from the truck to give enough slack when the pressure control lubricator is picked up – this leaves line coiled on the often rutted ground, and sometimes it snags and kinks the line).
When the slickline parts, this can create an expensive 'fishing' job. It is called fishing because you often have to try different 'fishing' tools until you get a 'bite', then you have to work the original tools downhole free, or cut off the slickline where they join the tools downhole so that you can pull the broken slickline back to surface and out of the way, in order to fish the stuck toolstring. Because of the downtime involved in 'fishing', meaning not being able to flow the oil/gas well, the client is losing money by lack of production and also the cost of the slickline unit to fish, and the cost of what is left in the hole if it is not fished out (in the oil/gas industry, if the cause of the fishing job was not the fault of the slickline company, the oil/gas company is usually responsible to pay for it, and it can be very expensive).
Slickline was originally called measuring line, because the line was flat like a tape measure and marked with depth increments so the operators would know how deep in the hole they were. This probably changed because the flat measuring line was not as strong as modern slickline, and separate depth counters were developed.
It is advantageous to keep the diameter of the wire as small as possible for the following reasons:
The disadvantage of a smaller-diameter slickline is its lower strength. Depth and the nature of the job (a tool that must be pulled hard or might become stuck) affect which slickline truck is used (different trucks specialize in different sizes of line).
The sizes of solid wireline in most common uses are: 0.092", 0.108", 0.125", 0.140", 0.150", and 0.160" in diameter, and are obtainable from the wire-drawing mills in one-piece standard lengths of 18,000, 20,000, 25,000 and 30,000 foot lengths. Other diameters and lengths are usually available on request from the suppliers, with the largest size currently available at 0.188".
Slickline tools operate with a mechanical action, controlled from surface in the wireline truck's operator compartment. Typically, this mechanical action is accomplished by the operation of jars. There are generally two types of jars: mechanical and hydraulic.
Mechanical jars look like a long, tubular piece of machined metal that slides longer or shorter approximately 75% to 90% of its total length. They give the effect of hammering on the downhole tools. The weight or hit of the 'hammer' depends on how much sinker bar is added above the jars. Generally, a slickline operator controls the downhole tools with taps and hits from the sinker bar via the mechanical jars, controlled at surface by lowering or raising the toolstring and monitoring weight, depth, and pressure. Mechanical jars for slickline can hit up or down the hole, making them a versatile form of jarring.
Hydraulic jars for slickline are generally meant to jar up only, because it is not feasible to lubricate in enough sinker bar to jar down on the downhole tools. Hydraulic jars work by the operator pulling up on the line, which puts an upward force on the top of the hydraulic jars. The bottom of the hydraulic jars is usually attached by a threaded connection to the mechanical jars, which are attached to the downhole tools. How hard the operator pulls on the hydraulic jars affects both how fast and how hard they hit. When the top is pulled on, the inner mandrel begins to slide upwards. It has a restriction that hydraulic fluid must bypass as the mandrel is pulled upwards, until it reaches an area of no restriction, allowing it to slide rapidly. The reason for the initial tighter restriction is to allow the operator to pull the line to the desired hitting range.
Generally, once the operator reaches that range on the weight indicator, he waits while the jars open to the less restricted point, whereupon the sinker bar travels upwards rapidly, providing an upwards hit on the downhole tools. The jars can then be 'reset' by lowering the line until the weight of the sinker bar closes, or pushes, the inner mandrel of the hydraulic jars back to the starting position. Because the hydraulic jars are designed to provide a wait time that allows the operator to get up to the desired line tension, they can provide a very effective upwards hit.
Mechanical and hydraulic jar hitting power is affected by the length of the jars (the longer the stroke, the faster they can travel before they stop), the mass of the weight above them (the greater the mass, the harder they will hit), and the tension of the line pulling on them.
Some completion components may be deployed and retrieved on slickline such as wireline retrievable safety valves , battery powered downhole gauges, perforating, placing explosively set bridge plugs, and placing or retrieving gas lift valves. Slickline can also be used for fishing, the process of trying to retrieve other equipment and wire, which has been dropped down the hole.
The most common applications for slickline are:
Braided line is generally used when the strength of slickline is insufficient for the task. Most commonly, this is for heavy fishing such as retrieving broken drill pipe.
This type of tool can be extended and closed rapidly to induce a mechanical shock to the tool string. This shock can induce certain components such as plugs to lock into place and then unlock for retrieving. Jars are commonly used to shear small brass or steel pins that are put in place to function certain down-hole tools at a certain moment. The operator can use the jars to shear the pins at a predetermined depth. Spang jars are manually operated by the wireline operator, who either lifts or lowers wire rapidly, requiring a great deal of expertise.
Stem essentially just serves to add weight to the toolstring. The weight may be necessary to overcome the pressure of the well. Some variations of stem, called roller stem, have wheels built into the tool to allow the tool string to glide more easily down moderately deviated wells. Stem also provides the hammering mass for the tool string, which in turn allows the jars to transmit the force given by the movement of the stem bars. Depending on well conditions, either extra-small or extra-large OD stems are used. The range can be from 0.75" to 3.50" OD, and stems normally come in 2 ft, 3 ft or 5 ft lengths. The connection to the rope socket or other tools can be a threaded connection or a QLS (quick-connect) system.
These are tools designed for fishing other wireline components which have been dropped or placed in the well down hole. All wireline tools are designed with 'fishing necks' on their top side, intended to be easily grabbed by pulling tools with a matching 'ID' to that of the 'OD' of the fishing neck. Pulling tools are also used for retrieving seated components such as plug prongs. Almost all pulling tools are equipped with a safety feature (shear pin) so they may release a stuck tool and allow the tool string to be brought to the surface for changes in components (hydraulic jars for example).
A gauge cutter is a tool with a round, open-ended bottom which is milled to an accurate size. Large openings above the bottom of the tool allow for fluid bypass while running in the hole.
Most often a gauge ring will be the first tool run on a slickline operation. A gauge ring that is just undersized will allow the operator to ensure clear tubing down to the deepest projected working depth; for example 2 7/8" tubing containing 2.313" profiles would call for a gauge ring between 2.25" and 2.30".
A gauge ring can also be used to remove light paraffin that may have built up in the tubing. Often a variety of different-sized gauges and/or scratchers will be run to remove paraffin little by little. A gauge cutter can also be used for drift runs.
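The sizing rule described above can be expressed as a small calculation. In the sketch below, the clearance range is an assumption chosen to reproduce the example in the text, not an industry standard, and the function name is hypothetical:

```python
def gauge_ring_od_range(min_restriction_id_in,
                        min_clearance_in=0.013, max_clearance_in=0.063):
    """Return the (low, high) gauge ring OD that runs just under the
    smallest restriction in the tubing.  The clearance values are
    illustrative assumptions, not a standard."""
    return (min_restriction_id_in - max_clearance_in,
            min_restriction_id_in - min_clearance_in)

# Example from the text: 2.313" profiles in 2 7/8" tubing.
lo, hi = gauge_ring_od_range(2.313)
# lo, hi come out near 2.25" and 2.30", matching the range quoted above.
```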
If an obstruction is found downhole, a lead impression block can be run to help determine its nature. The LIB has a malleable lead base in which the obstruction can leave an impression when they meet.
The LIB is sometimes called a 'wireline camera' because of its function of recording the shape of any object downhole.
They are also sometimes called "confusion" blocks because they give only a two-dimensional view of the downhole object, making it hard for an inexperienced person to determine what three-dimensional object is in the hole.
Bailers are downhole tools that are generally long and tubular shaped, and are used for both getting samples of downhole solids (sand, scale, asphaltenes, rust, rubber and debris from well servicing operations) and for 'bailing' the unwanted downhole solids from the well. Bailers are attached either via threaded connection or releasable downhole tool to the wireline toolstring, and are manipulated from surface by the wireline operator. Bailers usually have an interchangeable bottom (the shoe) which also houses a check to keep the solids from falling or washing out of the bottom.
A sample bailer is a hollow tube (the barrel), generally around a meter long and around 40 mm in diameter, with a 'ball check' – a form of non-return valve – on the bottom and an opening at the top. This tool is beaten downwards into the obstruction using the mechanical jars and the weight of the wireline toolstring above. Generally, after a number of 'hits', hopefully allowing a usable sample of solids to enter the barrel, the tool is pulled upwards, and the sample in the barrel should settle the ball check onto its seat, which will keep the solids in the barrel during the return trip to surface, where the sample can be inspected to determine the composition of the obstruction. The success of this depends on how readily the solid was accepted into the barrel, and on whether the ball check was properly seated on the return trip to surface. If the ball check is not seated (sometimes a large, hard piece of solid will sit between the ball and seat), downhole fluids tend to wash the sample out of the bottom of the sample bailer, leaving the inspectors at surface uncertain whether the tool actually collected a sample. The procedure may have to be repeated until a sample is successfully retrieved.
A stroke bailer functions like a 'Chinese water pump', and is used to collect unwanted solids from the wellbore. A stroke bailer is long and tubular looking, with a smaller rod extending from the top and a hole in the bottom, and is generally around 7 meters long, though the length depends on how much barrel section is added to the bailer. The barrel 'free floats' on the stroke rod, which is attached to the wireline toolstring. The tool is usually 'spudded' into the downhole solids, then the wireline toolstring is pulled upwards, which in turn pulls the stroke rod up through the barrel. Ideally, this draws the downhole solids in through the bottom 'shoe' of the tool, past the check and into the barrel for collection. The tool is usually stroked either a predetermined number of times or until it appears the tool is no longer stroking, which can mean either that it is full or that it is stuck.
A hydrostatic bailer functions like a vacuum, and is used to suck up unwanted solids from the wellbore. A hydrostatic bailer is generally around 2.5 meters long and tubular looking, with two 10 mm holes on opposing sides at the top of the tool, and a hole in the bottom. A hydrostatic bailer uses a pinned plug with o-ring seals at the bottom, and a plug at the top, to maintain the surface pressure at which it was assembled (nominally around 100 kPa) all the way to the bottom of the well, whereupon it is spudded into the downhole solids, which ideally pushes the shoe into the bottom plug and shears the pin on the bottom plug. An oil or gas well's pressure downhole is always more than atmospheric pressure at surface, due to the formation pressure and a combination of depth and the hydrostatic weight of wellbore fluids. Sometimes fluid will be added to the wellbore to assist in bailing by bringing up the pressure and also lubricating the downhole solids. Because the pressure inside the bailer is much less than the downhole wellbore pressure, any solids that are loose enough are 'sucked up' by the vacuum formed when the bottom plug is sheared and travels upwards through the barrel, followed by the solids. At the same time, due to the change from negative to positive pressure, the top plug pops out (and is caught by the top part of the tool), and excess flow is directed out through the 10 mm ports on the sides of the top of the tool. These ports allow the barrel to fill more readily. The bailer is then returned to surface, where it is taken apart, the solids are emptied, and it is cleaned and serviced with new o-ring seals. Care must be taken when disassembling at surface, as the tool is potentially charged with the downhole pressure (possibly many tens of thousands of kPa) and may 'blow apart' when being unthreaded if not bled off first.
These tools are primarily used to 'set' plugs into locking profiles (nipples) located in the tubing; however, the term 'running tool' refers to a downhole tool attached to the wireline toolstring that is used to 'run' another tool that is meant to be left downhole when the toolstring returns to surface. In general, a running tool is attached to a downhole 'locking tool' that locates and locks into the selected downhole profile (nipple). The 'locking tool', or 'lock' for short, can be attached via threaded connection to the top of a variety of different tools, including but not limited to, downhole chokes (flow rate restrictors sized according to a pre-determined calculation), one-way check valves (TKX style plugs), instrument hangers, and most commonly, tubing plugs. The lock is fitted onto the running tool and attached using shear pins made of brass or steel. When the target profile is reached the lock can be set by seating the lock into the profile using mechanical jars (spangs) until the locking keys have locked the lock into the profile, whereupon the operator usually 'pull tests' the lock to give an indication it is properly 'set', then shears off the shear pins with his mechanical or hydraulic jars to allow the 'toolstring' to return to surface. There are many different types of running tools; some are mechanically complex and able to be made 'selective' in order to pass through profiles in order to reach one of the same size but a different depth; some are relatively simple, such as an 'F' collarstop running tool, which is essentially a metal rod which fits inside the collarstop downhole tool which is pinned in place. | https://en.wikipedia.org/wiki/Slickline |
A slide-tape work (often slide-tape presentation ) is an audiovisual work consisting of a slide show using a filmstrip machine with synchronised accompanying audio, traditionally audio tape . These have frequently been used for education and for tourism, but also include artistic uses.
The slide-tape presentation originated in and is particularly associated with the mid-to-late 20th century, when magnetic tape and slide projectors were common, but digital audio (such as compact discs ) and digital video projectors were not. Even with the advent of video tapes in the 1970s and 1980s, producing videos was significantly more difficult than producing a slide show, and the image quality of videos was significantly lower than that of slides, resulting in slide-tape works continuing to be used into the 1980s and 1990s.
Analog slide-tape works have declined in use in the developed world, though digital ones continue to be produced, and can now be created with photo slideshow software . [ 1 ] Analog use continues in less developed countries.
| https://en.wikipedia.org/wiki/Slide-tape |
Sliding is a type of motion between two surfaces in contact. This can be contrasted to rolling motion. Both types of motion may occur in bearings .
The relative motion or tendency toward such motion between two surfaces is resisted by friction . This means that the force of friction always acts on an object in the direction opposite to its velocity (relative to the surface it's sliding on). Friction may damage or " wear " the surfaces in contact. However, wear can be reduced by lubrication . The science and technology of friction, lubrication, and wear is known as tribology .
Sliding may occur between two objects of arbitrary shape, whereas rolling friction is the frictional force associated with the rotational movement of a disc-like or other circular object along a surface. Generally, the frictional force of rolling friction is less than that associated with sliding kinetic friction . [ 1 ] Typical values for the coefficient of rolling friction are less than those of sliding friction. [ 2 ] Correspondingly, sliding friction typically produces greater sound and thermal by-products. One of the most common examples of sliding friction is the movement of braking motor vehicle tires on a roadway , a process which generates considerable heat and sound , and is typically taken into account in assessing the magnitude of roadway noise pollution . [ 3 ]
Sliding friction (also called kinetic friction) is a contact force that resists the sliding motion of two objects or an object and a surface. Sliding friction is almost always less than static friction; this is why it is easier to keep an object moving than to set it in motion from rest.
F k = μ k ⋅ N {\displaystyle F_{k}=\mu _{k}\cdot N}
Where F k is the force of kinetic friction, μ k is the coefficient of kinetic friction, and N is the normal force .
The motion of sliding friction can be modelled (in simple systems of motion) by Newton's second law
∑ F = m a {\displaystyle \sum F=ma}
F E − F k = m a {\displaystyle F_{E}-F_{k}=ma}
Where F E {\displaystyle F_{E}} is the external force.
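As a minimal numerical sketch of these two equations (all values below are assumed for illustration, not taken from the article):

```python
MU_K = 0.3          # assumed coefficient of kinetic friction (dimensionless)
MASS = 2.0          # kg, assumed
G = 9.81            # m/s^2, standard gravity
F_EXTERNAL = 10.0   # N, assumed external force along the surface

# Kinetic friction opposes the motion: F_k = mu_k * N,
# with N = m * g on a horizontal surface.
normal_force = MASS * G
friction_force = MU_K * normal_force

# Newton's second law along the direction of motion: F_E - F_k = m * a
acceleration = (F_EXTERNAL - friction_force) / MASS

print(f"F_k = {friction_force:.2f} N, a = {acceleration:.2f} m/s^2")
```

If the external force is smaller than the friction force, the sign of the result shows the block decelerating rather than speeding up.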
A common problem presented in introductory physics classes is a block subject to friction as it slides up or down an inclined plane . This is shown in the free body diagram to the right.
The component of the force of gravity in the direction of the incline is given by: [ 4 ]
F g = m g sin θ {\displaystyle F_{g}=mg\sin {\theta }}
The normal force (perpendicular to the surface) is given by:
N = m g cos θ {\displaystyle N=mg\cos {\theta }}
Therefore, since the force of friction opposes the motion of the block,
F k = μ k ⋅ m g cos θ {\displaystyle F_{k}=\mu _{k}\cdot mg\cos {\theta }}
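For the inclined-plane case, the three forces can be computed directly. This is a rough sketch; the mass, angle, and coefficient are assumed values for illustration only:

```python
import math

MU_K = 0.4                 # assumed coefficient of kinetic friction
MASS = 5.0                 # kg, assumed
G = 9.81                   # m/s^2, standard gravity
THETA = math.radians(30)   # assumed incline angle

# Component of gravity along the incline: F_g = m g sin(theta)
f_parallel = MASS * G * math.sin(THETA)
# Normal force perpendicular to the surface: N = m g cos(theta)
normal = MASS * G * math.cos(THETA)
# Kinetic friction opposing the block's motion: F_k = mu_k * m g cos(theta)
f_kinetic = MU_K * normal

print(f"F_g = {f_parallel:.1f} N, N = {normal:.1f} N, F_k = {f_kinetic:.1f} N")
```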
To find the coefficient of kinetic friction on an inclined plane, one must find the point at which the force of kinetic friction equals the component of gravity parallel to the plane; this occurs when the block is moving at a constant velocity at some angle θ {\displaystyle \theta }
∑ F = m a = 0 {\displaystyle \sum F=ma=0}
F k = F g {\displaystyle F_{k}=F_{g}} or μ k m g cos θ = m g sin θ {\displaystyle \mu _{k}mg\cos {\theta }=mg\sin {\theta }}
Here it is found that:
μ k = m g sin θ m g cos θ = tan θ {\displaystyle \mu _{k}={\frac {mg\sin {\theta }}{mg\cos {\theta }}}=\tan {\theta }} where θ {\displaystyle \theta } is the angle at which the block begins moving at a constant velocity [ 5 ] | https://en.wikipedia.org/wiki/Sliding_(motion) |
A sliding T bevel , also known as a bevel gauge or false square , [ 1 ] is an adjustable gauge for setting and transferring angles. Unlike the square , which is fixed and can only mark a 90° angle , the sliding T bevel can be set to any angle and transfer it to another piece.
The bevel gauge is composed of two elements connected with a thumbscrew or wing nut , which allows the blade to pivot and be locked at any angle . The handle is usually made of wood or plastic and the blade of metal. The bevel can be used to duplicate an existing angle, or set to a desired angle by using it with any number of other measuring tools (such as a protractor , or framing square ). [ 2 ]
| https://en.wikipedia.org/wiki/Sliding_T_bevel |
The sliding criterion (discontinuity) is a tool to estimate easily the shear strength properties of a discontinuity in a rock mass based on visual and tactile (i.e. by feeling) characterization of the discontinuity . [ 1 ] [ 2 ] [ 3 ] [ 4 ] The shear strength of a discontinuity is important in, for example, tunnel , foundation , or slope engineering, but also stability of natural slopes is often governed by the shear strength along discontinuities.
The sliding-angle is based on the ease with which a block of rock material can move over a discontinuity and hence is comparable to the tilt-angle as determined with the tilt test , but on a larger scale. The sliding criterion has been developed for stresses that would occur in slopes between 2 and 25 metres (6.6 and 82.0 ft), hence, in the order of maximum 0.6 megapascals (87 psi). The sliding criterion is based on back analyses of slope instability and earlier work of ISRM [ 5 ] and Laubscher. [ 6 ] The sliding criterion is part of the Slope Stability Probability Classification (SSPC) [ 3 ] system for slope stability analyses.
The sliding-angle is calculated as follows:
sliding-angle = Rl ∗ Rs ∗ Im ∗ Ka / 0.0113 {\displaystyle {\text{sliding-angle}}={\frac {Rl*Rs*Im*Ka}{0.0113}}}
(The values for the parameters are listed in table 1 and explained below)
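As a minimal sketch, the formula can be evaluated directly. Note that the factor values used below are hypothetical placeholders, since table 1 is not reproduced in the text:

```python
def sliding_angle(rl, rs, im, ka):
    """Sliding-angle (in degrees) from the four SSPC discontinuity factors:
    roughness large scale (Rl), roughness small scale (Rs),
    infill material (Im), and karst (Ka)."""
    return (rl * rs * im * ka) / 0.0113

# Hypothetical factors (illustrative values only, not taken from table 1):
angle = sliding_angle(rl=0.75, rs=0.65, im=1.0, ka=1.0)
print(f"sliding-angle = {angle:.1f} degrees")
```

Smaller factor values (rougher handling of non-fitting walls, softening infill, karst features) reduce the product and hence the estimated sliding-angle.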
The roughness large scale ( Rl ) is based on visual comparison of the trace (with a length of about 1 m) or surface (with an area of about 1 x 1 m 2 ) of a discontinuity with the example graphs in figure 1. This results in a descriptive term: wavy, slightly wavy, curved, slightly curved , or straight . The corresponding factor for Rl is listed in table 1. The roughness large scale (Rl) contributes to the friction along the discontinuity only when the walls on both sides of the discontinuity are fitting, i.e. the asperities on both discontinuity walls match. If the discontinuity is non-fitting, the factor Rl = 0.75.
The roughness small scale ( Rs ) is established visually and tactile (by feeling).
The first term, rough , smooth , or polished , is established by feeling the surface of the discontinuity : rough hurts when fingers are moved over the surface with some (little) force, smooth still offers some resistance to the fingers, while polished feels about as smooth as the surface of glass. The second term is established visually. The trace (with a length of about 0.2 m) or surface (with an area of about 0.2 x 0.2 m 2 ) of a discontinuity is compared with the example graphs in figure 2; this gives stepped , undulating , or planar . The tactile and visual terms together give a combined term, and the corresponding factor is listed in table 1. The visual part of the roughness small scale (Rs) contributes to the friction along the discontinuity only if the walls on both sides of the discontinuity are fitting , i.e. the asperities on both discontinuity walls match. If the discontinuity is non-fitting, the visual part of the roughness small scale (Rs) should be taken as planar for the calculation of the sliding-angle , and hence the roughness small scale (Rs) can only be rough planar, smooth planar, or polished planar .
Infill material in a discontinuity often has a marked influence on the shear characteristics. The different options for infill material are listed in table 1; a short explanation of each option follows below.
A cemented discontinuity or a discontinuity with cemented infill has a higher shear strength than a non-cemented discontinuity if the cement or cemented infill is bonded to both discontinuity walls. Note that if the cement or the cement bond is stronger than the surrounding intact rock, the discontinuity ceases to be a mechanical plane of weakness, and the sliding-angle has no validity.
No infill describes a discontinuity that may have coated walls but no other infill.
Non-softening infill material is material that does not change its shear characteristics under the influence of water or of shear displacement. The material may break but no greasing effect will occur. The material particles can roll, but this is considered to be of minor influence because, after small displacements, the material particles generally will still be very angular. This is further sub-divided into coarse , medium , and fine according to the size of the grains in the infill material or the size of the grains or minerals in the discontinuity wall. The larger of the two should be used for the description. The thickness of the infill can be very thin, sometimes not more than a dust coating.
Softening infill material will, under the influence of water or shear displacement, attain a lower shear strength and act as a lubricating agent.
This is further sub-divided into coarse , medium , and fine according to the size of the grains in the infill material or the size of the grains or minerals in the discontinuity wall. The larger of the two should be used for the description. The thickness of the infill can be very thin, sometimes not more than a dust coating.
Gouge infill means a relatively thick and continuous layer of infill material, mainly consisting of clay but may contain rock fragments. The clay material surrounds the rock fragments in the clay completely or partly, so that these are not in contact with both discontinuity walls. A sub-division is made between less thick and thicker than the amplitude of the roughness of the discontinuity walls. If the thickness is less than the amplitude of the roughness, the shear strength will be influenced by the wall material and the discontinuity walls will be in contact after a certain displacement. If the infill is thicker than the amplitude, the friction of the discontinuity is fully governed by the infill.
Very weak and not compacted infill in discontinuities flows out of the discontinuities under its own weight or as a consequence of a very small trigger force (such as water pressure, vibrations due to traffic or the excavation process, etc.).
The presence of solution ( karst ) features along the discontinuity. | https://en.wikipedia.org/wiki/Sliding_criterion_(geotechnical_engineering) |
The sliding filament theory explains the mechanism of muscle contraction based on muscle proteins that slide past each other to generate movement. [ 1 ] According to the sliding filament theory, the myosin ( thick filaments ) of muscle fibers slide past the actin ( thin filaments ) during muscle contraction, while the two groups of filaments remain at relatively constant length.
The theory was independently introduced in 1954 by two research teams, one consisting of Andrew Huxley and Rolf Niedergerke from the University of Cambridge , and the other consisting of Hugh Huxley and Jean Hanson from the Massachusetts Institute of Technology . [ 2 ] [ 3 ] It was originally conceived by Hugh Huxley in 1953. Andrew Huxley and Niedergerke introduced it as a "very attractive" hypothesis. [ 4 ]
Before the 1950s there were several competing theories on muscle contraction, including electrical attraction, protein folding, and protein modification. [ 5 ] The novel theory directly introduced a new concept called cross-bridge theory (classically swinging cross-bridge, now mostly referred to as cross-bridge cycle ) which explains the molecular mechanism of sliding filament. Cross-bridge theory states that actin and myosin form a protein complex (classically called actomyosin ) by attachment of myosin head on the actin filament, thereby forming a sort of cross-bridge between the two filaments. The sliding filament theory is a widely accepted explanation of the mechanism that underlies muscle contraction. [ 6 ]
The first muscle protein discovered was myosin by a German scientist Willy Kühne , who extracted and named it in 1864. [ 7 ] In 1939 a Russian husband and wife team Vladimir Alexandrovich Engelhardt and Militsa Nikolaevna Lyubimova discovered that myosin had an enzymatic (called ATPase ) property that can break down ATP to release energy. [ 8 ] Albert Szent-Györgyi , a Hungarian physiologist, turned his focus on muscle physiology after winning the Nobel Prize in Physiology or Medicine in 1937 for his works on vitamin C and fumaric acid . He demonstrated in 1942 that ATP was the source of energy for muscle contraction. He actually observed that muscle fibres containing myosin B shortened in the presence of ATP, but not with myosin A, the experience which he later described as "perhaps the most thrilling moment of my life." [ 9 ] With Brunó Ferenc Straub , he soon found that myosin B was associated with another protein, which they called actin, while myosin A was not. Straub purified actin in 1942, and Szent-Györgyi purified myosin A in 1943. It became apparent that myosin B was a combination of myosin A and actin, so that myosin A retained the original name, whereas they renamed myosin B as actomyosin. By the end of the 1940s Szent-Györgyi's team had postulated with evidence that contraction of actomyosin was equivalent to muscle contraction as a whole. [ 10 ] But the notion was generally opposed, even from the likes of Nobel laureates such as Otto Fritz Meyerhof and Archibald Hill , who adhered to the prevailing dogma that myosin was a structural protein and not a functional enzyme. [ 3 ] However, in one of his last contributions to muscle research, Szent-Györgyi demonstrated that actomyosin driven by ATP was the basic principle of muscle contraction. [ 11 ]
By the time Hugh Huxley earned his PhD from the University of Cambridge in 1952 on his research on the structure of muscle, Szent-Györgyi had turned his career into cancer research. [ 12 ] Huxley went to Francis O. Schmitt 's laboratory at the Massachusetts Institute of Technology with a post-doctoral fellowship in September 1952, where he was joined by another English post-doctoral fellow Jean Hanson in January 1953. Hanson had a PhD in muscle structure from King's College, London in 1951. Huxley had used X-ray diffraction to speculate that muscle proteins, particularly myosin, form structured filaments giving rise to sarcomere (a segment of muscle fibre). Their main aim was to use electron microscopy to study the details of those filaments as never done before. They soon discovered and confirmed the filament nature of muscle proteins. Myosin and actin form overlapping filaments, myosin filaments mainly constituting the A band (the dark region of a sarcomere), while actin filaments traverse both the A and I (light region) bands. [ 13 ] Huxley was the first to suggest the sliding filament theory in 1953, stating:
"… [I]f it is postulated that stretching of the muscle takes place, not by an extension of the filaments, but by a process in which the two sets of filaments slide [emphasis added] past each other; extensibility will then be inhibited if the myosin and actin are linked together." [ 14 ]
Later, in 1996, Huxley regretted that he should have included Hanson in the formulation of his theory because it was based on their collaborative work. [ 15 ]
Andrew Huxley , whom Alan Hodgkin described as a "wizard with scientific apparatus", had in 1949 discovered the mechanism of nerve impulse ( action potential ) transmission (for which he and Hodgkin later won the Nobel Prize in Physiology or Medicine in 1963) using his own design of voltage clamp , and was looking for an associate who could properly dissect out muscle fibres. [ 16 ] Upon the recommendation of a close friend, Robert Stämpfli, the German physician Rolf Niedergerke joined him at the University of Cambridge in 1952. By then he realised that the conventionally used phase-contrast microscope was not suitable for the fine structures of muscle fibres, and thus developed his own interference microscope . Between March 1953 and January 1954 they carried out their research. [ 17 ] Huxley recollected that at the time the only person who ever thought of sliding filaments before 1953 was Dorothy Hodgkin (later winner of the 1964 Nobel Prize in Chemistry ). [ 18 ] He spent the summer of 1953 at the Marine Biological Laboratory at Woods Hole, Massachusetts, to use the electron microscope there. There he met Hugh Huxley and Hanson, with whom he shared data and information on their work. They parted with an agreement that they would keep in touch, and when their aim was achieved, they would publish together, if they ever "reached similar conclusions". [ 2 ]
The sliding filament theory was born from two consecutive papers published on the 22 May 1954 issue of Nature under the common theme "Structural Changes in Muscle During Contraction". Though their conclusions were fundamentally similar, their underlying experimental data and propositions were different.
The first paper, written by Andrew Huxley and Rolf Niedergerke, is titled "Interference microscopy of living muscle fibres". It was based on their study of frog muscle using interference microscope, which Andrew Huxley developed for the purpose. According to them: [ 4 ]
The second paper, by Hugh Huxley and Jean Hanson, is titled "Changes in the cross-striations of muscle during contraction and stretch and their structural interpretation". It is more elaborate and was based on their study of rabbit muscle using phase contrast and electron microscopes. According to them: [ 19 ]
In spite of strong evidence, the sliding filament theory did not gain any support for several years to come. [ 20 ] Szent-Györgyi himself refused to believe that myosin filaments were confined to the thick filament (A band). [ 15 ] F.O. Schmitt, whose electron microscope provided the best data, also remained sceptical of the original images. [ 21 ] There were also immediate arguments as to the organisation of the filaments, whether the two sets (myosin and actin) of filaments were merely overlapping or continuous. It was only with the new electron microscope that Hugh Huxley confirmed the overlapping nature of the filaments in 1957. [ 22 ] It was also from this publication that the existence of the actin-myosin linkage (now called cross-bridge) was clearly shown. But he took another five years to provide evidence that the cross-bridge was a dynamic interaction between actin and myosin filaments. [ 23 ] He obtained the actual molecular arrangement of the filaments using X-ray crystallography by teaming up with Kenneth Holmes , who was trained by Rosalind Franklin , in 1965. [ 24 ] It was only after a conference in 1972 at Cold Spring Harbor Laboratory , where the theory and its evidence were deliberated, that it became generally accepted. [ 25 ] At the conference, as Koscak Maruyama later recalled, Hanson had to answer the criticisms by shouting, "I know I cannot explain the mechanism yet, but the sliding is a fact." [ 26 ] The factual proofs came in the early 1980s, when the actual sliding motion was demonstrated by different researchers using novel sophisticated tools. [ 27 ] [ 28 ] [ 29 ]
With substantial evidence, Hugh Huxley formally proposed the mechanism for sliding filament which is variously called swinging cross-bridge model, cross-bridge theory or cross-bridge model. [ 3 ] [ 30 ] (He himself preferred the name "swinging crossbridge model", because, as he recalled, "it [the discovery] was, after all, the 1960s". [ 2 ] ) He published his theory in the 20 June 1969 issue of Science under the title "The Mechanism of Muscular Contraction". [ 31 ] According to his theory, filament sliding occurs by cyclic attachment and detachment of myosin on actin filaments. Contraction occurs when the myosin pulls the actin filament towards the centre of the A band, detaches from actin and creates a force (stroke) to bind to the next actin molecule. [ 32 ] This idea was subsequently proven in detail, and is more appropriately known as the cross-bridge cycle . [ 33 ] | https://en.wikipedia.org/wiki/Sliding_filament_theory |
In lattice theory , a mathematical discipline, a finite lattice is slim if no three join-irreducible elements form an antichain . [ 1 ] Every slim lattice is planar . A finite planar semimodular lattice is slim if and only if it contains no cover-preserving diamond sublattice M 3 (this is the original definition of a slim lattice due to George Grätzer and Edward Knapp). [ 2 ]
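As an illustrative sketch (not part of the source; the helper names are hypothetical), the definition can be checked for a small finite lattice given as an explicit order relation. In a finite lattice, the join-irreducible elements are exactly the non-bottom elements with a single lower cover:

```python
from itertools import combinations

def join_irreducibles(elements, leq):
    """Non-bottom elements with exactly one lower cover; in a finite
    lattice these are exactly the join-irreducible elements."""
    irr = []
    for x in elements:
        below = [y for y in elements if y != x and leq(y, x)]
        if not below:
            continue  # the bottom element is not join-irreducible
        # Lower covers of x: maximal elements strictly below x.
        covers = [y for y in below
                  if not any(z != y and leq(y, z) for z in below)]
        if len(covers) == 1:
            irr.append(x)
    return irr

def is_slim(elements, leq):
    """Slim: no three join-irreducible elements form an antichain."""
    def incomparable(a, b):
        return not leq(a, b) and not leq(b, a)
    irr = join_irreducibles(elements, leq)
    return not any(incomparable(a, b) and incomparable(a, c)
                   and incomparable(b, c)
                   for a, b, c in combinations(irr, 3))

# The diamond M3: bottom "0", three incomparable atoms a, b, c, top "1".
order = {("0", x) for x in "abc1"} | {(x, "1") for x in "abc"}
leq_m3 = lambda p, q: p == q or (p, q) in order
m3 = ["0", "a", "b", "c", "1"]
print(is_slim(m3, leq_m3))  # -> False
```

M3 fails because its three atoms are join-irreducible and pairwise incomparable, consistent with the Grätzer–Knapp characterization mentioned above; a chain, by contrast, is trivially slim since all its elements are comparable.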
| https://en.wikipedia.org/wiki/Slim_lattice |
A slime layer in bacteria is an easily removable (e.g. by centrifugation ), unorganized layer of extracellular material that surrounds bacteria cells. Specifically, this consists mostly of exopolysaccharides , glycoproteins , and glycolipids . [ 1 ] Therefore, the slime layer is considered as a subset of glycocalyx .
While slime layers and capsules are found most commonly in bacteria, these structures also exist in archaea, albeit rarely. [ 2 ] The following information about structure and function is transferable to those microorganisms as well.
Slime layers are amorphous and inconsistent in thickness, being produced in various quantities depending upon the cell type and environment. [ 3 ] These layers present themselves as strands hanging extracellularly, forming net-like structures between cells that are 1–4 μm apart. [ 4 ] Researchers suggested that a cell will slow formation of the slime layer after around 9 days of growth, perhaps due to slower metabolic activity. [ 4 ]
A bacterial capsule is similar, but is more rigid than the slime layer. Capsules are more organized and difficult to remove compared to their slime layer counterparts. [ 5 ] Another highly organized, but separate, structure is an S-layer . S-layers are structures that integrate themselves into the cell wall and are composed of glycoproteins; these layers can offer the cell rigidity and protection. [ 6 ] Because a slime layer is loose and flowing, it does not aid the cell in its rigidity.
While biofilms can be composed of slime-layer-producing bacteria, slime layers are typically not their main component. Rather, a biofilm is made up of an array of microorganisms that come together to form a cohesive biofilm. [ 7 ] However, homogeneous biofilms can also form. For example, the plaque that forms on the surfaces of teeth is a biofilm formed primarily by Streptococcus mutans , and it causes the slow breakdown of tooth enamel. [ 8 ] [ 9 ]
The function of the slime layer is to protect the bacteria cells from environmental dangers such as antibiotics and desiccation . [ 1 ] The slime layer allows bacteria to adhere to smooth surfaces such as prosthetic implants and catheters , as well as other smooth surfaces like petri-dishes. [ 10 ] [ 4 ] Researchers found that the cells adhered themselves to the culture vessel without additional appendages, relying on the extracellular material alone.
While consisting mostly of polysaccharides, a slime layer may be overproduced so that, in a time of famine, the cell can rely on the slime layer as extra food storage to survive. [ 8 ] In addition, a slime layer may be produced in ground-dwelling prokaryotes to prevent unnecessary drying due to annual temperature and humidity shifts. [ 8 ]
It may permit bacterial colonies to survive chemical sterilization with chlorine , iodine , and other chemicals, leaving autoclaving or flushing with boiling water as the only certain methods of decontamination .
Some bacteria have shown a protective response to attacks from the immune system by using their slime layers to absorb antibodies. [ 11 ] Additionally, some bacteria like Pseudomonas aeruginosa and Bacillus anthracis can produce biofilm structures that are effective against phagocyte attacks from the host immune system. [ 8 ] This type of biofilm formation increases their virulence factor as they are more likely to survive within a host's body, although this type of biofilm is typically associated with capsules. [ 12 ]
Because so many bacteria are increasing their resistance to antimicrobial agents such as antibiotics (products that inhibit cell growth or kill the cell outright), new research is emerging on drugs that reduce virulence factors in some bacteria. Anti-virulence drugs reduce the pathogenic properties of bacteria, allowing the host to attack the bacteria, or allowing antimicrobial agents to work. Staphylococcus aureus is a pathogenic bacterium that causes several human infections, with a plethora of virulence factors such as biofilm formation, quorum sensing , and exotoxins, to name a few. [ 13 ] Researchers examined myricetin (Myr) as a multi-anti-virulence agent against S. aureus and how it specifically impacts biofilm formation. After regular dosing, it was found that biofilm formation decreased and the number of adhered cells on the specified media decreased, without killing the cells. Myr is promising when surfaces are coated in the material: non-coated surfaces showed a thick biofilm formation with a large quantity of cellular adherence, while coated material showed minimal cell clusters that were weakly adhered. [ 13 ]
A problem with concrete structures is the damage they receive during weather shifts: because of concrete's porous nature, absorbed water can expand or contract it depending on the environment. This damage makes these structures susceptible to sulfate attack. Sulfate attack occurs when the sulfates in the concrete react with other salts formed from other sulfate sources and cause internal erosion of the concrete. Extra exposure to these sulfate (SO 4 ) ions can be caused by road salt splashed onto the structure; soils that are high in sulfates are also an issue for these concrete structures. Research has shown that some aerobic slime-forming bacteria may be able to help repair and maintain concrete structures. [ 14 ] These bacteria act as a diffusion barrier between external sulfates and the concrete. Researchers found that the thicker the layer, the more effective it was, seeing an almost linear increase in the number of serviceable years of the concrete structure as the layer thickness increased. For long-term repair of the structure, a 60 mm thickness of the slime layer should be used to ensure the longevity of the concrete structure and the proper diffusion of sulfate ions. [ 14 ] | https://en.wikipedia.org/wiki/Slime_layer |
Slingshot is a water purification device created by inventor Dean Kamen . [ 1 ] Powered by a Stirling engine running on a combustible fuel source, it claims to be able to produce drinking water from almost any source [ 2 ] by means of vapor compression distillation , [ 3 ] requires no filters, and can operate using cow dung as fuel.
The name of the machine is a reference to the slingshot used by David to defeat Goliath . [ 4 ]
In his TEDMED 2010 presentation, Kamen announced several goals for and characteristics of the machine: [ 5 ]
Kamen came to develop the device based on statistics that showed a lack of access to clean water as a public health crisis. Statistics from the World Health Organization show that there are 900 million people worldwide without a readily available supply of drinking water and that some 3.5 million people die annually because of diseases resulting from the consumption of unsanitary water. Even though over two-thirds of the Earth 's surface is covered with water, only 1% of it is potable. [ 6 ]
Kamen sought to develop a technology that would transform the 97% of water that is undrinkable into water that can be used and consumed on the spot, readily and inexpensively. The device takes contaminated water and runs it through a vapor compression distiller that produces clean water, producing 250 gallons daily (~946 litres), enough for 100 people. The test devices have been used with "anything that looks wet", including polluted river water, saline ocean water, and raw sewage. [ 6 ] In a demonstration at a technology conference in October 2004, Kamen ran his own urine through the machine and drank the clean water that came out. [ 7 ]
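As a quick sanity check on the figures above (a rough sketch, not from the source):

```python
GALLON_L = 3.785411784   # litres per US liquid gallon

daily_output_l = 250 * GALLON_L       # ~946 litres per day, as stated
per_person_l = daily_output_l / 100   # litres per person per day

print(f"{daily_output_l:.0f} L/day, {per_person_l:.1f} L/person/day")
```

About 9.5 litres per person per day is consistent with basic drinking and cooking needs, though not with full household water use.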
Kamen built two machines — a power generator that would output one kilowatt from "anything that burns", and the water distiller, which uses the electricity. In 2005, the power generator was tested for six months in a village in Bangladesh and generated enough electricity to light 70 energy-efficient light bulbs. The hand-made prototype cost US$100,000 each. [ 8 ]
By the end of 2005, a team of 200 at DEKA had produced 30 units, each the size of a compact refrigerator. [ 7 ] A pair of Slingshot devices ran successfully for a month in a village in Honduras during the summer of 2006. While the initial devices cost hundreds of thousands of dollars, Kamen hopes that increased economies of scale will allow production machines to be made available for $2,000 each. [ 6 ]
In 2008, Kamen demonstrated the device on The Colbert Report . [ 9 ]
In his TEDMED 2010 presentation, Kamen lamented throughout that when he asked for "a few million dollars" over a few months, no large global health organizations supported the development. Later in the presentation, he announced a partnership with The Coca-Cola Company . [ 5 ]
In 2011, field tests of Slingshot in five towns in Ghana proved their effectiveness and durability. [ 10 ]
In October 2012, Kamen and Coca-Cola CEO Muhtar Kent announced at the Clinton Global Initiative that in collaboration with DEKA Research, Africare and Inter-American Development Bank , they will start bringing the Slingshot to rural parts of Latin America and Africa. The first initiative will be testing the Slingshot technology in health centers and schools in remote communities in Latin America in 2013.
Kamen hopes to send thousands of the units with local village entrepreneurs, in much the same way independent cell phone businesses have thrived and gradually changed the face of many impoverished areas around the globe. Future target price for the device is in the $1,000 to $2,000 range. [ 11 ]
As of 2020, the product does not appear to be in commercial production or wide use. Kamen invented the Coca-Cola Freestyle soda dispenser in exchange for the Coca-Cola corporation's commitment to release the Slingshot worldwide. The systems were instead distributed as a component of EKOCENTER kiosks, of which only 150 have been deployed worldwide. [ 12 ] | https://en.wikipedia.org/wiki/Slingshot_(water_vapor_distillation_system) |
In philosophical logic , a slingshot argument is one of a group of arguments claiming to show that all true sentences stand for the same thing.
This type of argument was dubbed the " slingshot " by philosophers Jon Barwise and John Perry (1981) due to its disarming simplicity. It is usually said that versions of the slingshot argument have been given by Gottlob Frege , Alonzo Church , W. V. Quine , and Donald Davidson . However, it has been disputed by Lorenz Krüger (1995) that there is much unity in this tradition. Moreover, Krüger rejects Davidson's claim that the argument can refute the correspondence theory of truth . Stephen Neale (1995) claims, controversially, that the most compelling version was suggested by Kurt Gödel (1944).
These arguments are sometimes modified to support the alternative, and evidently stronger, conclusion that there is only one fact , or one true proposition , state of affairs , truth condition , truthmaker , and so on.
One version of the argument (Perry 1996) proceeds as follows.
Assumptions :
Let S and T be arbitrary true sentences, designating Des ( S ) and Des ( T ), respectively. (No assumptions are made about what kinds of things Des ( S ) and Des ( T ) are.) It is now shown by a series of designation-preserving transformations that Des ( S ) = Des ( T ). Here, " ι x {\displaystyle \iota x} " can be read as "the x such that".
Note that (1)-(9) is not a derivation of T from S . Rather, it is a series of (allegedly) designation-preserving transformation steps.
As Gödel (1944) observed, the slingshot argument does not go through if Bertrand Russell 's famous account of definite descriptions is assumed. Russell claimed that the proper logical interpretation of a sentence of the form "The F is G " is:
Or, in the language of first-order logic :
When the sentences above containing ι {\displaystyle \iota } -expressions are expanded out to their proper form, the steps involving substitution are seen to be illegitimate. Consider, for example, the move from (3) to (4). On Russell's account, (3) and (4) are shorthand for:
Clearly the substitution principle and assumption 4 do not license the move from (3') to (4'). Thus, one way to look at the slingshot is as simply another argument in favor of Russell's theory of definite descriptions.
If one is not willing to accept Russell's theory, then it seems wise to challenge either substitution or redistribution , which seem to be the other weakest points in the argument. Perry (1996), for example, rejects both of these principles, proposing to replace them with certain weaker, qualified versions that do not allow the slingshot argument to go through. | https://en.wikipedia.org/wiki/Slingshot_argument |
In materials science , slip is the large displacement of one part of a crystal relative to another part along crystallographic planes and directions. [ 1 ] Slip occurs by the passage of dislocations on close-packed planes, which are planes containing the greatest number of atoms per area, and in close-packed directions (most atoms per length). Close-packed planes are known as slip or glide planes . A slip system describes the set of symmetrically identical slip planes and associated family of slip directions for which dislocation motion can easily occur and lead to plastic deformation . The magnitude and direction of slip are represented by the Burgers vector , b .
An external force makes parts of the crystal lattice glide along each other, changing the material's geometry. A critical resolved shear stress is required to initiate a slip. [ 2 ]
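The critical resolved shear stress mentioned above is usually related to the applied stress through Schmid's law, τ = σ cos φ cos λ; this is standard crystal plasticity, not stated explicitly in this excerpt. A minimal sketch:

```python
import math

def resolved_shear_stress(sigma, phi_deg, lambda_deg):
    """Schmid's law: tau = sigma * cos(phi) * cos(lambda), where phi is the
    angle between the load axis and the slip-plane normal, and lambda is the
    angle between the load axis and the slip direction."""
    phi = math.radians(phi_deg)
    lam = math.radians(lambda_deg)
    return sigma * math.cos(phi) * math.cos(lam)

# Slip initiates on a system once tau reaches its critical resolved shear
# stress (CRSS). phi = lambda = 45 degrees gives the maximum Schmid factor:
tau = resolved_shear_stress(sigma=100.0, phi_deg=45.0, lambda_deg=45.0)
# cos(45)*cos(45) = 0.5, so tau = 50 for an applied stress of 100
```

The example stress of 100 (arbitrary units) is illustrative only.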
Slip in face-centered cubic (fcc) crystals occurs along the close-packed plane . Specifically, the slip plane is of type {111} , and the direction is of type <1̄10>. In the diagram on the right, the specific plane and direction are (111) and [1̄10], respectively.
Given the permutations of the slip plane types and direction types, fcc crystals have 12 slip systems. [ 3 ] In the fcc lattice, the norm of the Burgers vector, b, can be calculated using the following equation: [ 4 ]
| b | = (a/2) √(1² + 1² + 0²) = a/√2
where a is the lattice constant of the unit cell.
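For fcc, b = (a/2)⟨110⟩, so the magnitude works out to a/√2. A small Python check (the aluminium lattice constant below is an illustrative value, not from the article):

```python
import math

def burgers_magnitude_fcc(a):
    """|b| for fcc slip with b = (a/2)<110>:
    |b| = (a/2) * sqrt(1^2 + 1^2 + 0^2) = a / sqrt(2)."""
    h, k, l = 1, 1, 0
    return (a / 2.0) * math.sqrt(h * h + k * k + l * l)

# e.g. aluminium, lattice constant roughly 0.405 nm:
b = burgers_magnitude_fcc(0.405)   # about 0.286 nm
```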
Slip in body-centered cubic (bcc) crystals occurs along the plane of shortest Burgers vector as well; however, unlike fcc, there are no truly close-packed planes in the bcc crystal structure.
Thus, slip in bcc generally requires thermal activation.
Some bcc materials (e.g. α-Fe) can contain up to 48 slip systems.
There are six slip planes of type {110}, each with two <111> directions (12 systems). There are 24 {123} and 12 {112} planes each with one <111> direction (36 systems, for a total of 48). Although the number of possible slip systems is much higher in bcc crystals than fcc crystals, the ductility is not necessarily higher due to increased lattice friction stresses . [ 3 ] While the {123} and {112} planes are not exactly identical in activation energy to {110}, they are so close in energy that for all intents and purposes they can be treated as identical.
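The {110}⟨111⟩ part of the tally above can be cross-checked by brute force: a slip direction can operate in a plane only if it lies in that plane, i.e. is perpendicular to the plane normal. An illustrative sketch:

```python
from itertools import permutations, product

def family(triple):
    """All distinct members of a Miller family, keeping one representative
    per +/- pair (v and -v denote the same plane normal or direction)."""
    out = set()
    for perm in set(permutations(triple)):
        for signs in product((1, -1), repeat=3):
            v = tuple(i * s for i, s in zip(perm, signs))
            if tuple(-x for x in v) not in out:
                out.add(v)
    return out

planes = family((1, 1, 0))   # the 6 distinct {110} planes
dirs = family((1, 1, 1))     # the 4 distinct <111> directions
# A direction lies in a plane exactly when n . d == 0:
systems = [(n, d) for n in planes for d in dirs
           if sum(a * b for a, b in zip(n, d)) == 0]
# 6 planes x 2 in-plane directions = 12 {110}<111> systems; the {112} and
# {123} families listed above contribute the remaining 36, totalling 48.
```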
In the diagram on the right the specific slip plane and direction are (110) and [1̄11], respectively. [ 4 ]
Slip in hexagonal close-packed (hcp) metals is much more limited than in bcc and fcc crystal structures. Usually, hcp crystal structures allow slip on the densely packed basal {0001} planes along the <112̄0> directions.
The activation of other slip planes depends on various parameters, e.g. the c/a ratio.
Since there are only 2 independent slip systems on the basal planes, additional slip or twin systems need to be activated for arbitrary plastic deformation. This typically requires a much higher resolved shear stress and can result in the brittle behavior of some hcp polycrystals. However, other hcp materials such as pure titanium show large amounts of ductility. [ 6 ]
Cadmium , zinc , magnesium , titanium , and beryllium have a slip plane at {0001} and a slip direction of <112̄0>. This creates a total of three slip systems, depending on orientation. Other combinations are also possible. [ 7 ]
There are two types of dislocations in crystals that can induce slip - edge dislocations and screw dislocations. Edge dislocations have the direction of the Burgers vector perpendicular to the dislocation line, while screw dislocations have the direction of the Burgers vector parallel to the dislocation line. The type of dislocations generated largely depends on the direction of the applied stress, temperature, and other factors. Screw dislocations can easily cross slip from one plane to another if the other slip plane contains the direction of the Burgers vector. [ 2 ]
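The perpendicular/parallel criterion above amounts to classifying a dislocation by the angle between its Burgers vector and line direction. A small sketch (the angular tolerance is an illustrative choice):

```python
import math

def dislocation_character(b, xi, tol_deg=1e-3):
    """Classify a straight dislocation by the angle between its Burgers
    vector b and its line direction xi: parallel -> screw,
    perpendicular -> edge, anything in between -> mixed."""
    dot = sum(p * q for p, q in zip(b, xi))
    nb = math.sqrt(sum(p * p for p in b))
    nx = math.sqrt(sum(q * q for q in xi))
    cosang = min(1.0, abs(dot) / (nb * nx))  # clamp rounding noise
    ang = math.degrees(math.acos(cosang))
    if ang < tol_deg:
        return "screw"
    if abs(ang - 90.0) < tol_deg:
        return "edge"
    return "mixed"

print(dislocation_character((1, 1, 0), (1, 1, 0)))  # b parallel to line
print(dislocation_character((1, 1, 0), (0, 0, 1)))  # b perpendicular to line
```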
Formation of slip bands indicates a concentrated unidirectional slip on certain planes causing a stress concentration. Typically, slip bands induce surface steps (i.e. roughness due to persistent slip bands during fatigue ) and a stress concentration which can be a crack nucleation site. Slip bands extend until impinged by a boundary, and the stress generated by the dislocation pile-up against that boundary will either stop or transmit the operating slip. [ 9 ] [ 10 ]
Slip bands formed under cyclic loading are termed persistent slip bands (PSBs), while those formed under monotonic loading are termed dislocation planar arrays (or simply slip bands). [ 11 ] Slip bands can be viewed as boundary sliding due to dislocation glide that lacks the highly localised plastic deformation of PSBs, which manifests as tongue- and ribbon-like extrusions. PSBs are normally studied with the (effective) Burgers vector aligned with the extrusion plane, since a PSB extends across the grain and intensifies during fatigue; [ 12 ] a monotonic slip band has one Burgers vector for propagation and another for plane extrusions, both controlled by the conditions at the tip.
The main methods to identify the active slip system involve either slip trace analysis of single crystals [ 13 ] [ 14 ] or polycrystals , [ 15 ] [ 8 ] using diffraction techniques such as neutron diffraction [ 16 ] and high angular resolution electron backscatter diffraction elastic strain analysis, [ 17 ] or Transmission electron microscopy diffraction imaging of dislocations . [ 18 ]
In slip trace analysis, only the slip plane is measured, and the slip direction is inferred. In zirconium, for example, this enables the identification of slip activity on a basal, prism, or 1st/2nd order pyramidal plane. In the case of a 1st-order pyramidal plane trace, the slip could be in either ⟨𝑎⟩ or ⟨𝑐 + 𝑎⟩ directions; slip trace analysis cannot discriminate between these. [ 5 ]
Diffraction -based studies measure the residual dislocation content instead of the slipped dislocations, which is only a good approximation for systems that accumulate networks of geometrically necessary dislocations , such as face-centred cubic polycrystals. [ 19 ] In low-symmetry crystals such as hexagonal zirconium , there could be regions of predominantly single slip where geometrically necessary dislocations may not necessarily accumulate. [ 20 ] Residual dislocation content does not distinguish between glissile and sessile dislocations. Glissile dislocations contribute to slip and hardening , but sessile dislocations contribute only to latent hardening. [ 5 ]
Diffraction methods cannot generally resolve the slip plane of a residual dislocation. For example, in Zr, the screw components of ⟨𝑎⟩ dislocations could slip on prismatic, basal, or 1st-order pyramidal planes. Similarly, ⟨𝑐 + 𝑎⟩ screw dislocations could slip on either 1st or 2nd order pyramidal planes. [ 5 ] | https://en.wikipedia.org/wiki/Slip_(materials_science) |
Slip bands or stretcher-strain marks are localized bands of plastic deformation in metals experiencing stresses. Formation of slip bands indicates a concentrated unidirectional slip on certain planes causing a stress concentration. Typically, slip bands induce surface steps (e.g., roughness due to persistent slip bands during fatigue ) and a stress concentration which can be a crack nucleation site. Slip bands extend until impinged by a boundary , and the stress generated by the dislocation pile-up against that boundary will either stop or transmit the operating slip depending on its (mis)orientation. [ 2 ] [ 3 ]
Slip bands formed under cyclic loading are termed persistent slip bands (PSBs), while those formed under monotonic loading are termed dislocation planar arrays (or simply slip bands; see the Slip bands in the absence of cyclic loading section). [ 4 ] Slip bands can be viewed as boundary sliding due to dislocation glide that lacks the highly localised plastic deformation of PSBs, which manifests as tongue- and ribbon-like extrusions. PSBs are normally studied with the (effective) Burgers vector aligned with the extrusion plane, since a PSB extends across the grain and intensifies during fatigue; [ 5 ] a monotonic slip band has one Burgers vector for propagation and another for plane extrusions, both controlled by the conditions at the tip.
Persistent slip bands (PSBs) are associated with strain localisation due to fatigue in metals and with cracking on the same plane. Transmission electron microscopy (TEM) and three-dimensional discrete dislocation dynamics (DDD [ 7 ] ) simulations have been used to reveal dislocation types and arrangements/patterns and relate them to the sub-surface structure. The PSB ladder structure is formed mainly from low-density channels of mobile gliding screw dislocation segments and high-density walls of dipolar edge dislocation segments, piled up with tangled bowing-out edge segments and dipolar loops of various sizes scattered between the walls and channels. [ 8 ] [ 9 ]
One type of dislocation loop forms the boundary of a completely enclosed patch of slipped material on the slip plane that terminates at the free surface. The slip band widens as follows: a screw dislocation may experience a resolved shear stress high enough to glide on more than one slip plane, so cross-slip can occur, leaving some dislocation segments on the original slip plane. The dislocation can then cross-slip back onto a parallel primary slip plane, where it forms a new dislocation source, and the process can repeat. The walls in PSBs are a 'dipole dispersion' arrangement of edge dislocations, a stable configuration with a minimal long-range stress field. This differs from slip bands, which are planar stacks of stable arrays with a strong long-range stress field. At the free surface, the cutting open (elimination) of dislocation loops causes the irreversible, persistent surface steps associated with slip bands. [ 9 ] [ 10 ] [ 11 ]
Surface relief through extrusion occurs along the Burgers vector direction, and both extrusion height and PSB depth increase with PSB thickness. [ 12 ] PSBs and planar walls are aligned parallel and perpendicular, respectively, to the normal direction of the critical resolved shear stress . [ 13 ] Once dislocations saturate and reach a sessile configuration, cracks have been observed to nucleate and propagate along PSB extrusions. [ 14 ] [ 15 ] [ 16 ] To summarise: in contrast to 2D line defects, the field at the slip-band tip arises from three-dimensional interactions, in which the slip band extrusion acts as a sink-like dislocation blooming along the slip band axis. The magnitude of the deformation gradient field ahead of the slip band depends on the slip height, and the mechanical conditions for propagation are influenced by the long-range field of the emitted dislocations. A surface marking, or slip band, appears at the intersection of an active slip plane and the free surface of a crystal. Slip occurs in avalanches separated in time. Avalanches from other slip systems crossing a slip plane containing an active source lead to the observed stepped surface markings, with successive avalanches from a given source displaced relative to each other. [ 17 ]
Dislocations are generated on a single slip plane. A dislocation segment ( Frank–Read source ), lying in a slip plane and pinned at both ends, is a source of an unlimited number of dislocation loops. In this way the grouping of dislocations into an avalanche of a thousand or so loops on a single slip plane can be understood. [ 18 ] Each dislocation loop has a stress field that opposes the applied stress in the neighbourhood of the source. When enough loops have been generated, the stress at the source will fall to a value so low that additional loops cannot form. Only after the original avalanche of loops has moved some distance away can another avalanche occur.
Generation of the first avalanche at a source is easily understood. When the stress at the source reaches the critical value τ*, loops are generated, and continue to be generated until the back-stress stops the avalanche. A second avalanche will not occur immediately in polycrystals, for the loops in the first avalanche are stopped or partially stopped at grain boundaries. Only if the external stress is increased substantially will a second avalanche be formed. In this way the formation of additional avalanches with rising stress can be understood.
It remains to explain the displacement of successive avalanches by a small amount normal to the slip plane, thereby accounting for the observed fine structure of slip bands . A displacement of this type requires that a Frank–Read source move relative to the surface where slip bands are observed.
In situ nano-compression work [ 19 ] in Transmission electron microscopy (TEM) reveals that the deformation of α-Fe at the nanoscale is an inhomogeneous process characterized by a series of short displacement bursts and intermittent large displacement bursts. The series of short bursts corresponds to the collective movement of dislocations within the crystal. The large single bursts come from SBs nucleated at the specimen surface. These results suggest that the formation of SBs can be considered a source-limited plasticity process. The initial plastic deformation is characterized by the multiplication/movement of a few dislocations over short distances, due to the availability of dislocation sources within the nano-blade. Once the mobile dislocations along preferred slip planes have moved through the nano-blade or become entangled in sessile configurations, so that further dislocation movement within the crystal is difficult, plasticity is carried by the formation of SBs, which nucleate from the surface [ 20 ] and then propagate through the nano-blade.
Fisher et al. [ 17 ] proposed that SBs are dynamically generated from a Frank–Read source at the specimen surface and are terminated by their own stress field in single crystals. Similar displacement burst behaviour was reported by Kiener and Minor [ 21 ] on compressing Cu single-crystal nanopillars. A spinodal nanostructure markedly suppressed the progress of serrated yielding (a series of short strain bursts ) relative to that without it: during compression deformation, the spinodal nanostructure confined the movement of dislocations (leading to a significant increase in dislocation density), causing a notable strengthening effect, and also kept the slip band morphology planar. [ 22 ]
Dislocation activity assists the growth of austenite precipitates and provides quantitative data for revealing the stress field generated by interface migration. [ 23 ] The jerky nature of the tip moving rate is probably due to the accumulation and relaxation of the stress field near the tip. After leaving the tip, the dislocation loop expands rapidly ahead of it, so the change in tip velocity is concomitant with dislocation emission. This indicates that the emitted dislocation is strongly repelled by the stress field present at the lath tip. When the loop meets the foil surface, it breaks into two dislocation segments that leave a visible trace, due to the presence of a thin oxide layer on the surface. The emission of a dislocation loop from the tip may also affect the tip moving rate via interaction between the local dislocation loop and possible interfacial dislocations in the semi-coherent interface surrounding the tip; consequently, the tip halts temporarily. The net shear stress acting on each dislocation results from a combination of the stress field at the lath tip (τ tip ), the image stress tending to attract the dislocation loop to the surface (τ image ), the line tension (τ l ), and the interaction stress between dislocations (τ inter ). This implies that the strain field due to the transformation of austenite is large enough to cause the nucleation and emission of dislocations from an austenite lath tip. [ 24 ]
While repeatedly reversed loading commonly leads to localisation of dislocation glide, creating linear extrusions and intrusions on a free surface, similar features can arise even if there is no load reversal. These arise from dislocations gliding on a particular slip plane, in a particular slip direction (within a single grain), under an external load. Steps can be created on the free surface as a consequence of the tendency for dislocations to follow one another along a glide path, of which there may be several in parallel with each other in the grain concerned. Prior passage of dislocations apparently makes glide easier for subsequent ones, and the effect may also be associated with dislocation sources, such as a Frank-Read source , acting in particular planes.
The appearance of such bands, which are sometimes termed “persistent slip lines”, is similar to that of those arising from cyclic loading, but the resultant steps are usually more localised and have lower heights. They also reveal the grain structure . They can often be seen on free surfaces that were polished before the deformation took place. For example, the figure shows micrographs [ 25 ] (taken with different magnifications) of the region around an indent created in a copper sample with a spherical indenter. The parallel lines within individual grains are each the result of several hundred dislocations of the same type reaching the free surface, creating steps with a height of the order of a few microns. If a single slip system was operational within a grain, then there is just one set of lines, but it is common for more than one system to be activated within a grain (particularly when the strain is relatively high), leading to two or more sets of parallel lines. Other features indicative of the details of how the plastic deformation took place, such as a region of cooperative shear caused by deformation twinning , can also sometimes be seen on such surfaces. In the optical micrograph shown, there is also evidence of grain rotations – for example, at the “rim” of the indent and in the form of depressions at grain boundaries . Such images can thus be very informative.
The deformation field at the slip-band is due to three-dimensional elastic and plastic strains where the concentrated shear of the slip band tip deforms the grain in its vicinity. The elastic strains describe the stress concentration ahead of the slip band , which is important as it can affect the transfer of plastic deformation across grain boundaries. [ 27 ] [ 28 ] [ 29 ] An understanding of this is needed to support the study of yield and inter/intra-granular fracture. [ 30 ] [ 31 ] [ 32 ] The concentrated shear of slip bands can also nucleate cracks in the plane of the slip band , [ 15 ] [ 16 ] and persistent slip bands that lead to intragranular fatigue crack initiation and growth may also form under cyclic loading conditions. [ 4 ] [ 33 ] To properly characterise slip bands and validate mechanistic models for their interactions with microstructure, it is crucial to quantify the local deformation fields associated with their propagation. However, little attention has been given to slip bands within grains (i.e., in the absence of grain boundary interaction).
The long-range stress field (i.e., the elastic strain field) around the tip of a stress concentrator, such as a slip band , can be considered a singularity equivalent to that of a crack. [ 34 ] [ 35 ] This singularity can be quantified using a path independent integral since it satisfies the conservation laws of elasticity. The conservation laws of elasticity related to translational, rotational, and scaling symmetries were derived initially by Knowles and Sternberg [ 36 ] from the Noether's theorem . [ 37 ] Budiansky and Rice [ 38 ] introduced the J-, M-, L- integral and were the first to give them a physical interpretation as the strain energy-release rates for mechanisms such as cavity propagation, simultaneous uniform expansion, and defect rotation, respectively. When evaluated over a surface that encloses a defect, these conservation integrals represent a configurational force on the defect. [ 39 ] That work paved the way for the field of Configurational mechanics of materials, with the path-independent J-integral now widely used to analyse the configurational forces in problems as diverse as dislocation dynamics, [ 40 ] [ 41 ] misfitting inclusions , [ 42 ] propagation of cracks , [ 43 ] shear deformation of clays, [ 44 ] and co-planar dislocation nucleation from shear loaded cracks. [ 45 ] The integrals have been applied to linear elastic and elastic-plastic materials and have been coupled with processes such as thermal [ 46 ] and electrochemical [ 47 ] loading, and internal tractions. [ 48 ] Recently, experimental fracture mechanics studies have used full-field in situ measurements of displacements [ 49 ] [ 50 ] and elastic strains [ 51 ] [ 50 ] to evaluate the local deformation field surrounding the crack tip as a J-integral .
Slip bands form due to plastic deformation, and the analysis of the force on a dislocation considers the two-dimensional nature of the dislocation line defect. General definitions of the Peach–Koehler configurational force (P_kj) [ 52 ] (or the elastic energy-momentum tensor [ 53 ] ) on a dislocation in an arbitrary x_1, x_2, x_3 coordinate system decompose the Burgers vector (b) into orthogonal components. This leads to the generalised definition of the J-integral in the equations below. For a dislocation pile-up, the J-integral is the summation of the Peach–Koehler configurational forces of the dislocations in the pile-up (including the out-of-plane component, b_3 [ 54 ]).
J_k = ∫_S P_{kj} n_j dS = ∫_S ( W_s n_k − T_i u_{i,k} ) dS
J_k^x = R_{kj} J_j ,   i, j, k = 1, 2, 3
where S is an arbitrary contour around the dislocation pile-up with unit outward normal n, W_s is the strain energy density, T_i = σ_{ij} n_j is the traction on dS, u_i are the displacement vector components, J_k^x is the J-integral evaluated along the x_k direction, and R_{kj} is a second-order mapping tensor that maps J_k into the x_k direction. This vectorial J_k-integral leads to numerical difficulties in the analysis, since J_2 and, for a three-dimensional slip band or inclined crack, J_3 cannot be neglected. [ 1 ] | https://en.wikipedia.org/wiki/Slip_bands_in_metals
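The per-dislocation Peach–Koehler force that is summed for a pile-up has the standard form F = (σ·b) × ξ, where σ is the stress tensor, b the Burgers vector and ξ the unit line direction. A minimal numeric sketch for a single dislocation (the shear-stress value below is a hypothetical example):

```python
def peach_koehler(sigma, b, xi):
    """Peach-Koehler force per unit length on a dislocation:
    F = (sigma . b) x xi, with sigma a 3x3 stress tensor, b the Burgers
    vector, and xi the unit line direction."""
    sb = [sum(sigma[i][j] * b[j] for j in range(3)) for i in range(3)]
    return (sb[1] * xi[2] - sb[2] * xi[1],
            sb[2] * xi[0] - sb[0] * xi[2],
            sb[0] * xi[1] - sb[1] * xi[0])

# Pure shear sigma_xy = tau acting on an edge dislocation with b along x
# and line direction along z: the glide force is (tau * b, 0, 0).
sigma = [[0.0, 1.0, 0.0],
         [1.0, 0.0, 0.0],
         [0.0, 0.0, 0.0]]
F = peach_koehler(sigma, b=(1.0, 0.0, 0.0), xi=(0.0, 0.0, 1.0))
```

For a pile-up, the J-integral above is simply the sum of such forces over the dislocations enclosed by the contour.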
A slip bond is a type of chemical noncovalent bond whose dissociation lifetime decreases with tensile force applied to the bond. This is the expected behaviour for chemical bonds, [ 1 ] but exceptions, such as catch bonds , exist.
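The force-dependent lifetime of a slip bond is commonly described by the Bell model, τ(F) = τ₀ exp(−F·x_β / k_B T); this model is standard in the single-molecule literature but is not named in this stub, so treat it as an assumed illustration:

```python
import math

def bell_lifetime(tau0, force, x_beta, kT=4.114):
    """Bell model for a slip bond: lifetime tau(F) = tau0 * exp(-F*x/kT).
    Units here: force in pN, x_beta (distance to transition state) in nm,
    kT in pN*nm (about 4.114 at 300 K)."""
    return tau0 * math.exp(-force * x_beta / kT)

# Lifetime falls monotonically with force -- the defining slip-bond trait:
t_unloaded = bell_lifetime(tau0=1.0, force=0.0, x_beta=0.5)
t_loaded = bell_lifetime(tau0=1.0, force=10.0, x_beta=0.5)  # shorter
```

A catch bond, by contrast, would show lifetime increasing with force over some range, which this monotonic expression cannot capture.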
| https://en.wikipedia.org/wiki/Slip_bond
Slip forming , continuous poured , continuously formed , or slipform construction is a construction method in which concrete is placed into a form that may be in continuous motion horizontally, or incrementally raised vertically.
In horizontal construction, such as roadways and curbs, the weight of the concrete, forms, and any associated machinery is borne by the ground. In vertical construction, such as bridges, towers, buildings, and dams, forms are raised hydraulically in increments, no faster than the most recently poured concrete can set and support the combined weight of the concrete, forms, and machinery, and the pressure of concrete consolidation. [ 1 ]
Slipforming enables continuous, non-interrupted, cast-in-place, cold joint- and seam-free concrete structures that have performance characteristics superior to those of piecewise construction using discrete form elements. [ citation needed ]
Slip forming relies on the quick-setting properties of concrete, and requires a balance between workability and quick-setting capacity. Concrete needs to be workable enough to be placed into the form and consolidated (via vibration), yet quick-setting enough to emerge from the form with strength. This strength is needed because the freshly set concrete must not only permit the form to "slip" by the concrete without disturbing it, but also support the pressure of the new concrete and resist collapse caused by the vibration of the compaction machinery.
In horizontal slip forming for pavement, curbs, and traffic separation walls, concrete is laid down, vibrated, worked, and settled in place while the form itself slowly moves ahead. This method was initially devised and utilized in Interstate Highway construction initiated by the Eisenhower administration during the 1950s.
In vertical slip forming the concrete form may be surrounded by a platform on which workers stand, placing steel reinforcing rods ahead of the concrete and ensuring a smooth pour. [ 2 ] Together, the concrete form and working platform are raised by means of hydraulic jacks . [ 3 ] The slipform can only rise at a rate which permits the concrete to harden by the time it emerges from the bottom of the form. [ 1 ]
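The rise-rate constraint described above can be sketched as a simple bound: concrete poured at the top of the form must have set by the time it emerges from the bottom. The numbers below are illustrative assumptions, not values from the article:

```python
def max_rise_rate(form_height_m, set_time_h):
    """Upper bound on slipform climb rate: the form must not rise by more
    than its own height within the time the concrete needs to set."""
    return form_height_m / set_time_h

# Hypothetical example: a 1.2 m tall form over concrete that sets in 4 h
# limits the climb rate to 0.3 m/h.
rate = max_rise_rate(1.2, 4.0)
```

In practice the achievable rate also depends on temperature, mix design, and jacking capacity, none of which this simple bound captures.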
The slip forming technique was in use by the early 20th century for building silos and grain elevators . James MacDonald, of MacDonald Engineering of Chicago was the pioneer in utilizing slip form concrete for construction. His concept of placing circular bins in clusters was patented, with photographs and illustrations, contained in a 1907 book, "The Design Of Walls, Bins, And Grain Elevators". [ 4 ]
In 1910, MacDonald published a paper "Moving Forms for Reinforced Concrete Storage Bins," [ 5 ] describing the use of molds for moving forms, using jacks and concrete to form a continuous structure without joints or seams. This paper details the concept and procedure for creating slip form concrete structures. On May 24, 1917, a patent was issued to James MacDonald of Chicago, "for a device to move and elevate a concrete form in a vertical plane". [ 6 ]
James MacDonald’s bin and silo design was utilized around the world into the late 1970s by MacDonald Engineering. In the 1947-1950 period, MacDonald Engineering constructed over 40 concrete towers using the slip-form method for AT&T Long Lines [ 7 ] up to 58 m (190 ft) tall for microwave relay stations across the United States.
The former Landmark Hotel & Casino in Las Vegas was constructed in 1961 by MacDonald Engineering as a subcontractor, utilizing Macdonald’s concept of slip form concrete construction to build the 31 story reinforced steel tower. [ 8 ]
The technique was introduced to residential and commercial buildings in Sweden as early as the 1950s. In 1944 the Swedish company Bygging developed the first hydraulic jacks to lift the forms, which were patented. The first houses were built in Västertorp , Sweden, and Bygging became a pioneer of the slip forming technique around the world, operating from 1980 under the name Bygging-Uddemann . [ 9 ]
The technique was also introduced to residential and commercial building in the United States in the late 1960s. [ 2 ] One of its first uses in high-rise buildings in the United States was the shear-wall-supported apartment building at Turk & Eddy Streets in San Francisco, CA, in 1962, built by the San Francisco office of MacDonald Engineering. [ citation needed ] The first notable use of the method in a residential/retail business was the Skylon Tower in Niagara Falls, Ontario , which was completed in 1965. [ citation needed ] Other unusual structures were the tapered buttresses for the Sheraton Waikiki Hotel in Honolulu, Hawaii, in 1969. Another shear-wall-supported structure was the Casa Del Mar Condominium on Key Biscayne, Miami, FL, in 1970.
From the 1950s, the vertical technique was adapted to mining head frames, ventilation structures, below grade shaft lining, and coal train loading silos; theme and communication tower construction; high rise office building cores; shear wall supported apartment buildings; tapered stacks and hydro intake structures , etc. It is used for structures which would otherwise not be possible, such as the separate legs of the Troll A deep sea oil drilling platform , which stands on the sea floor in water about 300 m (980 ft) deep, has an overall height of 472 m (1,549 ft), weighs 595,000 t (656,000 short tons), and has the distinction of being the tallest structure ever moved ( towed ) by mankind.
In addition to the typical silos and shear walls and cores in buildings, the system is used for lining underground shafts and surge tanks in hydroelectric generating facilities. The technique was utilized to build the Inco Superstack in Sudbury , Ontario , and the CN Tower in Toronto . In 2010, the technique was used to build the core of the supertall Shard London Bridge tower in London, England. | https://en.wikipedia.org/wiki/Slip_forming |
A slip joint is a mechanical construction allowing extension and compression in a linear structure.
Slip joints can be designed to allow continuous relative motion of two components, or they can allow an adjustment, by unclamping from one fixed position and re-clamping to another. Examples of the latter are tripods , hiking poles , or similar telescoping devices. The clamping mechanism is based on a cam , a set screw , or a similar locking mechanism. Slip joints can also be non-telescoping, such as the joints on some older wooden surveyor's levelling rods . These use a joint that keeps the sections offset from each other but able to be slid together for transport.
Examples of continuous slip joints are given below.
Slip joints in large structures are used to allow the independent motion of large components while enabling them to be joined in some way. For example, if two tall buildings are to be joined with a pedestrian skyway at some high level, there are two options in structural engineering. If the buildings are identical in mass and elasticity they will tend to respond similarly to ground motion induced by earthquakes . In this case, it may be appropriate to construct a rigid connection between the buildings, although this may require additional supporting members within the structures. On the other hand, a lower cost connection may be made by using a lightweight structure that is not coupled rigidly but instead is allowed to slide or "float" relative to one or both structures. This is especially suitable where the two structures may respond differently to ground motion. The structure will not be completely free to move but rather may use elastic materials to locate it near the center of its range of motion and viscous shock absorbers to absorb energy and to restrict the speed of relative motion. When a sliding connection is used it is extremely important that there be sufficient range of motion without failure to accommodate the maximum credible relative motion of the structures. Additional "fail-safe" flexible connections may be added to ensure that the structure does not fall, although it may be damaged to a point of being unserviceable or unrepairable.
Slip joints are common under conditions where temperature changes can cause expansion and contraction that may overstress a structure. These are generally referred to as expansion joints . Bridges and overpasses frequently have sliding joints that allow a deck to move relative to piers or abutments. The joints can be constructed with elastomeric pads that permit motion or can use rollers on flat surfaces to allow the ends to move smoothly. The exact details are limited by the imagination of the designer.
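The thermal movement such a joint must absorb can be estimated from the linear expansion relation ΔL = αLΔT. A minimal sketch follows; the expansion coefficient and the bridge dimensions are assumed example values, not taken from the text:

```python
# Illustrative sketch: how much movement an expansion joint must absorb.
ALPHA_STEEL = 12e-6  # assumed linear expansion coefficient of steel, 1/K

def thermal_expansion(length_m: float, delta_t_k: float, alpha: float = ALPHA_STEEL) -> float:
    """Return the change in length dL = alpha * L * dT, in metres."""
    return alpha * length_m * delta_t_k

# An assumed 100 m steel bridge deck over a 50 K seasonal temperature swing:
movement = thermal_expansion(100.0, 50.0)
print(f"{movement * 1000:.0f} mm")  # 60 mm of travel the joint must accommodate
```
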
Slip joints are sometimes found in tubular structures such as piping , but are generally avoided for this application because of the requirements for sealing against leakage; instead, either a large loop that is allowed to flex or a semi-rigid bellows is used. Slip joints are used when the main problem is a large axial movement. [ 1 ] Pipe supports often are slip joints to allow for the thermal expansion or contraction of the pipe relative to the support.
Slip joint connections are also commonly used in wastewater plumbing , most commonly under kitchen sinks. Here, the slip joint provides a water-tight seal for non-pressurized drainage, with adjustability to aid installation. The slip joint includes a gasket that fits snugly on a pipe end, with a threaded nut behind the gasket, but with gasket position adjustable as needed. This pipe end fits loosely into another with a flange for the gasket to seal against, and threads for the nut to clamp the gasket to the flange. | https://en.wikipedia.org/wiki/Slip_joint |
In materials science and soil mechanics , a slip line field or slip line field theory is a technique often used to analyze the stresses and forces involved in the major deformation of metals or soils . In some problems, including plane strain and plane stress elastic-plastic problems, the elastic part of the material prevents unrestrained plastic flow; but in many metal-forming processes, such as rolling , drawing , forging , etc., large unrestricted plastic flows occur except in many small elastic zones. In effect, we are concerned with a rigid-plastic material under conditions of plane strain. [ 1 ] It turns out that the simplest way of solving the stress equations is to express them in terms of a coordinate system aligned with potential slip (or failure) surfaces. It is for this reason that this type of analysis is termed slip line analysis, or the theory of slip line fields, in the literature. [ 2 ] [ 3 ]
The slip-line theory was co-developed by Hilda Geiringer in the early 1930s. [ 4 ] She developed the Geiringer equations , which simplify the process of calculating the deformation. [ 4 ]
| https://en.wikipedia.org/wiki/Slip_line_field |
The slip melting point (SMP) or "slip point" is one conventional definition of the melting point of a waxy solid . It is determined by casting a 10 mm column of the solid in a glass tube with an internal diameter of about 1 mm and a length of about 80 mm, and then immersing it in a temperature-controlled water bath . The slip point is the temperature at which the column of the solid begins to rise in the tube, due to buoyancy and because the outside surface of the solid is molten. [ 1 ]
This is a popular method for fats and waxes, because they tend to be mixtures of compounds with a range of molecular masses , without well-defined melting points. [ 2 ]
| https://en.wikipedia.org/wiki/Slip_melting_point |
Slip ratio is a means of calculating and expressing the slipping behavior of the wheel of an automobile . It is of fundamental importance in the field of vehicle dynamics, as it allows one to understand the relationship between the deformation of the tire and the longitudinal forces (i.e. the forces responsible for forward acceleration and braking) acting upon it. Furthermore, it is essential to the effectiveness of any anti-lock braking system .
When accelerating or braking a vehicle equipped with tires, the observed angular velocity of the tire does not match the velocity expected for pure rolling motion: there appears to be sliding between the tire and the road in addition to rolling, due to deformation of the part of the tire above the area in contact with the road. On dry pavement, the fraction of slip caused by actual sliding between the road and the tire contact patch is negligible in magnitude, so in practice it does not make the slip ratio dependent on speed. Actual sliding is only relevant on soft or slippery surfaces, like snow, mud, ice, etc.; there it produces a roughly constant speed difference for the same road and load conditions regardless of speed, so the fraction of the slip ratio due to this cause is inversely related to the speed of the vehicle.
The difference between the theoretical forward speed, calculated from the angular speed of the rim and the rolling radius, and the actual speed of the vehicle, expressed as a percentage of the latter, is called the ‘slip ratio’. This slippage is caused by the forces at the contact patch of the tire, not the other way around, and is thus of fundamental importance in determining the accelerations a vehicle can produce.
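This verbal definition — theoretical speed minus actual speed, as a fraction of the latter — can be sketched as a small function; the sign convention and the example numbers are assumptions, since, as the article notes, there is no single agreed definition:

```python
def slip_ratio(theoretical_speed: float, vehicle_speed: float) -> float:
    """Slip ratio as the relative difference between the theoretical forward
    speed (angular speed * rolling radius) and the actual vehicle speed,
    expressed as a fraction of the latter. One of several conventions:
    0 means free rolling, > 0 wheel spin, < 0 braking slip."""
    if vehicle_speed == 0:
        raise ValueError("slip ratio is undefined at standstill")
    return (theoretical_speed - vehicle_speed) / vehicle_speed

# Driven wheel surface moving at 22 m/s while the car moves at 20 m/s:
print(slip_ratio(22.0, 20.0))  # 0.1, i.e. 10% slip
```
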
There is no universally agreed upon definition of slip ratio. [ 1 ] The SAE J670 definition, for tires pointing straight ahead, is: [ 2 ]

S R = Ω R C V − 1 {\displaystyle SR={\frac {\Omega R_{C}}{V}}-1}

Where Ω {\displaystyle \Omega } is the angular velocity of the wheel, R C {\displaystyle R_{C}} is the effective radius of the corresponding free-rolling tire, which can be calculated from the revolutions per kilometer, and V {\displaystyle V} is the forward velocity of the vehicle. | https://en.wikipedia.org/wiki/Slip_ratio |
Slip ratio (or velocity ratio ) in gas–liquid (two-phase) flow, is defined as the ratio of the velocity of the gas phase to the velocity of the liquid phase. [ 1 ]
In the homogeneous model of two-phase flow , the slip ratio is by definition assumed to be unity (no slip). It is however experimentally observed that the velocity of the gas and liquid phases can be significantly different, depending on the flow pattern (e.g. plug flow , annular flow, bubble flow, stratified flow, slug flow, churn flow). The models that account for the existence of the slip are called "separated flow models".
The following identities can be written using the interrelated definitions:
where:
There are a number of correlations for slip ratio.
For homogeneous flow, S = 1 (i.e. there is no slip).
The Chisholm correlation [ 2 ] [ 3 ] is:
S = 1 − x ( 1 − ρ L ρ G ) {\displaystyle S={\sqrt {1-x\left(1-{\frac {\rho _{L}}{\rho _{G}}}\right)}}}
The Chisholm correlation is based on application of the simple annular flow model and equates the frictional pressure drops in the liquid and the gas phase.
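The Chisholm correlation can be sketched as a small function. The steam–water property values in the example are assumed round-number figures for roughly atmospheric pressure, not taken from the text:

```python
import math

def chisholm_slip_ratio(x: float, rho_l: float, rho_g: float) -> float:
    """Chisholm correlation: S = sqrt(1 - x * (1 - rho_L / rho_G)),
    where x is the vapour quality (gas mass fraction) and rho_L, rho_G
    are the liquid and gas densities."""
    return math.sqrt(1.0 - x * (1.0 - rho_l / rho_g))

# Assumed steam-water properties near 1 bar (rho_L ~ 958, rho_G ~ 0.59 kg/m3),
# at 10% quality:
print(chisholm_slip_ratio(0.10, 958.0, 0.59))  # ~ 12.8: the gas phase moves much faster
```

Note that as x → 0 the correlation recovers S = 1, consistent with the homogeneous (no-slip) limit.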
The slip ratio for two-phase cross-flow horizontal tube bundles may be determined using the following correlation:
S = 1 + 25.7 R i ⋅ C a ⋅ ( P / D ) − 1 {\displaystyle S=1+25.7{\sqrt {Ri\cdot Ca}}\cdot (P/D)^{-1}} where the Richardson and capillary numbers are defined as R i = ( ρ l − ρ g ) 2 ⋅ g ⋅ y m i n G 2 {\displaystyle Ri={\frac {(\rho _{l}-\rho _{g})^{2}\cdot g\cdot y_{min}}{G^{2}}}} and C a = μ l σ ( x ⋅ G ϵ ⋅ ρ g ) {\displaystyle Ca={\frac {\mu _{l}}{\sigma }}\left({\frac {x\cdot G}{\epsilon \cdot \rho _{g}}}\right)} . [ 4 ]
For enhanced surfaces bundles the slip ratio can be defined as:
S = 6.71 ( R i ⋅ C a ) {\displaystyle S=6.71{\sqrt {(Ri\cdot Ca)}}} [ 5 ]
Where: | https://en.wikipedia.org/wiki/Slip_ratio_(gas–liquid_flow) |
In architecture , a slipcover is a modification of an older building facing by adding a new ornamental layer.
The slipcover was a popular treatment in the United States after World War II , as early twentieth-century building styles had fallen out of fashion. Constructing a slipcover with a contemporary design over an existing building was a less expensive alternative to tearing down and building anew. [ 1 ] Sometimes attachments of the slipcover caused damage to the original facings. At other times, slipcovers have protected the original facings from deterioration. [ 2 ]
Slipcovers are used on structures. "Slipcovered buildings are those structures whose facade have been sheathed in a newer material which partially or completely masks the original". [ 3 ]
In the US, the slipcovering of buildings rose to prominence from the mid-1940s to the 1960s. [ 3 ] Building owners applied these slipcovers to their old historic buildings in an effort to refresh their businesses and create a more modern appearance. [ 4 ]
Following the Second World War, the architectural styles that were popular prior to the war were considered “passé” and were thought to not truly represent the ambitions of the “forward-thinking generation”. [ 3 ] Hence, new, modern styles were indulged in at the expense of Victorian, Classical Revival, Art Deco , and other early twentieth-century American commercial styles. [ 3 ]
The Modern movement in architecture ignited the architectural revolution, pioneered by architects and courageous clients who sought “innovative residential and institutional buildings”. [ 5 ] Shopkeepers across the US were ready to join the movement by the middle of the 1930s, thanks to the combination of architectural-design competitions and clever architectural promotions. [ 5 ] As modernist buildings become historic, slipcovered structures create challenges for the preservation community in terms of evaluating and treating them. A majority of these slipcovered buildings have been standing for more than 50 years, the usual interval after which a building can be designated historic. [ 5 ] It is critical to understand the evolution of the design, the options for designation, and the treatment protocols.
Slipcovers are most prevalent in the commercial sector, which rapidly accepted modern architecture: the storefronts and movie theatres on many streets around the world utilise slipcovers. [ 5 ]
Storefronts are renovated frequently, to adapt to new businesses or to renew their style. [ 5 ] The use of slipcovers became more efficient and more widely recognised when the architectural media, the sign industry, and the marketing departments of building-product industries became involved and promoted their products for slipcovering. [ 5 ] Additionally, trade publications from retailers and sign companies with a modernisation message were widely distributed.
Shopping malls are competition to store owners, so in the past 60 years, merchants and property owners have tried to 'imitate their competition' (the shopping mall). [ 6 ] These 'attempts to modernise' were seen as 'pedestrian malls, covering traditional building fronts with aluminium slipcovers, and attaching huge, oversized signs on their buildings to attract attention'. [ 6 ]
Mid 1940s: Introduced at this time in the mid-20th century, prominently in the US. [ 3 ]
Mid 1940s-1950s: 'Continuation of competitive modernism'. Facades on Main Streets becoming 'angular three-dimensional sculptures and businesses erecting stand-alone pylon signs'. There was an increase in 'overarching visual control through aluminium slipcovering of older Main Street facades'. [ 7 ]
1950s-1970s: The use of slipcovers was applied to many downtown Wisconsin buildings in these decades. Aluminium companies such as the Aluminium Company of America (Alcoa) started to manufacture and sell the large panels used for slipcovers.
Beneath the prefabricated metal panels or other materials that may have been used to construct a slipcover for a building, there is a concealed façade, which may be one of cultural or historical significance. One of the reasons for the use of slipcovers on buildings is that the historic appearance of the building can be restored at any time, by removing the slipcover. [ 4 ]
The use of slipcovers has also been applied for business owners in the same area, who have covered their historic building fronts to obtain a visible “modernisation” of the area. Hence, slipcovers have the ability to change the overall expression of an area. [ 4 ]
With slipcovers, the outside may or may not relate to the inside of the building. Nowadays, what is desired from a slipcover is not usually a classical façade, as inflation has made the meticulous detail of classical styles too expensive. [ 8 ]
Partially or completely covering a building can mask its original character, detail, and ornamentation, or the slipcover can even obliterate the original. [ 5 ]
Due to the sheer size of larger commercial structures, most of which were occupied by offices relying on natural light and ventilation, it was not practical or common to cover their windows, as was sometimes done on smaller buildings. [ 5 ]
Slipcovers are commonly used on storefronts. As older, traditional buildings can be quite large, several different slipcovers may be used for the various storefronts at the street level of a single older building. [ 5 ]
Slipcovers can be erected over buildings during renovation periods. [ 9 ] The alterations to buildings to add slipcovers can be 'radical but reversible', as the slipcover can be removed at any time. [ 9 ] Slipcovers create a division from the interior space and the slipcover on the primary facade. [ 9 ]
Popular materials used in the construction of slipcovers include
Slipcovers were most often made of aluminium or sheet metal. [ 4 ] The panels used for slipcovering buildings are usually produced in industrial plants. The structural elements are then shipped to the site of use and erected over the existing façade. [ 4 ] It is a complicated and delicate task to preserve modern buildings and those that have changed over time. [ 5 ]
Popular materials such as plaster and marble caused extensive damage to the original facade beneath their installation. [ 3 ]
Older buildings often had permanent exterior walls made of cast iron, brick, stone, and terracotta. [ 3 ] Because these materials were integral to the façade, removing them during a cosmetic update was challenging. The majority of the time, they were simply covered over and, despite some damage, remain intact under the slipcovers. [ 3 ] Some slipcovers have helped preserve the architectural details behind them by concealing them with their mask. [ 3 ] Moreover, it varies in how much the slipcover altered the original appearance of the building.
Some examples of slipcovering include on commercial block buildings; 'the upper facade being stuccoed' and the window area being slipcovered. [ 10 ] Additionally, on some commercial buildings, there may be a 'modern slipcover hiding the upper facade'. [ 10 ] Another example is a 'metal slipcover on the upper facade and part of display windows'. [ 10 ] There may also be instances where there is a 'stone facade treatment below the canopy and metal slipcover above'. [ 10 ] Also, commercial block buildings may have wooden slipcovers with altered storefronts. The building blocks may have been built in 1900 and the facade be rebuilt in the mid-1950s. [ 10 ] | https://en.wikipedia.org/wiki/Slipcover_(architecture) |
Slipform stonemasonry is a method for making a reinforced concrete wall with stone facing in which stones and mortar are built up in courses within reusable slipforms . It is a cross between traditional mortared stone wall and a veneered stone wall. Short forms, up to 60 cm high, are placed on both sides of the wall to serve as a guide for the stone work. The stones are placed inside the forms with the good faces against the form work. Concrete is poured in behind the rocks. Rebar is added for strength, to make a wall that is approximately half reinforced concrete and half stonework . The wall can be faced with stone on one side or both sides. After the concrete sets enough to hold the wall together, the forms are "slipped" up to pour the next level. With slipforms it is easy for a novice to build free-standing stone walls. [ 1 ]
Slipform stonemasonry was developed by New York architect Ernest Flagg in 1920. Flagg built a vertical framework as tall as the wall, then inserted 2x6 or 2x8 planks as forms to guide the stonework. When the masonry work reached the top of a plank, Flagg inserted another one, adding more planks until he reached the top of the wall. Helen and Scott Nearing modified the technique in Vermont in the 1930s, using slipforms that were slipped up the wall. [ 2 ]
| https://en.wikipedia.org/wiki/Slipform_stonemasonry |
A slippery sequence is a small section of codon nucleotide sequences (usually UUUAAAC) that controls the rate and chance of ribosomal frameshifting . A slippery sequence causes a faster ribosomal transfer which in turn can cause the reading ribosome to "slip." This allows a tRNA to shift by 1 base (−1) after it has paired with its anticodon, changing the reading frame. [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] A −1 frameshift triggered by such a sequence is a programmed −1 ribosomal frameshift . It is followed by a spacer region, and an RNA secondary structure. Such sequences are common in virus polyproteins . [ 1 ]
The frameshift occurs due to wobble pairing. The Gibbs free energy of secondary structures downstream give a hint at how often frameshift happens. [ 7 ] Tension on the mRNA molecule also plays a role. [ 8 ] A list of slippery sequences found in animal viruses is available from Huang et al. [ 9 ]
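As an illustration, a short script can scan an RNA string for heptamers of the canonical X XXY YYZ slippery-site form (three identical bases, three identical bases, then any base), a pattern that covers motifs like UUUAAAC and UUUUUUA. The generalized pattern and the example sequences are illustrative assumptions, not a validated slippery-site predictor:

```python
import re

# Overlapping search for heptamers of the form X XXY YYZ.
SLIPPERY = re.compile(r"(?=(([ACGU])\2\2([ACGU])\3\3[ACGU]))")

def find_slippery_sites(rna: str):
    """Return (position, heptamer) pairs for candidate slippery sequences."""
    return [(m.start(), m.group(1)) for m in SLIPPERY.finditer(rna)]

print(find_slippery_sites("CAGUUUAAACGGA"))  # [(3, 'UUUAAAC')]
```

A real predictor would additionally check the downstream spacer and RNA secondary structure mentioned above; this sketch finds only the heptamer itself.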
Slippery sequences that cause a 2-base slip (−2 frameshift) have been constructed out of the HIV UUUUUUA sequence. [ 8 ] | https://en.wikipedia.org/wiki/Slippery_sequence |
In mathematics , the slope or gradient of a line is a number that describes the direction of the line on a plane . [ 1 ] Often denoted by the letter m , slope is calculated as the ratio of the vertical change to the horizontal change ("rise over run") between two distinct points on the line, giving the same number for any choice of points.
The line may be physical – as set by a road surveyor , pictorial as in a diagram of a road or roof, or abstract .
An application of the mathematical concept is found in the grade or gradient in geography and civil engineering .
The steepness , incline, or grade of a line is the absolute value of its slope: greater absolute value indicates a steeper line. The line trend is defined as follows:
Special directions are:
If two points of a road have altitudes y 1 and y 2 , the rise is the difference ( y 2 − y 1 ) = Δ y . Neglecting the Earth's curvature , if the two points have horizontal distance x 1 and x 2 from a fixed point, the run is ( x 2 − x 1 ) = Δ x . The slope between the two points is the difference ratio :

m = Δ y Δ x = y 2 − y 1 x 2 − x 1 {\displaystyle m={\frac {\Delta y}{\Delta x}}={\frac {y_{2}-y_{1}}{x_{2}-x_{1}}}}
Through trigonometry , the slope m of a line is related to its angle of inclination θ by the tangent function

m = tan ⁡ θ {\displaystyle m=\tan \theta }
Thus, a 45° rising line has slope m = +1, and a 45° falling line has slope m = −1.
Generalizing this, differential calculus defines the slope of a plane curve at a point as the slope of its tangent line at that point. When the curve is approximated by a series of points, the slope of the curve may be approximated by the slope of the secant line between two nearby points. When the curve is given as the graph of an algebraic expression , calculus gives formulas for the slope at each point. Slope is thus one of the central ideas of calculus and its applications to design.
There seems to be no clear answer as to why the letter m is used for slope, but it first appears in English in O'Brien (1844) [ 2 ] who introduced the equation of a line as " y = mx + b " , and it can also be found in Todhunter (1888) [ 3 ] who wrote " y = mx + c ". [ 4 ]
The slope of a line in the plane containing the x and y axes is generally represented by the letter m , [ 5 ] and is defined as the change in the y coordinate divided by the corresponding change in the x coordinate, between two distinct points on the line. This is described by the following equation:

m = Δ y Δ x {\displaystyle m={\frac {\Delta y}{\Delta x}}}
(The Greek letter delta , Δ, is commonly used in mathematics to mean "difference" or "change".)
Given two points ( x 1 , y 1 ) {\displaystyle (x_{1},y_{1})} and ( x 2 , y 2 ) {\displaystyle (x_{2},y_{2})} , the change in x {\displaystyle x} from one to the other is x 2 − x 1 {\displaystyle x_{2}-x_{1}} ( run ), while the change in y {\displaystyle y} is y 2 − y 1 {\displaystyle y_{2}-y_{1}} ( rise ). Substituting both quantities into the above equation generates the formula:

m = y 2 − y 1 x 2 − x 1 {\displaystyle m={\frac {y_{2}-y_{1}}{x_{2}-x_{1}}}}
The formula fails for a vertical line, parallel to the y {\displaystyle y} axis (see Division by zero ), where the slope can be taken as infinite , so the slope of a vertical line is considered undefined.
Suppose a line runs through two points: P = (1, 2) and Q = (13, 8). By dividing the difference in y {\displaystyle y} -coordinates by the difference in x {\displaystyle x} -coordinates, one can obtain the slope of the line:

m = 8 − 2 13 − 1 = 6 12 = 1 2 {\displaystyle m={\frac {8-2}{13-1}}={\frac {6}{12}}={\frac {1}{2}}}
As another example, consider a line which runs through the points (4, 15) and (3, 21). Then, the slope of the line is

m = 21 − 15 3 − 4 = 6 − 1 = − 6 {\displaystyle m={\frac {21-15}{3-4}}={\frac {6}{-1}}=-6}
For example, consider a line running through points (2, 8) and (3, 20). This line has a slope, m , of

m = 20 − 8 3 − 2 = 12 {\displaystyle m={\frac {20-8}{3-2}}=12}

One can then write the line's equation, in point-slope form:

y − 8 = 12 ( x − 2 ) {\displaystyle y-8=12(x-2)}

or:

y = 12 x − 16 {\displaystyle y=12x-16}

The angle θ between −90° and 90° that this line makes with the x -axis is

θ = arctan ⁡ ( 12 ) ≈ 85.2 ∘ {\displaystyle \theta =\arctan(12)\approx 85.2^{\circ }}
Consider the two lines: y = −3 x + 1 and y = −3 x − 2 . Both lines have slope m = −3 . They are not the same line. So they are parallel lines.
Consider the two lines y = −3 x + 1 and y = x / 3 − 2 . The slope of the first line is m 1 = −3 . The slope of the second line is m 2 = 1 / 3 . The product of these two slopes is −1. So these two lines are perpendicular.
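The parallel and perpendicular tests above can be sketched numerically. The tolerance-based comparison is an implementation choice for floating point, not part of the mathematical definition:

```python
def slope(p, q):
    """Slope m = (y2 - y1) / (x2 - x1) between two points p and q."""
    (x1, y1), (x2, y2) = p, q
    return (y2 - y1) / (x2 - x1)

def are_parallel(m1, m2, tol=1e-12):
    """Parallel lines have equal slopes."""
    return abs(m1 - m2) < tol

def are_perpendicular(m1, m2, tol=1e-12):
    """Perpendicular (non-vertical) lines have slopes whose product is -1."""
    return abs(m1 * m2 + 1.0) < tol

m1 = slope((0, 1), (1, -2))   # y = -3x + 1  ->  m = -3
m2 = slope((0, -2), (3, -1))  # y = x/3 - 2  ->  m = 1/3
print(are_perpendicular(m1, m2))  # True
```
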
In statistics , the gradient of the least-squares regression best-fitting line for a given sample of data may be written as:

m = r s y s x {\displaystyle m=r{\frac {s_{y}}{s_{x}}}}

This quantity m is called the regression slope for the line y = m x + c {\displaystyle y=mx+c} . The quantity r {\displaystyle r} is Pearson's correlation coefficient , s y {\displaystyle s_{y}} is the standard deviation of the y-values and s x {\displaystyle s_{x}} is the standard deviation of the x-values. This may also be written as a ratio of covariances : [ 6 ]

m = cov ⁡ ( x , y ) cov ⁡ ( x , x ) {\displaystyle m={\frac {\operatorname {cov} (x,y)}{\operatorname {cov} (x,x)}}}
The concept of a slope is central to differential calculus . For non-linear functions, the rate of change varies along the curve. The derivative of the function at a point is the slope of the line tangent to the curve at the point and is thus equal to the rate of change of the function at that point.
If we let Δ x and Δ y be the distances (along the x and y axes, respectively) between two points on a curve, then the slope given by the above definition,
is the slope of a secant line to the curve. For a line, the secant between any two points is the line itself, but this is not the case for any other type of curve.
For example, the slope of the secant intersecting y = x 2 at (0,0) and (3,9) is 3. (The slope of the tangent at x = 3 ⁄ 2 is also 3, a consequence of the mean value theorem .)
By moving the two points closer together so that Δ y and Δ x decrease, the secant line more closely approximates a tangent line to the curve, and as such the slope of the secant approaches that of the tangent. Using differential calculus , we can determine the limit , or the value that Δ y /Δ x approaches as Δ y and Δ x get closer to zero ; it follows that this limit is the exact slope of the tangent. If y is dependent on x , then it is sufficient to take the limit where only Δ x approaches zero. Therefore, the slope of the tangent is the limit of Δ y /Δ x as Δ x approaches zero, or d y /d x . We call this limit the derivative .
The value of the derivative at a specific point on the function provides us with the slope of the tangent at that precise location. For example, let y = x 2 . A point on this function is (−2,4). The derivative of this function is d y ⁄ d x = 2 x . So the slope of the line tangent to y at (−2,4) is 2 ⋅ (−2) = −4 . The equation of this tangent line is: y − 4 = (−4)( x − (−2)) or y = −4 x − 4 .
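The secant-to-tangent idea can be sketched numerically, reusing y = x² and the point (−2, 4) from the text: as the second point approaches x = −2, the secant slope approaches the tangent slope −4.

```python
def secant_slope(f, x1, x2):
    """Slope of the secant line through (x1, f(x1)) and (x2, f(x2))."""
    return (f(x2) - f(x1)) / (x2 - x1)

f = lambda x: x ** 2

# The secant from the text's example, through (0, 0) and (3, 9):
print(secant_slope(f, 0.0, 3.0))  # 3.0

# Shrinking the interval around x = -2 approaches the tangent slope dy/dx = -4:
for h in (1.0, 0.1, 0.001):
    print(secant_slope(f, -2.0, -2.0 + h))  # -3.0, then -3.9, then ~ -3.999
```
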
An extension of the idea of angle follows from the difference of slopes. Consider the shear mapping
Then ( 1 , 0 ) {\displaystyle (1,0)} is mapped to ( 1 , v ) {\displaystyle (1,v)} . The slope of ( 1 , 0 ) {\displaystyle (1,0)} is zero and the slope of ( 1 , v ) {\displaystyle (1,v)} is v {\displaystyle v} . The shear mapping added a slope of v {\displaystyle v} . For two points on { ( 1 , y ) : y ∈ R } {\displaystyle \{(1,y):y\in \mathbb {R} \}} with slopes m {\displaystyle m} and n {\displaystyle n} , the image
has slope increased by v {\displaystyle v} , but the difference n − m {\displaystyle n-m} of slopes is the same before and after the shear. This invariance of slope differences makes slope an angular invariant measure , on a par with circular angle (invariant under rotation) and hyperbolic angle, with invariance group of squeeze mappings . [ 7 ] [ 8 ]
The slope of a roof, traditionally and commonly called the roof pitch , in carpentry and architecture in the US is commonly described in terms of integer fractions of one foot (geometric tangent, rise over run), a legacy of British imperial measure. Other units are in use in other locales, with similar conventions. For details, see roof pitch .
There are two common ways to describe the steepness of a road or railroad . One is by the angle between 0° and 90° (in degrees), and the other is by the slope in a percentage. See also steep grade railway and rack railway .
The formulae for converting a slope given as a percentage into an angle in degrees and vice versa are:

angle = arctan ⁡ ( slope 100 % ) {\displaystyle {\text{angle}}=\arctan \left({\frac {\text{slope}}{100\%}}\right)}

and

slope = tan ⁡ ( angle ) × 100 % {\displaystyle {\text{slope}}=\tan({\text{angle}})\times 100\%}

where angle is in degrees and the trigonometric functions operate in degrees. For example, a slope of 100 % or 1000 ‰ is an angle of 45°.
A third way is to give one unit of rise in, say, 10, 20, 50 or 100 horizontal units, e.g. 1:10, 1:20, 1:50 or 1:100 (or "1 in 10", "1 in 20", etc.). 1:10 is steeper than 1:20. For example, a steepness of 20% means 1:5, or an incline with an angle of 11.3°.
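The percentage–angle conversions can be sketched as degree-based wrappers around the standard radian trigonometric functions:

```python
import math

def grade_to_degrees(percent: float) -> float:
    """angle = arctan(slope / 100%), returned in degrees."""
    return math.degrees(math.atan(percent / 100.0))

def degrees_to_grade(angle_deg: float) -> float:
    """slope = tan(angle) * 100%, with the angle given in degrees."""
    return math.tan(math.radians(angle_deg)) * 100.0

print(grade_to_degrees(100.0))  # a 100% grade is a 45 degree incline
print(grade_to_degrees(20.0))   # ~ 11.3 degrees, the "1 in 5" example above
```
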
Roads and railways have both longitudinal slopes and cross slopes.
The concept of a slope or gradient is also used as a basis for developing other applications in mathematics: | https://en.wikipedia.org/wiki/Slope |
A slope field (also called a direction field [ 1 ] ) is a graphical representation of the solutions to a first-order differential equation [ 2 ] of a scalar function. Solutions to a slope field are functions drawn as solid curves. A slope field shows the slope of a differential equation at certain vertical and horizontal intervals on the x-y plane, and can be used to determine the approximate tangent slope at a point on a curve, where the curve is some solution to the differential equation.
The slope field can be defined for the following type of differential equations
which can be interpreted geometrically as giving the slope of the tangent to the graph of the differential equation's solution ( integral curve ) at each point ( x , y ) as a function of the point coordinates. [ 3 ]
It can be viewed as a creative way to plot a real-valued function of two real variables f ( x , y ) {\displaystyle f(x,y)} as a planar picture. Specifically, for a given pair x , y {\displaystyle x,y} , a vector with the components [ 1 , f ( x , y ) ] {\displaystyle [1,f(x,y)]} is drawn at the point x , y {\displaystyle x,y} on the x , y {\displaystyle x,y} -plane. Sometimes, the vector [ 1 , f ( x , y ) ] {\displaystyle [1,f(x,y)]} is normalized to make the plot look better to the human eye. A set of pairs x , y {\displaystyle x,y} making a rectangular grid is typically used for the drawing.
An isocline (a series of lines with the same slope) is often used to supplement the slope field. In an equation of the form y ′ = f ( x , y ) {\displaystyle y'=f(x,y)} , the isocline is a line in the x , y {\displaystyle x,y} -plane obtained by setting f ( x , y ) {\displaystyle f(x,y)} equal to a constant.
Given a system of differential equations,
the slope field is an array of slope marks in the phase space (in any number of dimensions depending on the number of relevant variables; for example, two in the case of a first-order linear ODE , as seen to the right). Each slope mark is centered at a point ( t , x 1 , x 2 , … , x n ) {\displaystyle (t,x_{1},x_{2},\ldots ,x_{n})} and is parallel to the vector
The number, position, and length of the slope marks can be arbitrary. The positions are usually chosen such that the points ( t , x 1 , x 2 , … , x n ) {\displaystyle (t,x_{1},x_{2},\ldots ,x_{n})} make a uniform grid. The standard case, described above, represents n = 1 {\displaystyle n=1} . The general case of the slope field for systems of differential equations is not easy to visualize for n > 2 {\displaystyle n>2} .
With computers, complicated slope fields can be made quickly and without tedium, so a practical application that has emerged only recently is to use them to get a feel for what a solution should look like before an explicit general solution is sought. Of course, computers can also simply solve for one, if it exists.
If there is no explicit general solution, computers can use slope fields (even if they are not shown) to find graphical solutions numerically. Examples of such routines are Euler's method , or better, the Runge–Kutta methods .
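A minimal sketch of Euler's method, which advances a solution by following the slope field of y′ = f(x, y) in small steps; the test equation y′ = y is an illustrative choice:

```python
def euler(f, x0, y0, x_end, steps):
    """Advance y from (x0, y0) to x_end by following the local slope
    f(x, y) over `steps` equal x-increments."""
    h = (x_end - x0) / steps
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)   # follow the slope mark at the current point
        x += h
    return y

# y' = y with y(0) = 1 has exact solution e^x, so at x = 1 the result
# should approach e = 2.71828... as the step count grows:
print(euler(lambda x, y: y, 0.0, 1.0, 1.0, 1000))  # ~ 2.7169
```
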
Different software packages can plot slope fields. | https://en.wikipedia.org/wiki/Slope_field |
A slope house or souterrain house is a house with soil or rock completely covering one wall and part of two more on the bottom floor. The house may have two entries depending on the ground level.
The main reason for building a slope house is the landscape: for example, the land where the house is to be built lies on a hill or on the slope of a mountain. Unlike an earth-sheltered building , the primary purpose is not to use the thermal mass of the surrounding earth to insulate the house. Sometimes the soil is excavated to make the floor area the same on both the upper and lower floors; the soil can also be partly excavated, making the area of the lower floor smaller. [ 1 ] When a house is built into a slope, the advantage in open country is the view of a mountain, lake or meadow. [ 2 ] | https://en.wikipedia.org/wiki/Slope_house |
A slosh baffle is a device used to dampen the adverse effects of liquid slosh in a tank. Slosh baffles have been implemented in a variety of applications including tanker trucks , and liquid rockets , although any moving tank containing liquid may employ them. [ 1 ]
Baffle rings are rigid rings placed within the inside of a tank to retard the flow of liquid between sections. The location and orifice size of the rings yield varying performance for a given application. [ 2 ]
Baffle blocks
| https://en.wikipedia.org/wiki/Slosh_baffle |
In fluid dynamics , slosh refers to the movement of liquid inside another object (which is, typically, also undergoing motion).
Strictly speaking, the liquid must have a free surface to constitute a slosh dynamics problem, where the dynamics of the liquid can interact with the container to alter the system dynamics significantly. [ 1 ] Important examples include propellant slosh in spacecraft tanks and rockets (especially upper stages), and the free surface effect (cargo slosh) in ships and trucks transporting liquids (for example oil and gasoline).
However, it has become common to refer to liquid motion in a completely filled tank, i.e. without a free surface, as "fuel slosh".
Such motion is characterized by " inertial waves " and can be an important effect in spinning spacecraft dynamics. Extensive mathematical and empirical relationships have been derived to describe liquid slosh. [ 2 ] [ 3 ] These types of analyses are typically undertaken using computational fluid dynamics and finite element methods to solve the fluid-structure interaction problem, especially if the solid container is flexible. Relevant fluid dynamics non-dimensional parameters include the Bond number , the Weber number , and the Reynolds number .
Slosh is an important effect for spacecraft, [ 4 ] ships, [ 3 ] some land vehicles and some aircraft . Slosh was a factor in the Falcon 1 second test flight anomaly, and has been implicated in various other spacecraft anomalies, including a near-disaster [ 5 ] with the Near Earth Asteroid Rendezvous ( NEAR Shoemaker ) satellite.
Liquid slosh in microgravity [ 6 ] [ 7 ] is relevant to spacecraft, most commonly Earth-orbiting satellites , and must take account of liquid surface tension which can alter the shape (and thus the eigenvalues ) of the liquid slug. Typically, a large fraction of the mass of a satellite is liquid propellant at/near Beginning of Life (BOL), and slosh can adversely affect satellite performance in a number of ways. For example, propellant slosh can introduce uncertainty in spacecraft attitude (pointing) which is often called jitter . Similar phenomena can cause pogo oscillation and can result in structural failure of a space vehicle.
Another example is problematic interaction with the spacecraft's Attitude Control System (ACS), especially for spinning satellites [ 8 ] which can suffer resonance between slosh and nutation , or adverse changes to the rotational inertia . Because of these types of risk , in the 1960s the National Aeronautics and Space Administration (NASA) extensively studied [ 9 ] liquid slosh in spacecraft tanks, and in the 1990s NASA undertook the Middeck 0-Gravity Dynamics Experiment [ 10 ] on the Space Shuttle . The European Space Agency has advanced these investigations [ 11 ] [ 12 ] [ 13 ] [ 14 ] with the launch of SLOSHSAT . Most spinning spacecraft since 1980 have been tested at the Applied Dynamics Laboratories drop tower using sub-scale models. [ 15 ] Extensive contributions have also been made [ 16 ] by the Southwest Research Institute , but research is widespread [ 17 ] in academia and industry.
Research is continuing into slosh effects on in-space propellant depots . In October 2009, the United States Air Force and United Launch Alliance (ULA) performed an experimental on-orbit demonstration on a modified Centaur upper stage on the DMSP-18 satellite launch in order to improve "understanding of propellant settling and slosh". The light weight of DMSP-18 left 12,000 pounds (5,400 kg) of LO 2 and LH 2 propellant, 28% of Centaur's capacity, remaining for the on-orbit tests. The post-spacecraft mission extension ran 2.4 hours before the planned deorbit burn was executed. [ 18 ]
NASA's Launch Services Program is working on two ongoing slosh fluid dynamics experiments with partners: CRYOTE and SPHERES -Slosh. [ 19 ] ULA has additional small-scale demonstrations of cryogenic fluid management planned with project CRYOTE in 2012–2014, [ 20 ] leading to a ULA large-scale cryo-sat propellant depot test under the NASA flagship technology demonstrations program in 2015. [ 20 ] SPHERES-Slosh, with the Florida Institute of Technology and the Massachusetts Institute of Technology , will examine how liquids move around inside containers in microgravity using the SPHERES Testbed on the International Space Station .
Liquid sloshing strongly and adversely influences the directional dynamics and safety performance of highway tank vehicles. [ 21 ] Hydrodynamic forces and moments arising from liquid cargo oscillations in the tank under steering and/or braking maneuvers reduce the stability limit and controllability of partially-filled tank vehicles . [ 22 ] [ 23 ] [ 24 ] Anti-slosh devices such as baffles are widely used to limit the adverse effects of liquid slosh on the directional performance and stability of tank vehicles . [ 25 ] Since tankers often carry dangerous liquids such as ammonia, gasoline, and fuel oils, the stability of partially-filled liquid cargo vehicles is very important. Optimization and slosh-reduction studies of tank shapes such as elliptical, rectangular, modified oval, and generic cross-sections have been performed at different filling levels using numerical, analytical, and analogical analyses. Most of these studies concentrate on the effects of baffles on sloshing, while the influence of cross-section is completely ignored. [ 26 ]
The Bloodhound LSR 1,000 mph project car utilizes a liquid-fuelled rocket that requires a specially-baffled oxidizer tank to prevent directional instability, rocket thrust variations and even oxidizer tank damage. [ 27 ]
Sloshing or shifting cargo , water ballast , or other liquid (e.g., from leaks or fire fighting) can cause disastrous capsizing in ships due to free surface effect ; this can also affect trucks and aircraft.
The effect of slosh is used to limit the bounce of a roller hockey ball. Water slosh can significantly reduce the rebound height of a ball [ 28 ] but some amounts of liquid seem to lead to a resonance effect. Many of the balls for roller hockey commonly available contain water to reduce the bounce height. | https://en.wikipedia.org/wiki/Slosh_dynamics |
The sloshing bucket model of evolution is a theory in evolutionary biology that describes how environmental disturbances of varying magnitude affect the species present. [ 1 ] [ 2 ] [ 3 ] The theory emphasizes the causal relationship between environmental factors and the genealogical systems they affect, providing an overarching view of the relationship between the variety of biological systems.
This theory was developed by Niles Eldredge , a U.S. biologist and paleontologist , [ 4 ] and published in the journal 'Evolutionary Dynamics: Exploring the Interplay of Selection, Accident, Neutrality and Function ' where Eldredge introduces his sloshing bucket model in the article titled: 'The Sloshing Bucket: How the Physical Realm Controls Evolution'. [ 5 ]
The sloshing bucket model uses the imagery of water representing species sloshing back and forth in the environment, represented by the bucket. Disturbances in the environment are represented by the movement of the bucket, creating the sloshes. Starting off, small sloshes/disturbances do not spill any water; stasis of the current species are dominant. However, as physical disturbances grow in magnitude and size, the sloshes will result in large amounts of water spilling out, representing the extinction and speciation of the organisms present.
An example Eldredge uses is the dinosaurs, which were the prevalent life form on Earth for 150 million years, [ 6 ] surviving smaller sloshes in the bucket without much evolutionary change. It was not until the asteroid impact at the end of the Mesozoic that the dinosaurs became extinct, and only after a lag of five to seven million years did mammals begin to speciate and diversify. [ 7 ] [ 8 ]
Built directly on his previous paper titled 'Punctuated equilibria', [ 9 ] the stasis pattern of life represents periods of 'dynamic, non-regular oscillation' [ 5 ] of intra-population variation. Stasis does not mean that a species' collective genome is stable; it is instead in constant flux and variation, just in non-specific directions. This results in similar phenotypes between organisms.
The fossil record further supports this idea: with little or negligible disruption, no evolution or adaptive change is detectable in the fossil record. It is not until larger-scale and larger-magnitude ecological disruptions occur that the fossil record changes drastically. [ 10 ]
This stasis arises because genetic shifts across an entire species require a selection force that spans all of its members. Environmental factors, though widespread, are out-paced by the movement of species, which find recognisable habitats to resettle in and so remain unchanged. [ 11 ] Called habitat tracking, this idea states that species are able to track habitats better than natural selection can follow the changing environment.
Even in exceptional circumstances such as drastic climate change , variation in many factors such as initial genomes, mutational history and selection pressures across the species means a whole species is unlikely to be headed down a specific evolutionary direction.
Overall, stasis is the dominant pattern in life and generally results in no net evolutionary change.
While stasis is the dominant pattern in life, adaptive change and the resulting creation of new species arise in short bursts, 'punctuating' the equilibrium set by stasis. The discontinuity of species arises not purely from accumulating genetic changes, but in conjunction with reproductive isolation . [ 5 ]
This form of allopatric speciation has many plausible models, of which Eldredge describes one. Optimal habitat locations are generally at the center of a species' range, with the outer limits being only marginally useful. Sections of the species in these peripheral zones may adapt to the differing ecosystem, turning the fringe habitat into the optimal area for the newly isolated population. [ 12 ]
Amalgamation of the pattern 1 (stasis) and pattern 2 (adaptive change) creates the final pattern signaled by large scale change in species caused by significant enough changes in the environment on a global scale. When the increasing environmental stress reaches a certain threshold, it causes widespread extinction and speciation, alongside migration. [ 5 ] This pattern includes whole regions, encompassing all species-level taxa, affecting them all equally. However, each species responds differently. Some species survive unchanged while others become extinct or speciate.
There are multiple documented phenomena that corroborate this pattern very well. Carlton E. Brett showed a 'coordinated stasis' through fossil patterns, demonstrating both a period of stasis in which 70%–85% of species persist throughout a period and, after a large-scale regional event, only around 20% making it through to the next period of stasis. [ 13 ]
This pattern has also been labelled with the term 'turnover pulse' by Elisabeth Vrba. [ 14 ] She documented a gradual drop in temperature in what is now South Africa during the Pliocene epoch, which initially had little effect. Then, after half a million years, it suddenly caused an abrupt environmental change: from damp woodlands to savannahs. The same pattern of stasis punctuated by speciation and change occurred here.
Eldredge suggests that these mass extinction events 'rather than driving speciation, simply increases the probability of survival of fledgling species'. [ 5 ]
The double hierarchy of genealogy and ecology is needed due to the dual nature of organisms. All organisms do two main things: they exist by interacting with the environment to gain energy, and they reproduce. [ 14 ] These two distinct actions each exist within their own hierarchy, but are tied together at the organism level through natural selection and variation.
The genealogical hierarchy exists as a consequence of the spatial distribution of reproduction in species. The levels within the hierarchy ascend with increasing size and geographic range, and each is subjected to corresponding factors in the ecological hierarchy. [ 8 ]
The lowest level in the genealogical hierarchy is the organism, specifically in its reproductive sense. These organisms participate in the reproduction of the overall species. The next level up in the hierarchy is the ' deme ', the interbreeding local population of a species. Demes can be thought of as specific regional variations of the species that interbreed. The next and second-highest level in the hierarchy is the species . The final and highest level in the genealogical hierarchy comprises monophyletic taxa , which all descend from a common ancestor.
The ecological hierarchy similarly starts at the lowest level with the organism, though in this case focusing on its economic pursuit of survival. These organisms either compete, cooperate, or are neutral towards each other in their survival. The next level consists of 'avatars', which differ from demes: avatars are local interacting conspecifics focused on survival rather than reproduction. [ 5 ] The combination of and interactions between avatars then make up the next level: the local ecosystem . The topmost and final layer in the hierarchy involves regional ecosystems, which are collections of local ecosystems.
By integrating the above dual hierarchy system along with the established three patterns of evolutionary life, the sloshing bucket model of evolution can be fully realised. The spatio-temporal scale of environmental or physical disturbances can be looked at through certain levels within the hierarchy, depending on their magnitude and effect. [ 8 ]
First Level: Short term effects within the deme or avatar level. There is no net evolutionary change, resulting in stasis.
Second Level: Mid term effects, localised in specific regions. Stabilising selection occurs as adjacent demes or species fill in lost components, re-establishing the same previous hierarchy. [ 15 ]
Third Level: Large-scale environmental changes. Lasting from tens of years to a thousand or more years, these changes are slow enough to allow species to migrate to more optimal environments. Consequently, due to habitat tracking, there are some changes in the lower levels of the hierarchy, but overall the species prevail and stasis is upheld.
Fourth Level: Regional disturbances. Where changes are too rapid for habitat tracking, the third pattern of life occurs. At this threshold, extinction and speciation are triggered on a large scale across unrelated species.
Fifth Level: Global disturbances. These are essentially mass extinction events which completely overhaul the existing species. Extinction and speciation are common and widespread throughout the world. Examples include the End-Permian extinction event. [ 16 ]
The sloshing bucket model of evolution has been well received by some evolutionary biologists. Fellow philosopher and biologist Telmo Pievani states that, 'hierarchy theory is able to cover all the levels that make the evolutionary game so complex, from genes to organisms to species and the largest ecological scenarios'. [ 17 ] Palaeontologists Bruce Lieberman and Elisabeth Vrba have stated that the sloshing bucket model contained in hierarchy theory 'play(s) a prominent role in shaping the major features of diversity and biological organization'. [ 18 ]
There have also been some criticisms of the model. One of the larger issues with the sloshing bucket model of evolution is the missing mechanism of ecological inheritance . As this form of inheritance does not implicate reproduction or economic survival, it does not fit neatly into either of the two hierarchies, leaving a conceptual rift in the theory. [ 19 ] Similarly, different methods of inheritance such as epigenetic and symbolic that are present in other evolutionary biology theories disrupt the hierarchical structure of the sloshing bucket model. Additionally, the more externalist viewpoint adopted by the sloshing bucket model, though distinct from adaptationism , presents another difficult concept for some biologists to agree with. [ 19 ]
Evolutionary biologist Stephen Jay Gould further criticises the arbitrary and exclusively selective definition of an individual and the subsequent groups that then follow from these individuals. [ 20 ] Biologist David Morrison also points out the confounding nature of network interactions upon strict hierarchies. [ 21 ] Both ecological and genealogical systems are not strictly hierarchical, and he questions 'to what extent the hierarchies dominate the network connections.' [ 21 ]
The sloshing bucket model of evolution has been applied by Daniel Brooks in explaining the evolutionary biology of emerging infectious diseases (EID). [ 22 ] Published in 2007, it uses the sloshing bucket model to explain the driving causes of these EID, stating that there are 'evolutionary accidents waiting to happen, requiring only the catalyst of climate change . . .' [ 22 ] As regional ecological disturbances grow in frequency, episodic bursts of newly mutated and potentially more infectious and deadly diseases will also become more frequent.
Furthermore, the genealogical hierarchy specifically has been observed to serve as a foundation for the field of evolutionary psychology. [ 23 ] This is reasoned on the grounds that social systems are a product of biology, which arises within the genealogical hierarchy. | https://en.wikipedia.org/wiki/Sloshing_bucket_model_of_evolution |
Slot-die coating is a coating technique for the application of solution , slurry , hot-melt , or extruded thin films onto typically flat substrates such as glass, metal, paper, fabric, plastic, or metal foils. The process was first developed for the industrial production of photographic papers in the 1950s. [ 1 ] It has since become relevant in numerous commercial processes and nanomaterials related research fields. [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ]
Slot-die coating produces thin films via solution processing. [ 8 ] The desired coating material is typically dissolved or suspended into a precursor solution or slurry (sometimes referred to as "ink") and delivered onto the surface of the substrate through a precise coating head known as a slot-die. The slot-die has a high aspect ratio outlet controlling the final delivery of the coating liquid onto the substrate. This results in the continuous production of a wide layer of coated material on the substrate, with adjustable width depending on the dimensions of the slot-die outlet. By closely controlling the rate of solution deposition and the relative speed of the substrate, slot-die coating affords thin material coatings with easily controllable thicknesses in the range of 10 nanometers to hundreds of micrometers after evaporation of the precursor solvent. [ 9 ]
Commonly cited benefits of the slot-die coating process include its pre-metered thickness control, non-contact coating mechanism, high material efficiency , scalability of coating areas and throughput speeds, and roll-to-roll compatibility. The process also allows for a wide working range of layer thickness and precursor solution properties such as material choice, viscosity , and solids content. [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] Commonly cited drawbacks of the slot-die coating process include its comparatively high complexity of apparatus and process optimization relative to similar coating techniques such as blade coating and spin coating . Furthermore, slot-die coating falls into the category of coating processes rather than printing processes. It is therefore better suited for coating of uniform, thin material layers rather than printing or consecutive buildup of complex images and patterns.
Slot-die coating equipment is available in a variety of configurations and form factors. However, the vast majority of slot-die processes are driven by a similar set of common core components. These include:
Depending on the complexity of the coating apparatus, a slot-die coating system may include additional modules for e.g. precise positioning of the slot-die over the substrate, particulate filtering of the coating solution, pre-treatment of the substrate (e.g. cleaning and surface energy modification), and post-processing steps (e.g. drying , curing , calendering , printing, slitting , etc.). [ 7 ] [ 15 ]
Slot-die coating was originally developed for industrial use and remains primarily applied in production-scale settings. [ 11 ] This is due to its potential for large-scale production of high-value thin films and coatings at a low operating cost via roll-to-roll and sheet-to-sheet line integration. Such roll-to-roll and sheet-to-sheet coating systems are similar in their intent for large-scale production, but are distinguished from each other by the physical rigidity of the substrates they handle. Roll-to-roll systems are designed to coat and handle flexible substrate rolls such as paper, fabric, plastic or metal foils. Conversely, sheet-to-sheet systems are designed to coat and handle rigid substrate sheets such as glass, metal, or plexiglass. [ 16 ] Combinations of these systems, such as roll-to-sheet lines, are also possible.
Both industrial roll-to-roll and sheet-to-sheet systems typically feature slot-dies in the range of 300 to 1000 mm in coating width, though slot-dies up to 4000 mm wide have been reported. Commercial slot-die systems are claimed to operate at speeds up to several hundred square meters per minute, [ 14 ] with roll-to-roll systems typically offering higher throughput due to decreased complexity of substrate handling. [ 17 ] Such large-scale coating systems can be driven by a variety of industrial pumping solutions including gear pumps , progressive cavity pumps , pressure pots, and diaphragm pumps depending on process requirements. [ 18 ]
To handle flexible substrates, roll-to-roll lines typically use a series of rollers to continually drive the substrate through the various stations of the process line. The bare substrate originates at an "unwind" roll at the start of the line and is collected at a "rewind" roll at the end. Hence, the substrate is often referred to as a "web" as it winds its way through the process line from start to finish. When a substrate roll has been fully processed, it is collected from the rewind roll, allowing for a new, bare substrate roll to be mounted onto the unwind roller to begin the process again. [ 16 ] Slot-die coating often comprises just a single step of an overall roll-to-roll process. The slot-die is typically mounted in a fixed position on the roll-to-roll line, dispensing coating fluid onto the web in a continuous or patch-based manner as the substrate passes by. Because the substrate web spans all stations of the roll-to-roll line simultaneously, the individual processes at these stations are highly coupled and must be optimized to work in tandem with each other at the same web speed.
The rigid substrates employed in sheet-to-sheet systems are not compatible with the roll-to-roll processing method. Sheet-to-sheet systems rely instead on a rack-based system to transport individual sheets between the various stations of a process line, where transfer between stations may occur in a manual or automated manner. Sheet-to-sheet lines are therefore more analogous to a series of semi-coupled batch operations rather than a single continuous process. This allows for easier optimization of individual unit operations at the expense of potentially increased handling complexity and reduced throughput. [ 16 ] [ 17 ] Furthermore, the need to start and stop the slot-die coating process for each substrate sheet places higher tolerance requirements on the leading and trailing edge uniformity of the slot-die step. In sheet-to-sheet lines, the slot-die may be fixed in place as the substrate passes underneath on a moving support bed (sometimes referred to as a "chuck"). Alternatively, the slot-die may move during coating while the substrate remains fixed in place.
Miniaturized slot-die tools have become increasingly available to support the development of new roll-to-roll compatible processes prior to the requirement of full pilot- and production-scale equipment. These tools feature similar core components and functionality compared to larger slot-die coating lines, but are designed to integrate into pre-production research environments. This is typically achieved by e.g. accepting standard A4 sized substrate sheets rather than full substrate rolls, using syringe pumps rather than industrial pumping solutions, and relying upon hot-plate heating rather than large industrial drying ovens, which can otherwise reach lengths of several meters to provide suitable residence times for drying. [ 19 ]
Because the slot-die coating process can be readily scaled between large and small areas by adjusting the size of the slot-die and throughput speed, processes developed on lab-scale tools are considered to be reasonably scalable to industrial roll-to-roll and sheet-to-sheet coating lines. This has led to significant interest in slot-die coating as a method of scaling new thin film materials and devices , [ 20 ] [ 21 ] particularly in the sphere of thin film solar cell research for e.g. perovskite and organic photovoltaics. [ 2 ] [ 22 ]
Slot-die hardware can be applied in several distinct coating modalities, depending on the requirements of a given process. These include:
The dynamics of proximity coating have been extensively studied and applied over a wide range of scales and applications. [ 25 ] [ 26 ] [ 11 ] [ 27 ] Furthermore, the concepts governing proximity coating are relevant in understanding the behavior of other coating modalities. Proximity coating is therefore considered to be the default configuration for the purposes of this introductory article, though curtain coating and tensioned web over slot die configurations remain highly relevant in industrial manufacturing.
Slot-die coating is a non-contact coating method, in which the slot-die is typically held over the substrate at a height several times higher than the target wet film thickness. [ 23 ] The coating fluid transfers from the slot-die to the substrate via a fluid bridge that spans the air gap between the slot-die lips and substrate surface. This fluid bridge is commonly referred to as the coating meniscus or coating bead. The thickness of the resulting wet coated layer is controlled by tuning the ratio between the applied volumetric pump rate and areal coating rate. Unlike in self-metered coating methods such as blade- and bar coating, the slot-die does not influence the thickness of the wet coated layer via any form of destructive physical contact or scraping. The height of the slot-die therefore does not determine the thickness of the wet coated layer. The height of the slot-die is instead significant in determining the quality of the coated film, as it controls the distance that must be spanned by the meniscus to maintain a stable coating process.
t w e t = Q W × U {\displaystyle t_{wet}={\frac {Q}{W\times U}}}
Slot-die coating operates via a pre-metered liquid coating mechanism. The thickness of the wet coated layer ( t w e t {\displaystyle t_{wet}} ) is therefore significantly determined by the width of coating ( W {\displaystyle W} ), the volumetric pump rate ( Q {\displaystyle Q} ), and the coating speed, or relative speed between the slot-die and the substrate during coating ( U {\displaystyle U} ). [ 28 ] [ 25 ] Increasing the pump rate increases the thickness of the wet layer, while increasing the coating speed or coating width decreases the wet layer thickness. The coating width is typically a fixed value for a given slot-die process. Hence, pump rate and coating speed can be used to calculate, control, and adjust the wet film thickness in a highly predictable manner. However, deviation from this idealized relationship can occur in practice due to non-ideal behavior of materials and process components; for example when using highly viscoelastic fluids, or a sub-optimal process setup where fluid creeps up the slot-die component rather than transferring fully to the substrate.
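As a rough illustration of this pre-metered relationship, the wet film thickness can be computed directly from the pump rate, coating width, and coating speed. The following sketch uses hypothetical values (not taken from any cited process) and assumes the idealized relationship holds, i.e. no fluid creep-up or viscoelastic deviation:

```python
def wet_film_thickness_um(pump_rate_ul_per_s, coating_width_mm, coating_speed_mm_per_s):
    """Pre-metered wet film thickness: t_wet = Q / (W * U).

    Q is the volumetric pump rate in microliters/second (1 uL = 1 mm^3),
    W the coating width in mm, and U the coating speed in mm/s.
    The quotient comes out in mm and is converted to micrometers.
    """
    t_wet_mm = pump_rate_ul_per_s / (coating_width_mm * coating_speed_mm_per_s)
    return t_wet_mm * 1000.0  # mm -> um

# Hypothetical example: 50 uL/s through a 100 mm wide slot-die at 10 mm/s
# gives a wet film of about 50 micrometers.
t_wet = wet_film_thickness_um(50.0, 100.0, 10.0)
```

Note how the inverse relationships described above fall out of the formula: doubling the coating speed or the coating width halves the wet thickness, while doubling the pump rate doubles it.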
t d r y = t w e t × c ρ {\displaystyle t_{dry}=t_{wet}\times {\frac {c}{\rho }}}
The final thickness of the dry layer after solvent evaporation ( t d r y {\displaystyle t_{dry}} ) is further determined by the solids concentration of the precursor solution ( c {\displaystyle c} ) and the volumetric density of the coated material in its final form ( ρ {\displaystyle \rho } ). Increasing the solids content of the precursor solution increases the thickness of the dry layer, while using a denser material results in a thinner dry layer for a given concentration. [ 25 ]
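The dry-thickness relation can be sketched the same way. The values below are again hypothetical, chosen only to show the scale of the wet-to-dry shrinkage for a dilute ink:

```python
def dry_film_thickness(t_wet, solids_concentration, material_density):
    """Dry film thickness: t_dry = t_wet * (c / rho).

    solids_concentration (c) and material_density (rho) must share units
    (e.g. g/mL), so their ratio is dimensionless and t_dry comes out in
    the same length unit as t_wet.
    """
    return t_wet * solids_concentration / material_density

# Hypothetical example: a 50 um wet film of a 2% w/v ink (0.02 g/mL)
# drying to a solid of density 1.0 g/mL leaves roughly a 1 um dry film.
t_dry = dry_film_thickness(50.0, 0.02, 1.0)
```

This illustrates why slot-die coating reaches nanometer-to-micrometer dry thicknesses from much thicker wet layers: the dilution factor c/ρ typically shrinks the film by one to two orders of magnitude on drying.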
As with all solution processed coating methods, the final quality of a thin film produced via slot-die coating depends on a wide array of parameters both intrinsic and external to the slot-die itself. These parameters can be broadly categorized into:
Under ideal conditions, the potential to achieve a defect-free film via slot-die coating is entirely governed by the coating window of a given process. The coating window is a multivariable map of key process parameters, describing the range over which they can be applied together to achieve a defect-free film. Understanding the coating-window behavior of a typical slot-die process enables operators to observe defects in a slot-die coated layer and intuitively determine a course of action for defect resolution. The key process parameters used to define the coating window typically include:
The coating window can be visualized by plotting two such key parameters against each other while assuming the others remain constant. In an initial simple representation, the coating window can be described by plotting the relationship between viable pump rates and coating speeds for a given process. [ 29 ] Excessive pumping or insufficient coating speed results in spilling of the coating liquid outside of the desired coating area, while coating too quickly or pumping insufficiently results in breakup of the meniscus. The pump rate and coating speed can therefore be adjusted to directly compensate for these defects, though changing these parameters also affects wet film thickness via the pre-metered coating mechanism. Implicit in this relationship is the effect of the slot-die height parameter, as this affects the distance over which the meniscus must be stretched while remaining stable during coating. Raising the slot-die higher can thus counteract spilling defects by stretching the meniscus further, while lowering the slot-die can counteract streaking and breakup defects by reducing the gap that the meniscus must breach. Other helpful coating window plots to consider include the relationship between fluid capillary number and slot-die height, [ 30 ] as well as the relationship between pressure across the meniscus and slot-die height. [ 30 ] The former is particularly relevant when considering changes in fluid viscosity and surface tension (i.e. the effect of coating various materials with significantly different rheology ), while the latter is relevant in the context of applying a vacuum box at the upstream face of the meniscus to stabilize the meniscus against breakup.
In reality, the final quality of a slot-die coated film is heavily influenced by a variety of factors beyond the parameter boundaries of the ideal coating window. [ 31 ] Surface energy effects and drying effects are examples of common downstream effects with a significant influence on final film morphology. Sub-optimal matching of surface energy between the substrate and coating fluid can cause dewetting of the liquid film after it has been applied to the substrate, resulting in pinholes or beading of the coated layer. [ 32 ] Sub-optimal drying processes are also often noted to influence film morphology, resulting in increased thickness at the edge of a film caused by the coffee ring effect . [ 33 ] Surface energy and downstream processing must therefore be carefully optimized to maintain the integrity of the slot-die coated layer as it moves through the system, until the final thin film product can be collected.
Slot-die coating is a highly mechanical process in which uniformity of motion and high hardware tolerances are critical to achieving uniform coatings. Mechanical imperfections such as jittery motion in the pump and coating motion systems, poor parallelism between the slot-die and substrate, and external vibrations in the environment can all lead to undesired variations in film thickness and quality. Slot-die coating apparatus and its environment must therefore be suitably specified to meet the needs of a given process and avoid hardware- and environment-derived defects in the coated film.
Slot-die coating was originally developed for the commercial production of photographic films and papers. [ 11 ] In the past several decades it has become a critical process in the production of adhesive films, [ 34 ] flexible packaging, [ 35 ] transdermal and oral pharmaceutical patches, [ 36 ] LCD panels, [ 26 ] multi-layer ceramic capacitors, [ 37 ] lithium-ion batteries [ 38 ] [ 39 ] [ 40 ] and more.
With growing interest in the potential of nanomaterials and functional thin film devices, slot-die coating has become increasingly applied in the sphere of materials research. This is primarily attributed to the flexibility, predictability and high repeatability of the process, as well as its scalability and origin as a proven industrial technique. Slot-die coating has been most notably employed in research related to flexible , printed , and organic electronics , but remains relevant in any field where scalable thin film production is required.
Examples of research enabled by slot-die coating include: | https://en.wikipedia.org/wiki/Slot-die_coating |
A slot comprises the operation issue and data path machinery surrounding a set of one or more execution units (also called functional units (FUs)) which share these resources. [ 1 ] [ 2 ] The term slot is common for this purpose in very long instruction word (VLIW) computers, where the relationship between an operation in an instruction and the pipeline that executes it is explicit. In dynamically scheduled machines, the concept is more commonly called an execute pipeline .
Modern conventional central processing units (CPU) have several compute pipelines, for example: two arithmetic logic units (ALU), one floating point unit (FPU), one SIMD unit (for extensions such as SSE or MMX ), and one branch unit. Each of them can issue one instruction per basic instruction cycle , but can have several instructions in process. These are what correspond to slots. A pipeline may have several FUs, such as an adder and a multiplier , but only one FU in a pipeline can be issued to in a given cycle. The FU population of a pipeline (slot) is a design option in a CPU.
| https://en.wikipedia.org/wiki/Slot_(computer_architecture) |
A slough (/sluː/ [ 1 ] [ 2 ] or /slaʊ/) [ 1 ] [ 2 ] [ 3 ] is a wetland , usually a swamp or shallow lake , often a backwater to a larger body of water. [ 4 ] Water tends to be stagnant or may flow slowly on a seasonal basis. [ 5 ]
In North America, "slough" may refer to a side-channel from or feeding a river, or an inlet or natural channel only sporadically filled with water. [ 3 ] An example of this is Finn Slough on the Fraser River , whose lower reaches have dozens of notable sloughs. Some sloughs, like Elkhorn Slough , used to be mouths of rivers, but have become stagnant because tectonic activity cut off the river's source.
In the Sacramento River , Steamboat Slough was an alternate branch of the river, a preferred shortcut route for steamboats passing between Sacramento and San Francisco . Georgiana Slough was a steamboat route through the Sacramento–San Joaquin River Delta , from the Sacramento River to the San Joaquin River and Stockton .
A slough, also called a tidal channel, is a channel in a wetland . [ 6 ] Typically, it is either stagnant or slow flowing on a seasonal basis.
Vegetation patterns in a slough are largely determined by depth , distribution, and duration in the environment. Moreover, these same variables also influence the distribution, abundance, reproduction, and seasonal movements of aquatic and terrestrial life within the sloughs. [ 7 ] Sloughs support a wide variety of plant life that is adapted to rapidly changing physical conditions such as salinity , oxygen levels and depth. [ 8 ]
In general, sloughs are microhabitats high in species diversity. Open water sloughs are characterized by submerged and floating vegetation, which includes periphyton mats typically dominated by sawgrass . The topographical and vegetation heterogeneity of the ridge and slough landscape influences the productivity and diversity of birds and fish adapted to that wetland. [ 9 ]
Fish that typically inhabit sloughs include tidewater goby, California killifish , mosquitofish , and topsmelt . [ 10 ] Fish within sloughs prey mainly upon invertebrates , mostly epifaunal crustaceans followed by epifaunal and infaunal worms and mollusks; fish can also feed on zooplankton and plant material. A study of fish prey species in Elkhorn Slough in California found that mean prey richness was greatest near the ocean and lowest inshore. This higher availability of food enhances the function of inshore habitats and emphasizes the importance of invertebrate prey populations and their influence on plant production. [ 11 ]
Birds also inhabit sloughs, making them hotspots for birdwatching , with the Elkhorn Slough being one of the premier birdwatching sites in the western United States. Over 340 species have been seen visiting, including several rare and endangered species. Bird species seen in sloughs include acorn woodpecker , brown pelican , Caspian tern , great blue heron , great egret , great horned owl , snowy plover , and white-tailed kite . [ 12 ]
Sloughs are heavily influenced by human development, such as urban and agricultural expansion , industrial and agricultural practices, water management practices, and human influence on species composition . Identifying these aspects of human involvement can help to better target restoration efforts in managing sloughs. Attributes affected by human stress upon the environment include periphyton , marsh plant communities, tree islands, alligators , wading birds , and marsh fishes, invertebrates , and herpetofauna . [ 7 ]
A slough can form when a meander gets cut off from the main river channel, creating an oxbow lake that accumulates fine overbank sediment and organic material such as peat . This creates a wetland or swamp environment. One end of the oxbow configuration then continues to receive flow from the main channel, creating a slough. [ 13 ]
Sloughs are typically associated with the ridge formations found alongside them. Such a landscape consists of a mosaic of linear ridges, typically of some sort of grass, such as the sawgrass ridges in the Florida Everglades , that are separated by deeper water sloughs. [ 11 ]
Edges of sloughs are layers of sediment deposited by a river over time. [ 6 ] The development of this landscape is thought to occur by the preferential formation of peat in bedrock depressions. Multiple such deposits mounted on top of the surrounding bedrock can become elongated alongside the slough and create flow diversions within the system. Different rates of peat accumulation could be triggered by variations in microtopography that alter plant production and vegetation type. Water flow may be the key to preventing an accumulation of organic sediment in sloughs, because accumulation lowers water depths and instead allows for the growth of vegetation. [ 9 ]
Overall, little quantitative data on the degradation of slough landscape exists. Slough and ridge landscape has been greatly degraded in terms of both topographic and vegetation changes over time. Topographical changes create an increase in the relief between ridge crests and slough bottoms. Vegetation changes consist of an increase in the amount of dense grass and a decrease in the area of open water, blurring the directional ridge and slough pattern. [ 9 ]
The historical Everglades and slough landscape has been greatly affected and degraded by human activity. Open water sloughs support important ecological functions that have proven sensitive to hydrologic and water quality problems stemming from human activities. [ 14 ]
Sloughs are ecologically important as part of an endangered environment: wetlands. They act as a buffer from land to sea and as an active part of the estuary system, where freshwater flows from creeks and runoff from the land mix with salty ocean water transported by tides. Restoration is a major effort in California wetlands to restore slough and ridge landscapes. Examples of restoration projects on slough landscapes include the Elkhorn Slough Tidal Wetland Project, [ 15 ] the Dutch Slough Tidal Restoration Project, [ 16 ] and the McDaniel Slough wetland enhancement project. [ 17 ] [ 18 ]
| https://en.wikipedia.org/wiki/Slough_(hydrology) |
In computability theory , computational complexity theory and proof theory , the slow-growing hierarchy is an ordinal-indexed family of slowly increasing functions g α : N → N (where N is the set of natural numbers , {0, 1, ...}). It contrasts with the fast-growing hierarchy .
Let μ be a large countable ordinal such that a fundamental sequence is assigned to every limit ordinal less than μ. The slow-growing hierarchy of functions g α : N → N , for α < μ, is then defined as follows: [ 1 ] g 0 ( n ) = 0 ; g α+1 ( n ) = g α ( n ) + 1 ; and g α ( n ) = g α[ n ] ( n ) for limit ordinals α.
Here α[ n ] denotes the n th element of the fundamental sequence assigned to the limit ordinal α.
The article on the Fast-growing hierarchy describes a standardized choice for fundamental sequence for all α < ε 0 .
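For ordinals below ω^ω with these standard fundamental sequences, the recursion can be run directly, and g_α(n) works out to substituting n for ω in the Cantor normal form of α. A small Python sketch (the coefficient-list representation and function name are my own illustrative choices):

```python
# Ordinals below w^w in Cantor normal form, as little-endian coefficient
# lists: [c0, c1, c2] stands for c2*w^2 + c1*w + c0.

def g(alpha, n):
    """Slow-growing hierarchy g_alpha(n) for alpha < omega^omega."""
    alpha = list(alpha)
    total = 0
    while any(alpha):
        if alpha[0] > 0:
            # Successor steps: g_{b+1}(n) = g_b(n) + 1, applied c0 times.
            total += alpha[0]
            alpha[0] = 0
        else:
            # Limit step: for a = a' + w^k, the standard fundamental
            # sequence gives a[n] = a' + w^(k-1)*n, and g_a(n) = g_{a[n]}(n).
            k = next(i for i, c in enumerate(alpha) if c > 0)
            alpha[k] -= 1
            alpha[k - 1] += n
    return total

# g_alpha(n) amounts to substituting n for w: g_w(7) = 7, g_{w^2}(7) = 49,
# and g_{w^2 + 2w + 3}(7) = 49 + 14 + 3 = 66.
print(g([0, 1], 7), g([0, 0, 1], 7), g([3, 2, 1], 7))   # 7 49 66
```

The slowness is visible here: g_{ω^2} is only n², whereas the fast-growing f_{ω^2} already dominates every primitive recursive function.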
The slow-growing hierarchy grows much more slowly than the fast-growing hierarchy. Even g ε 0 is only equivalent to f 3 and g α only attains the growth of f ε 0 (the first function that Peano arithmetic cannot prove total in the hierarchy) when α is the Bachmann–Howard ordinal . [ 2 ] [ 3 ] [ 4 ]
However, Girard proved that the slow-growing hierarchy eventually catches up with the fast-growing one. [ 2 ] Specifically, he proved that there exists an ordinal α such that for all integers n
where f α are the functions in the fast-growing hierarchy. He further showed that the first α for which this holds is the ordinal of the theory ID <ω of arbitrary finite iterations of an inductive definition. [ 5 ] However, for the assignment of fundamental sequences found in [ 3 ] the first match-up occurs at the level ε 0 . [ 6 ] For Buchholz-style tree ordinals it could be shown that the first match-up even occurs at ω 2 {\displaystyle \omega ^{2}} .
Extensions of the result proved [ 5 ] to considerably larger ordinals show that there are very few ordinals below the ordinal of transfinitely iterated Π 1 1 {\displaystyle \Pi _{1}^{1}} -comprehension where the slow- and fast-growing hierarchy match up. [ 7 ]
The slow-growing hierarchy depends extremely sensitively on the choice of the underlying fundamental sequences. [ 6 ] [ 8 ] [ 9 ] | https://en.wikipedia.org/wiki/Slow-growing_hierarchy |
The slow-reacting substance of anaphylaxis or SRS-A is a mixture of the leukotrienes LTC4 , LTD4 and LTE4 . Mast cells secrete it during the anaphylactic reaction , inducing inflammation . [ 1 ] It can be found in basophils .
It induces prolonged, slow contraction of smooth muscle and has a major bronchoconstrictor role in asthma . [ 2 ] Compared to histamine , it is approximately 1000 times more potent and has a slower onset but longer duration of action. [ citation needed ]
| https://en.wikipedia.org/wiki/Slow-reacting_substance_of_anaphylaxis |
Slow afterhyperpolarisation (sAHP) refers to prolonged periods of hyperpolarisation in a neuron or cardiomyocyte following an action potential or other depolarising event. In neurons, trains of action potentials may be required to induce sAHPs; this is unlike fast AHPs, which require no more than a single action potential. A variety of ionic mechanisms may contribute to sAHPs, including potassium efflux from calcium- [ 1 ] or sodium- [ 2 ] activated potassium channels, and/or the electrogenic response of the sodium-potassium ATPase , [ 3 ] [ 4 ] and different mechanisms may underlie the sAHP at different temperatures. [ 4 ] Depending on neuron type and the stimulus used for induction, slow afterhyperpolarisations can last from one second to several tens of seconds, during which time the sAHP effectively inhibits neural activity. Fast and medium AHPs have shorter durations and different ionic mechanisms.
| https://en.wikipedia.org/wiki/Slow_afterhyperpolarization |
A slow fire is a term used in library and information science to describe paper embrittlement resulting from acid decay. The term is taken from the title of Terry Sanders's 1987 film Slow Fires: On the Preservation of the Human Record .
Solutions to this problem include the use of acid-free paper stocks, format shifting brittle books by microfilming, photocopying or digitization, and a variety of deacidification techniques.
| https://en.wikipedia.org/wiki/Slow_fire |
In mathematics , the slow manifold of an equilibrium point of a dynamical system occurs as the most common example of a center manifold . One of the main methods of simplifying dynamical systems is to reduce the dimension of the system to that of the slow manifold; center manifold theory rigorously justifies the modelling. [ 1 ] [ 2 ] For example, some global and regional models of the atmosphere or oceans resolve the so-called quasi- geostrophic flow dynamics on the slow manifold of the atmosphere/oceanic dynamics, [ 3 ] which is thus crucial to forecasting with a climate model .
In some cases, a slow manifold is defined to be the invariant manifold on which the dynamics are slow compared to the dynamics off the manifold. The slow manifold in a particular problem would be a sub-manifold of either the stable, unstable, or center manifold, exclusively, that has the same dimension of, and is tangent to, the eigenspace with an associated eigenvalue (or eigenvalue pair) that has the smallest real part in magnitude. This generalizes the definition described in the first paragraph. Furthermore, one might define the slow manifold to be tangent to more than one eigenspace by choosing a cut-off point in an ordering of the real part eigenvalues in magnitude from least to greatest. In practice, one should be careful to see what definition the literature is suggesting.
Consider the dynamical system
for an evolving state vector x → ( t ) {\displaystyle {\vec {x}}(t)} and with equilibrium point x → ∗ {\displaystyle {\vec {x}}^{*}} . Then the linearization of the system at the equilibrium point is
The matrix A {\displaystyle A} defines four invariant subspaces characterized by the eigenvalues λ {\displaystyle \lambda } of the matrix: as described in the entry for the center manifold three of the subspaces are the stable, unstable and center subspaces corresponding to the span of the eigenvectors with eigenvalues λ {\displaystyle \lambda } that have real part negative, positive, and zero, respectively; the fourth subspace is the slow subspace given by the span of the eigenvectors, and generalized eigenvectors , corresponding to the eigenvalue λ = 0 {\displaystyle \lambda =0} precisely (more generally, [ 4 ] corresponding to all eigenvalues with | λ | ≤ α {\displaystyle |\lambda |\leq \alpha } separated by a gap from all other eigenvalues, those with | λ | ≥ β > r α {\displaystyle |\lambda |\geq \beta >r\alpha } ). The slow subspace is a subspace of the center subspace, or identical to it, or possibly empty.
Correspondingly, the nonlinear system has invariant manifolds , made of trajectories of the nonlinear system, corresponding to each of these invariant subspaces. There is an invariant manifold tangent to the slow subspace and with the same dimension; this manifold is the slow manifold .
Stochastic slow manifolds also exist for noisy dynamical systems ( stochastic differential equation ), as do also stochastic center, stable and unstable manifolds. [ 5 ] Such stochastic slow manifolds are similarly useful in modeling emergent stochastic dynamics, but there are many fascinating issues to resolve such as history and future dependent integrals of the noise. [ 6 ] [ 7 ]
The coupled system in two variables x ( t ) {\displaystyle x(t)} and y ( t ) {\displaystyle y(t)}
has the exact slow manifold y = x 2 {\displaystyle y=x^{2}} on which the evolution is d x / d t = − x 3 {\displaystyle dx/dt=-x^{3}} . Apart from exponentially decaying transients, this slow manifold and its evolution captures all solutions that are in the neighborhood of the origin. [ 8 ] The neighborhood of attraction is, roughly, at least the half-space y > − 1 / 2 {\displaystyle y>-1/2} .
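One standard two-variable system with exactly this slow manifold is dx/dt = −xy, dy/dt = −y + x² − 2y² (assumed here for illustration: on y = x² it gives dy/dt = −2x⁴ = 2x·dx/dt, so the parabola is invariant, and the on-manifold evolution is dx/dt = −x·x² = −x³). A short numerical check that nearby trajectories collapse onto it:

```python
# Candidate system (an assumption for illustration): the slow manifold
# y = x^2 is exactly invariant and attracting near the origin.
def f(s):
    x, y = s
    return [-x * y, -y + x * x - 2 * y * y]

def rk4(s, h):
    """One classical Runge-Kutta step of size h."""
    def ax(a, b, c):  # componentwise a + c*b
        return [ai + c * bi for ai, bi in zip(a, b)]
    k1 = f(s); k2 = f(ax(s, k1, h / 2)); k3 = f(ax(s, k2, h / 2)); k4 = f(ax(s, k3, h))
    return [si + h / 6 * (a + 2 * b + 2 * c + d)
            for si, a, b, c, d in zip(s, k1, k2, k3, k4)]

s = [0.8, 0.3]                # start off the manifold (y != x^2)
for _ in range(4000):         # integrate to t = 40
    s = rk4(s, 0.01)
x, y = s
print(abs(y - x * x) < 1e-6)  # True: the fast transient has decayed onto y = x^2
```

The off-manifold component decays like e^(−t), while the on-manifold amplitude decays only algebraically under dx/dt = −x³, which is the separation of time scales the slow manifold captures.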
Edward Norton Lorenz introduced the following dynamical system of five equations in five variables to explore the notion of a slow manifold of quasi- geostrophic flow [ 9 ]
Linearized about the origin the eigenvalue zero has multiplicity three, and there is a complex conjugate pair of eigenvalues, ± i {\displaystyle \pm i} . Hence there exists a three-dimensional slow manifold (surrounded by 'fast' waves in the X {\displaystyle X} and Z {\displaystyle Z} variables). Lorenz later argued a slow manifold did not exist! [ 10 ] But normal form [ 11 ] arguments suggest that there is a dynamical system that is exponentially close to the Lorenz system for which there is a good slow manifold.
In modeling we aim to simplify enormously. This example uses a slow manifold to simplify the 'infinite dimensional' dynamics of a partial differential equation to a model of one ordinary differential equation . Consider a field u ( x , t ) {\displaystyle u(x,t)} undergoing the nonlinear diffusion
with Robin boundary conditions
Parametrising the boundary conditions by b {\displaystyle b} empowers us to cover the insulating Neumann boundary condition case b = 0 {\displaystyle b=0} , the Dirichlet boundary condition case b = 1 {\displaystyle b=1} , and all cases between.
Now for a marvelous trick, much used in exploring dynamics with bifurcation theory . Since parameter b {\displaystyle b} is constant, adjoin the trivially true differential equation d b / d t = 0 {\displaystyle db/dt=0} .
Then in the extended state space of the evolving field and parameter, ( b , u ( x ) ) {\displaystyle (b,u(x))} , there exists an infinity of equilibria, not just one equilibrium, with b = 0 {\displaystyle b=0} (insulating) and u = {\displaystyle u=} constant, say u = a {\displaystyle u=a} . Without going into details, about each and every equilibrium the linearized diffusion has two zero eigenvalues, and for a > 0 {\displaystyle a>0} all the rest are negative (less than − π 2 a / 4 {\displaystyle -\pi ^{2}a/4} ). Thus the two-dimensional dynamics on the slow manifolds emerge (see emergence ) from the nonlinear diffusion no matter how complicated the initial conditions.
Here one can straightforwardly verify the slow manifold to be precisely the field u ( x , t ) = a ( t ) ( 1 − b x 2 ) {\displaystyle u(x,t)=a(t)(1-bx^{2})} where amplitude a {\displaystyle a} evolves according to
That is, after the initial transients that by diffusion smooth internal structures, the emergent behavior is one of relatively slow decay of the amplitude ( a {\displaystyle a} ) at a rate controlled by the type of boundary condition (constant b {\displaystyle b} ).
Notice that this slow manifold model is global in a {\displaystyle a} as each equilibrium is necessarily in the slow subspace of every other equilibrium, but is only local in the parameter b {\displaystyle b} . We cannot yet be sure how large b {\displaystyle b} may be taken, but the theory assures us the results do hold for some finite parameter b {\displaystyle b} .
Stochastic modeling is much more complicated—this example illustrates just one such complication. Consider for small parameter ϵ {\displaystyle \epsilon } the two variable dynamics of this linear system forced with noise from the random walk W ( t ) {\displaystyle W(t)} :
One could simply notice that the Ornstein–Uhlenbeck process y {\displaystyle y} is formally the history integral
and then assert that x ( t ) {\displaystyle x(t)} is simply the integral of this history integral. However, this solution then inappropriately contains fast time integrals, due to the exp ( s − t ) {\displaystyle \exp(s-t)} in the integrand, in a supposedly long time model.
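The claim that y is formally a history integral can be checked numerically. A sketch assuming the standard Ornstein–Uhlenbeck normalization dy = −y dt + ε dW (this normalization and the constants are my assumptions): an Euler–Maruyama path and the discretised history integral built from the same noise increments agree to O(dt).

```python
import math
import random

# Compare the Ornstein-Uhlenbeck process dy = -y dt + eps dW (assumed
# normalization) with its history-integral form
#   y(t) = eps * sum over s < t of exp(s - t) dW(s).
random.seed(42)
eps, dt, steps = 0.1, 1e-4, 50_000                  # integrate to t = 5
incs = [random.gauss(0.0, math.sqrt(dt)) for _ in range(steps)]

y = 0.0
for dW in incs:
    y += -y * dt + eps * dW                          # Euler-Maruyama step

t = steps * dt
hist = eps * sum(math.exp((k + 1) * dt - t) * dW for k, dW in enumerate(incs))
print(abs(y - hist) < 1e-3)                          # True: agreement to O(dt)
```

The exponential kernel exp(s − t) is exactly the fast-time content the text objects to: it forces the "slow" solution to remember the noise on the fast time scale, which is why the coordinate transform below is preferred for a long-time model.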
Alternatively, a stochastic coordinate transform extracts a sound model for the long term dynamics. Change variables to ( X ( t ) , Y ( t ) ) {\displaystyle (X(t),Y(t))} where
then the new variables evolve according to the simple
In these new coordinates we readily deduce Y ( t ) → 0 {\displaystyle Y(t)\to 0} exponentially quickly, leaving X ( t ) = ϵ W ( t ) {\displaystyle X(t)=\epsilon W(t)} undergoing a random walk to be the long term model of the stochastic dynamics on the stochastic slow manifold obtained by setting Y = 0 {\displaystyle Y=0} .
A web service constructs such slow manifolds in finite dimensions, both deterministic and stochastic. [ 12 ] | https://en.wikipedia.org/wiki/Slow_manifold |
Slow sand filters are used in water purification for treating raw water to produce a potable product. They are typically 1–2 m (3.3–6.6 ft) deep, can be rectangular or cylindrical in cross section and are used primarily to treat surface water. The length and breadth of the tanks are determined by the flow rate desired for the filters, which typically have a loading rate of 200–400 litres (0.20–0.40 m³) per square metre per hour.
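The loading-rate figure translates directly into a sizing rule: required bed area is the design flow divided by the loading rate. A quick illustration (the demand figure and function name are hypothetical):

```python
def filter_area_m2(demand_m3_per_day, loading_m_per_h=0.3):
    """Bed area needed for a given daily demand, using the typical
    loading rate of 0.2-0.4 m^3 per m^2 per hour (default 0.3)."""
    return demand_m3_per_day / 24.0 / loading_m_per_h

# Illustrative: a community drawing 120 m^3/day needs about 16.7 m^2 of bed.
print(round(filter_area_m2(120), 1))   # 16.7
```

In practice the area would be split across several beds so that one can be taken out of service for refurbishment while the others continue to run.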
Slow sand filters differ from all other filters used to treat drinking water in that they work by using a complex biofilm that grows naturally on the surface of the sand. The sand itself does not perform any filtration function but simply acts as a substrate, unlike its counterparts for ultraviolet and pressurized treatments. Although they are often the preferred technology in many developing countries because of their low energy requirements and robust performance, they are also used to treat water in some developed countries, such as the UK , where they are used to treat water supplied to London . Slow sand filters are now also being tested for pathogen control of nutrient solutions in hydroponic systems.
The first documented use of sand filters to purify the water supply dates to 1804, when the owner of a bleachery in Paisley, Scotland , John Gibb, installed an experimental filter created by engineer Robert Thom , selling his unwanted surplus to the public. [ 1 ] [ 2 ] This method was refined in the following two decades by engineers working for private water companies, and it culminated in the first treated public water supply in the world, installed by engineer James Simpson for the Chelsea Waterworks Company in London in 1829. [ 3 ] [ 4 ] This installation provided filtered water for every resident of the area, and the network design was widely copied throughout the United Kingdom in the ensuing decades.
The practice of water treatment soon became mainstream, and the virtues of the system were made starkly apparent after the investigations of the physician John Snow during the 1854 Broad Street cholera outbreak . Snow was sceptical of the then-dominant miasma theory that stated that diseases were caused by noxious "bad airs". Although the germ theory of disease had not yet been developed, Snow's observations led him to discount the prevailing theory. His 1855 essay On the Mode of Communication of Cholera conclusively demonstrated the role of the water supply in spreading the cholera epidemic in Soho , [ 5 ] with the use of a dot distribution map and statistical proof to illustrate the connection between the quality of the water source and cholera cases. His data convinced the local council to disable the water pump, which promptly ended the outbreak.
The Metropolis Water Act introduced the regulation of the water supply companies in London , including minimum standards of water quality for the first time. The Act "made provision for securing the supply to the Metropolis of pure and wholesome water", and required that all water be "effectually filtered" from 31 December 1855. [ 6 ] This was followed up with legislation for the mandatory inspection of water quality, including comprehensive chemical analyses, in 1858. This legislation set a worldwide precedent for similar state public health interventions across Europe . The Metropolitan Commission of Sewers was formed at the same time, water filtration was adopted throughout the country, and new water intakes on the Thames were established above Teddington Lock .
Water treatment came to the United States in 1872 when Poughkeepsie, New York , opened the first slow sand filtration plant, [ 7 ] dramatically reducing instances of cholera and typhoid fever which had been seriously impacting the local community. Poughkeepsie's design criteria were used throughout the country as a model for other municipalities. Poughkeepsie's original treatment facility operated continuously for 87 years before being replaced in 1959. [ 8 ]
Slow sand filters work through the formation of a gelatinous layer (or biofilm ) called the hypogeal layer or Schmutzdecke in the top few millimetres of the fine sand layer. The Schmutzdecke is formed in the first 10–20 days of operation [ 9 ] and consists of bacteria , fungi , protozoa , rotifera and a range of aquatic insect larvae. As an epigeal biofilm ages, more algae tend to develop and larger aquatic organisms may be present including some bryozoa , snails and Annelid worms. The surface biofilm is the layer that provides the effective purification in potable water treatment, the underlying sand providing the support medium for this biological treatment layer. As water passes through the hypogeal layer, particles of foreign matter are trapped in the mucilaginous matrix and soluble organic material is adsorbed . The contaminants are metabolised by the bacteria, fungi and protozoa. The water produced from an exemplary slow sand filter is of excellent quality with 90–99% bacterial cell count reduction. [ 10 ] Typically, in the UK slow sand filters have a bed depth of 0.3 to 0.6 metres comprising 0.2 to 0.4 mm sand. The throughput is 0.25 m/h. [ 11 ]
Slow sand filters slowly lose their performance as the biofilm thickens and thereby reduces the rate of flow through the filter. Eventually, it is necessary to refurbish the filter. Two methods are commonly used to do this. In the first, the top few millimetres of fine sand is scraped off to expose a new layer of clean sand. Water is then decanted back into the filter and re-circulated for a few hours to allow a new biofilm to develop. The filter is then filled to full volume and brought back into service. [ 10 ] The second method, sometimes called wet harrowing, involves lowering the water level to just above the hypogeal layer, stirring the sand to suspend any solids held in that layer, and allowing the remaining water to wash through the sand. The filter column is then filled to full capacity and brought back into service. Wet harrowing can allow the filter to be brought back into service more quickly. [ 9 ]
Slow sand filters have a number of unique qualities.
While many municipal water treatment works will have 12 or more beds in use at any one time, smaller communities or households may only have one or two filter beds.
In the base of each bed is a series of herringbone drains that are covered with a layer of pebbles which in turn is covered with coarse gravel. Further layers of sand are placed on top followed by a thick layer of fine sand. The whole depth of filter material may be more than 1 metre in depth, the majority of which will be fine sand material. On top of the sand bed sits a supernatant layer of unpurified water. | https://en.wikipedia.org/wiki/Slow_sand_filter |
Slow strain rate testing ( SSRT ), also called constant extension rate tensile testing ( CERT ), is a popular test used by research scientists to study stress corrosion cracking . It involves a slow (compared to conventional tensile tests) dynamic strain applied at a constant extension rate in the environment of interest. These test results are compared to those for similar tests in an environment known to be inert. A 50-year history of the SSRT has recently been published by its creator. [ 1 ] The test has also been standardized, [ 2 ] [ 3 ] and two ASTM symposia have been devoted to it. [ 4 ] [ 5 ]
The important characteristic of these tests is that the strain rate is low, for example extension rates selected in the range from 10⁻⁸ to 10⁻³ s⁻¹. The selection of the strain rate is very important because the susceptibility to cracking may not be evident from results of tests at too low or too high a strain rate. For numerous material-environment systems, strain rates in the range 10⁻⁵–10⁻⁶ s⁻¹ are used; however, the observed absence of cracking at a given strain rate should not be taken as proof of immunity to cracking. There are known cases wherein the susceptibility to stress-corrosion cracking only became evident at strain rates as low as 10⁻⁸ or 10⁻⁹ s⁻¹. Nevertheless, the method is very suitable for mechanistic studies, as well as for relative ranking of the susceptibility to cracking of different alloys, or of the aggressiveness of environments and the effect of temperature, pH, metallurgical condition, etc.
The fastest strain rate that will still promote SCC for a given environment-material system is sometimes called the "critical strain rate", some values are given in the table: [ 6 ]
Electrode potential and other environmental factors such as temperature, pH and degree of aeration can greatly impact the results of this accelerated stress corrosion cracking test, as can the specimen surface finish and metallurgical condition.
The evaluated parameters typically include the time to failure and ductility measures such as plastic elongation to fracture and reduction in area.
The results of the SSRT tests are evaluated using the ratio:
[ result from specimen in test environment result from specimen in inert environment ] {\displaystyle \left[{\frac {\text{result from specimen in test environment}}{\text{result from specimen in inert environment}}}\right]}
The departure of the ratio below unity quantifies the increased susceptibility to cracking.
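The ratio can be computed for any of the measured parameters; a small sketch with hypothetical ductility data (the alloy values below are invented for illustration, not from the text):

```python
def susceptibility_ratio(test_env, inert_env):
    """SSRT ratio: near 1 means little environmental effect; well below 1
    indicates susceptibility to stress corrosion cracking."""
    return test_env / inert_env

# Hypothetical results for one alloy-environment pair: percent elongation
# and percent reduction in area, test environment vs inert environment.
elong = susceptibility_ratio(test_env=8.0, inert_env=20.0)    # 0.40
ra = susceptibility_ratio(test_env=22.0, inert_env=55.0)      # 0.40
print(elong, ra)   # both well below unity: susceptible in this environment
```

Ranking several alloys or environments then reduces to comparing how far their ratios fall below unity under the same strain rate.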
The test is best used in combination with electrochemical measurements and other stress corrosion cracking tests. | https://en.wikipedia.org/wiki/Slow_strain_rate_testing |
The slow vertex response (also called SVR or V potential [ 1 ] ) is an electrical potential associated with electrophysiological recordings of the auditory system , specifically auditory evoked potentials (AEPs). The SVR of a normal human being recorded with surface electrodes can be found at the end of a recorded AEP waveform, at latencies of 50–500 ms. [ 2 ] Detection of the SVR is used to estimate thresholds for hearing pathways.
| https://en.wikipedia.org/wiki/Slow_vertex_response |
The slowed rotor principle is used in the design of some helicopters . On a conventional helicopter the rotational speed of the rotor is constant; reducing it at lower flight speeds can reduce fuel consumption and enable the aircraft to fly more economically. In the compound helicopter and related aircraft configurations such as the gyrodyne and winged autogyro , reducing the rotational speed of the rotor and offloading part of its lift to a fixed wing reduces drag , enabling the aircraft to fly faster.
Traditional helicopters get both their propulsion and lift from the main rotor; by using a dedicated propulsion device such as a propeller or jet engine , the rotor burden is lessened. [ 1 ] If wings are also used to lift the aircraft, the rotor can be unloaded (partially or fully) and its rotational speed further reduced, enabling higher aircraft speed. Compound helicopters use these methods, [ 2 ] [ 3 ] [ 4 ] but the Boeing A160 Hummingbird shows that rotor-slowing is possible without wings or propellers, and regular helicopters may reduce turbine RPM (and thus rotor speed) to 85% using 19% less power. [ 5 ] Alternatively, research suggests that twin-engine helicopters may decrease fuel consumption by 25%-40% when running only one engine, given adequate height and velocity well inside the safe areas of the height–velocity diagram . [ 6 ] [ 7 ] [ 8 ]
As of 2012, no compound or hybrid wing/rotor (manned) aircraft had been produced in quantity, and only a few had been flown as experimental aircraft, [ 9 ] mainly because the increased complexities have not been justified by military or civilian markets. [ 10 ] Varying the rotor speed may induce severe vibrations at specific resonance frequencies. [ 11 ]
Contra-rotating rotors (as on the Sikorsky X2 ) solve the problem of lift dissymmetry by having both left and right sides provide near equal lift with less flapping. [ 12 ] [ 1 ] The X2 deals with the compressibility issue by reducing its rotor speed [ 1 ] from 446 to 360 RPM [ 13 ] [ 14 ] to keep the advancing blade tip below the sound barrier when going above 200 knots. [ 15 ]
The rotors of conventional helicopters are designed to operate at a fixed speed of rotation, to within a few percent. [ 16 ] [ 17 ] [ 18 ] [ 11 ] This introduces limitations in areas of the flight envelope where the optimal speed differs. [ 5 ]
In particular, it limits the maximum forward speed of the aircraft. Two main issues restrict the speed of rotorcraft: [ 11 ] [ 4 ] [ 19 ] [ 12 ] dissymmetry of lift between the advancing and retreating blades (retreating blade stall), and compressibility effects as the advancing blade tip approaches the speed of sound.
These (and other) [ 27 ] [ 28 ] problems limit the practical speed of a conventional helicopter to around 160–200 knots (300–370 km/h). [ 1 ] [ 26 ] [ 29 ] [ 30 ] At the extreme, the theoretical top speed for a rotary winged aircraft is about 225 knots (259 mph; 417 km/h), [ 28 ] just above the current official speed record for a conventional helicopter held by a Westland Lynx , which flew at 400 km/h (250 mph) in 1986 [ 31 ] where its blade tips were nearly Mach 1. [ 32 ]
For rotorcraft, the advance ratio (or mu, symbol μ {\displaystyle \mu } ) is defined as the aircraft forward speed V divided by its relative blade tip speed. [ 33 ] [ 34 ] [ 35 ] The upper mu limit is a critical design factor for rotorcraft, [ 23 ] and the optimum for traditional helicopters is around 0.4. [ 4 ] [ 26 ]
The "relative blade tip speed" u is the tip speed relative to the aircraft (not the airspeed of the tip). Thus the formula for the advance ratio is
μ = V u = V Ω ⋅ R {\displaystyle \mu ={\frac {V}{u}}={\frac {V}{\Omega \cdot R}}} where Omega (Ω) is the rotor's angular velocity , and R is the rotor radius (about the length of one rotor blade) [ 36 ] [ 23 ] [ 13 ]
When the rotor blade is perpendicular to the aircraft and advancing, its tip airspeed V t is the aircraft speed plus relative blade tip speed, or V t = V + u . [ 12 ] [ 37 ] At mu=1, V is equal to u and the tip airspeed is twice the aircraft speed.
At the same position on the opposite side (retreating blade), the tip airspeed is the aircraft speed minus relative blade tip speed, or V t = V - u . At mu=1, the tip airspeed is zero. [ 30 ] [ 38 ] At a mu between 0.7 and 1.0, most of the retreating side has reverse airflow. [ 13 ]
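The relations above can be checked with a short numeric sketch. The rotor radius and rpm below are illustrative values, not taken from any particular aircraft:

```python
import math

def advance_ratio(v_aircraft, omega, radius):
    """Advance ratio mu = V / (Omega * R), with V and tip speed in the same units."""
    return v_aircraft / (omega * radius)

# Illustrative values (not from any specific aircraft):
radius = 8.0                       # rotor radius, m
rpm = 300.0
omega = rpm * 2 * math.pi / 60     # rotor angular velocity, rad/s
u = omega * radius                 # relative blade tip speed, m/s

v = u                              # aircraft speed equal to tip speed -> mu = 1
mu = advance_ratio(v, omega, radius)

advancing_tip = v + u              # tip airspeed on the advancing side
retreating_tip = v - u             # tip airspeed on the retreating side

print(mu)                          # 1.0
print(advancing_tip == 2 * v)      # True: advancing tip airspeed is twice aircraft speed
print(retreating_tip)              # 0.0: retreating tip airspeed is zero at mu = 1
```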
Although rotor characteristics are fundamental to rotorcraft performance, [ 39 ] little public analytical and experimental knowledge exists for advance ratios between 0.45 and 1.0, [ 13 ] [ 40 ] and none is known above 1.0 for full-size rotors. [ 41 ] [ 42 ] Computer simulations are not capable of adequate predictions at high mu. [ 43 ] [ 44 ] The region of reverse flow on the retreating blade is not well understood; [ 45 ] [ 46 ] however, some research has been conducted, [ 47 ] [ 48 ] particularly for scaled rotors. [ 49 ] [ 50 ] The US Army Aviation Applied Technology Directorate ran a supporting program in 2016 aimed at developing transmissions with a 50% rotor speed reduction. [ 51 ]
The profile drag of a rotor varies with the cube of its rotational speed . [ 52 ] [ 53 ] Reducing the rotational speed therefore gives a significant reduction in rotor drag, allowing higher aircraft speed. [ 13 ] A conventional rotor such as the UH-60A has lowest consumption around 75% rpm, but higher aircraft speed (and weight) requires higher rpm. [ 54 ]
A rotor disk with variable radius is a different way of reducing tip speed to avoid compressibility, but blade loading theory suggests that a fixed radius with varying rpm performs better than a fixed rpm with varying radius. [ 55 ]
Conventional helicopters have constant-speed rotors and adjust lift by varying the blade angle of attack or collective pitch . The rotors are optimised for high-lift or high-speed flight modes and in less demanding situations are not as efficient.
The profile drag of a rotor varies with the cube of its rotational speed . [ 52 ] [ 53 ] Reducing the rotational speed and increasing the angle of attack can therefore give a significant reduction in rotor drag, allowing lower fuel consumption. [ 5 ]
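As a rough numeric illustration of the cube law (a simplified model: profile drag is only one component of total rotor power, so the overall power saving, such as the 19% figure cited above, is smaller than the profile-drag saving alone):

```python
def profile_power_fraction(rpm_fraction):
    """Profile drag power relative to nominal, assuming a pure cube law
    (simplified model; ignores induced and parasite power)."""
    return rpm_fraction ** 3

# Slowing the rotor to 85% of nominal rpm cuts profile drag power to
# 0.85**3, i.e. about 61% of its original value.
print(profile_power_fraction(0.85))   # 0.614125
```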
Technical parameters given for each type listed below include: maximum speed, advance ratio (μ), rotor speed range (rpm), lift share between wing and rotor, and lift-to-drag ratio (L/D).
When Juan de la Cierva developed the autogyro through the 1920s and 1930s, it was found that the tip speeds of the advancing rotor blade could become excessive. Designers such as de la Cierva and Harold F. Pitcairn developed the idea of adding a conventional wing to offload the rotor during high-speed flight, allowing it to rotate at slower speeds. [ citation needed ]
The 1932 Pitcairn PCA-2 autogyro had a speed range of 20–102 knots (maximum 117 mph; 189 km/h), [ 56 ] a μ of 0.7, [ 57 ] and an L/D of 4.8. [ 58 ]
NACA engineer John Wheatley examined the effect of varying advance ratios up to about 0.7 in a wind tunnel in 1933 and published a landmark study in 1934. Although lift could be predicted with some accuracy, by 1939 the state of the art theory still gave unrealistically low values for rotor drag. [ 59 ]
Fairey Aviation in the UK worked on gyrodynes in the late 1940s and 1950s, developing tip-jet propulsion which eliminated the need for a counter-torque tail rotor. They culminated in the Fairey Rotodyne , the prototype for a VTOL passenger aircraft, which could combine the vertical landing of a helicopter with the speed of a fixed wing aircraft. The Rotodyne had a single 90 ft diameter main rotor supplemented by a 46 ft wide wing, with forward thrust provided by twin turboprop engines. In forward flight the power to the rotor was reduced to about 10%. [ citation needed ] Its maximum speed was 166 knots (191 mph; 307 km/h), a record set in 1959. [ 60 ] [ 61 ] μ: 0.6. [ 62 ] Rotor speed: 120 rpm (high-speed cruising flight as an autogyro) to 140 rpm ( flare out while landing as a helicopter). [ 63 ] During forward flight 60% of the lift came from the wings and 40% from the rotor. [ 64 ]
At the same time, the US Air Force was investigating fast VTOL aircraft. McDonnell developed what became the McDonnell XV-1 , the first of the V-designated types, which flew in 1955. It was a tip-jet driven gyrodyne, which turned off rotor thrust at high airspeeds and relied on a pusher propeller to maintain forward flight and rotor autorotation. Lift was shared between the rotor and stub wings. It established a rotorcraft speed record of 170 knots (200 mph; 310 km/h). μ: 0.95. [ 65 ] Rotor speed: 180–410 rpm [ 66 ] (50% [ 67 ] ). Lift share: 85% \ 15%. [ 68 ] L/D: 6.5 (wind tunnel tests at 180 rpm with no propeller). [ 69 ]
The Lockheed AH-56 Cheyenne military attack helicopter for the US Army arose out of Lockheed's ongoing research programme into rigid rotors, which began with the CL-475 in 1959. Stub wings and a thrust turbojet to offload the rotor were first added to an XH-51A, and in 1965 this allowed the craft to achieve a world speed record of 272 miles per hour (438 km/h). The Cheyenne flew just two years later, obtaining its forward thrust from a pusher propeller. Although pre-production prototypes were ordered, the program met problems and was cancelled. [ 70 ] Maximum speed: 212 knots (244 mph; 393 km/h). [ 71 ] [ 72 ] μ: 0.8. [ 65 ] Lift share: .. \ 20%. [ 73 ]
The Piasecki 16H Pathfinder project similarly evolved an initially conventional design into a compound helicopter through the 1960s, culminating in the 16H-1A Pathfinder II which flew successfully in 1965. Thrust was obtained via a ducted fan at the tail. [ 74 ]
The Bell 533 of 1969 was a compound jet helicopter. Maximum speed: 275 knots (316 mph; 509 km/h). [ 75 ] [ 76 ]
The compound helicopter has continued to be studied and flown experimentally. In 2010 the Sikorsky X2 flew with coaxial rotors . Maximum speed: 250 knots (290 mph; 460 km/h). [ 77 ] [ 78 ] μ: 0.8. [ 13 ] Rotor speed: 360 to 446 rpm. [ 13 ] [ 14 ] No wings. [ 79 ] In 2013 the Eurocopter X3 flew. [ 80 ] Maximum speed: 255 knots (293 mph; 472 km/h). [ 81 ] [ 82 ] Rotor speed: 310 rpm minus 15%. [ 12 ] Lift share: 40 [ 12 ] [ 1 ] –80% \. [ 83 ] [ 84 ]
The compound autogyro, in which the rotor is supplemented by wings and a thrust engine but is not itself powered, has also undergone further refinement by Jay Carter Jr. He flew his CarterCopter in 2005. Maximum speed: 150 knots (170 mph; 280 km/h). [ 85 ] μ: 1. Lift share: 50%. [ 13 ] By 2013 he had developed its design into a personal air vehicle , the Carter PAV . Maximum speed: 175 knots (201 mph; 324 km/h). μ: 1.13. Rotor speed: 105 [ 86 ] to 350 rpm. [ 87 ]
The potential of the slowed rotor in enhancing fuel economy has also been studied in the Boeing A160 Hummingbird UAV, a conventional helicopter. Maximum speed: 140 knots (160 mph; 260 km/h). Rotor speed: 140 to 350 rpm. [ 88 ] | https://en.wikipedia.org/wiki/Slowed_rotor
In physics , the slowly varying envelope approximation [ 1 ] ( SVEA , sometimes also called the slowly varying amplitude approximation or SVAA ) is the assumption that the envelope of a forward-travelling wave pulse varies slowly in time and space compared to a period or wavelength . This requires the spectrum of the signal to be narrow-banded ; hence it is also referred to as the narrow-band approximation .
The slowly varying envelope approximation is often used because the resulting equations are in many cases easier to solve than the original equations, reducing the order of all or some of the highest-order partial derivatives . But the validity of the assumptions made needs to be justified.
For example, consider the electromagnetic wave equation :
∇ 2 E − 1 c 2 ∂ 2 E ∂ t 2 = 0 , {\displaystyle \nabla ^{2}E-{\frac {1}{c^{2}}}{\frac {\partial ^{2}E}{\partial t^{2}}}=0\,,}
where c = 1 μ 0 ε 0 . {\displaystyle c={\frac {1}{\sqrt {\mu _{0}\varepsilon _{0}}}}~.}
If k 0 and ω 0 are the wave number and angular frequency of the (characteristic) carrier wave for the signal E ( r , t ) , the following representation is useful:
E ( r , t ) = Re [ E 0 ( r , t ) e i ( k 0 ⋅ r − ω 0 t ) ] , {\displaystyle E(\mathbf {r} ,t)=\operatorname {\operatorname {Re} } \left[E_{0}(\mathbf {r} ,t)\,e^{i(\mathbf {k} _{0}\cdot \mathbf {r} -\omega _{0}t)}\right],}
where Re [ ⋅ ] {\displaystyle \operatorname {Re} [\,\cdot \,]} denotes the real part of the quantity between brackets, and i 2 ≡ − 1. {\displaystyle i^{2}\equiv -1.}
In the slowly varying envelope approximation (SVEA) it is assumed that the complex amplitude E 0 ( r , t ) only varies slowly with r and t . This inherently implies that E ( r , t ) represents waves propagating forward, predominantly in the k 0 direction. As a result of the slow variation of E 0 ( r , t ) , when taking derivatives, the highest-order derivatives may be neglected: [ 2 ] | ∇ 2 E 0 | ≪ | k 0 ⋅ ∇ E 0 | and | ∂ 2 E 0 ∂ t 2 | ≪ ω 0 | ∂ E 0 ∂ t | . {\displaystyle \left|\nabla ^{2}E_{0}\right|\ll \left|\mathbf {k} _{0}\cdot \nabla E_{0}\right|\quad {\text{and}}\quad \left|{\frac {\partial ^{2}E_{0}}{\partial t^{2}}}\right|\ll \omega _{0}\left|{\frac {\partial E_{0}}{\partial t}}\right|.}
Consequently, the wave equation is approximated in the SVEA as:
2 i k 0 ⋅ ∇ E 0 + 2 i ω 0 c 2 ∂ E 0 ∂ t − ( k 0 2 − ω 0 2 c 2 ) E 0 = 0 . {\displaystyle 2i\mathbf {k} _{0}\cdot \nabla E_{0}+{\frac {2i\omega _{0}}{c^{2}}}{\frac {\partial E_{0}}{\partial t}}-\left(k_{0}^{2}-{\frac {\omega _{0}^{2}}{c^{2}}}\right)E_{0}=0~.}
It is convenient to choose k 0 and ω 0 such that they satisfy the dispersion relation :
k 0 2 − ω 0 2 c 2 = 0 . {\displaystyle k_{0}^{2}-{\frac {\omega _{0}^{2}}{c^{2}}}=0~.}
This gives the following approximation to the wave equation, as a result of the slowly varying envelope approximation:
k 0 ⋅ ∇ E 0 + ω 0 c 2 ∂ E 0 ∂ t = 0 . {\displaystyle \mathbf {k} _{0}\cdot \nabla E_{0}+{\frac {\omega _{0}}{c^{2}}}\,{\frac {\partial E_{0}}{\partial t}}=0~.}
This is a hyperbolic partial differential equation , like the original wave equation, but now of first-order instead of second-order. It is valid for coherent forward-propagating waves in directions near the k 0 -direction. The space and time scales over which E 0 varies are generally much longer than the spatial wavelength and temporal period of the carrier wave. A numerical solution of the envelope equation thus can use much larger space and time steps, resulting in significantly less computational effort.
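With the dispersion relation k₀ = ω₀/c, the envelope equation above reduces in one spatial dimension to simple advection of E₀ at speed c. The sketch below (dimensionless units, illustrative grid parameters) integrates it with a first-order upwind scheme; note that the grid step is set by the envelope width, not by the carrier wavelength:

```python
import numpy as np

# First-order envelope equation in 1D, with k0 = omega0/c:
#   dE0/dz + (1/c) dE0/dt = 0, i.e. the envelope advects at speed c.
# Upwind finite differences on a periodic grid; the steps resolve the
# envelope, not the carrier (illustrative, dimensionless parameters).
c = 1.0
nz, L = 400, 40.0
dz = L / nz
dt = 0.5 * dz / c                       # CFL-stable time step
z = np.linspace(0.0, L, nz, endpoint=False)

E0 = np.exp(-((z - 10.0) / 2.0) ** 2)   # slowly varying Gaussian envelope
steps = 200
for _ in range(steps):
    E0 = E0 - c * dt / dz * (E0 - np.roll(E0, 1))   # upwind update (periodic)

# The peak should have moved by c * steps * dt = 10 units, from z=10 to z=20.
peak = z[np.argmax(E0)]
print(peak)   # close to 20 (upwind diffusion broadens the pulse slightly)
```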
Assume wave propagation is dominantly in the z -direction, and k 0 is taken in this direction. The SVEA is only applied to the second-order spatial derivatives in the z -direction and time. If Δ ⊥ ≡ ∂ 2 / ∂ x 2 + ∂ 2 / ∂ y 2 {\displaystyle \Delta _{\perp }\equiv \partial ^{2}/\partial x^{2}+\partial ^{2}/\partial y^{2}} is the Laplace operator in the x – y plane, the result is: [ 3 ]
k 0 ∂ E 0 ∂ z + ω 0 c 2 ∂ E 0 ∂ t − 1 2 i Δ ⊥ E 0 = 0 . {\displaystyle k_{0}{\frac {\partial E_{0}}{\partial z}}+{\frac {\omega _{0}}{c^{2}}}{\frac {\partial E_{0}}{\partial t}}-{\frac {1}{2}}\,i\,\Delta _{\perp }E_{0}=0~.}
This is a parabolic partial differential equation . This equation has enhanced validity as compared to the full SVEA: It represents waves propagating in directions significantly different from the z -direction.
In the one-dimensional case, another sufficient condition for the SVEA validity is
where ℓ g {\displaystyle \ell _{\mathsf {g}}} is the length over which the radiation pulse is amplified, ℓ p {\displaystyle \ell _{\mathsf {p}}} is the pulse width and v {\displaystyle v} is the group velocity of the radiating system. [ 4 ]
These conditions are much less restrictive in the relativistic limit where v c {\displaystyle {\frac {v}{c}}} is close to 1, as in a free-electron laser , compared to the usual conditions required for the SVEA validity. | https://en.wikipedia.org/wiki/Slowly_varying_envelope_approximation |
Sludge (possibly from Middle English slutch ' mud, mire ' , or some dialect related to slush ) [ 1 ] is a semi-solid slurry that can be produced from a range of industrial processes, from water treatment , wastewater treatment or on-site sanitation systems. It can be produced as a settled suspension obtained from conventional drinking water treatment , [ 2 ] as sewage sludge from wastewater treatment processes [ 3 ] : 23–25 or as fecal sludge from pit latrines and septic tanks . The term is also sometimes used as a generic term for solids separated from suspension in a liquid; this soupy material usually contains significant quantities of interstitial water (between the solid particles). Sludge can consist of a variety of particles, such as animal manure. [ 4 ] [ not specific enough to verify ]
Industrial wastewater treatment plants produce solids that are also referred to as sludge. This can be generated from biological or physical-chemical processes.
In the activated sludge process for wastewater treatment, the terms "waste activated sludge" and "return activated sludge" are used.
Sludge from the food-processing and beverage-making industries can have a high content of protein and other nutrients. Thus, it can be processed for beneficial uses such as animal feed, rather than being landfilled .
There are several types of sludge, often categorized by their origin or processing stages:
Sludge composition varies significantly based on its source and the treatment process used. It generally includes:
Proper sludge treatment and disposal are crucial to minimize environmental and public health impacts.
Common methods include:
Some treated sludge, known as biosolids , can be used as fertilizer in agriculture due to its nutrient content. However, the presence of contaminants like heavy metals and pathogens requires careful regulation and management. In many countries, guidelines limit the application of biosolids to protect soil health and groundwater quality. [ 10 ] There is also increasing concern over " forever chemicals " like PFAS ( per- and polyfluoroalkyl substances ) that can accumulate in sludge and pose long-term environmental risks. [ 10 ]
Many countries have established regulatory frameworks for sludge management. In the United States , for instance, the Environmental Protection Agency (EPA) oversees the safe disposal and reuse of sludge through its "Part 503" regulations. These regulations set limits on pathogens, heavy metals, and other contaminants to ensure biosolids used in agriculture or land application are safe. [ 10 ] Similarly, the European Union has strict directives regarding sludge, emphasizing sustainable practices and environmental protection. [ 11 ]
The EPA, under Clean Water Act (CWA) section 405(d), established regulations for the use and disposal of sewage sludge ( biosolids ) found in 40 CFR Part 503 . These standards regulate sludge applied to land, incinerated, or placed in surface disposal sites, addressing pollutant limits, pathogen and vector reduction, management practices, monitoring, recordkeeping, and reporting. They apply to anyone handling, applying, or disposing of sewage sludge, as well as operators of disposal sites. Initially finalized in 1993, 40 CFR Part 503 has been amended several times. The original regulation is in the Federal Register , while the updated version is in the Code of Federal Regulations . [ 12 ]
The EU Sewage Sludge Directive (SSD) aims to promote the safe use of sewage sludge in agriculture while protecting human health, soil, water, and the environment. It prohibits untreated sludge on agricultural land unless properly incorporated into the soil, mandates adherence to plant nutrient requirements, and prevents soil and water contamination. The Directive also supports the EU's waste hierarchy by encouraging safe recycling of nutrients like phosphorus , aligning with circular economy principles and the European Green Deal's zero pollution goals.
Using treated sludge as an alternative to chemical fertilizers reduces dependence on raw material extraction but requires strict control to avoid spreading contaminants. A 2014 evaluation of the SSD highlighted shortcomings due to its outdated framework, including gaps in addressing modern pollutants (e.g., pharmaceuticals , microplastics ) and its alignment with the EU's circular economy goals. It also identified a need to regulate other sludge uses and consider interactions with newer policies, such as the Urban Waste Water Treatment Directive (UWWTD). [ 13 ]
Since then, scientific advances, policy changes, and new EU strategies (e.g., Circular Economy Action Plan, Farm to Fork Strategy, Biodiversity Strategy 2030) have underscored the need to update the SSD. A comprehensive evaluation is underway to determine whether revisions are necessary to meet contemporary environmental, health, and resource efficiency needs. [ 14 ]
| https://en.wikipedia.org/wiki/Sludge
In fluid mechanics , slug flow in liquid–gas two-phase flow is a type of flow pattern in which the lighter, faster-moving continuous fluid pushes along disperse bubbles of gas. [ 1 ] [ 2 ] Pressure oscillations within piping can be caused by slug flow. [ 3 ] The word slug usually refers to the heavier, slower-moving fluid, but can also be used to refer to the bubbles of the lighter fluid.
This flow is characterised by the intermittent sequence of liquid slugs followed by longer gas bubbles flowing through a pipe. The flow regime is similar to plug flow , but the bubbles are larger and move at a greater velocity.
| https://en.wikipedia.org/wiki/Slug_flow
In hydrogeology , a slug test is a particular type of aquifer test where water is quickly added or removed from a groundwater well , and the change in hydraulic head is monitored through time, to determine the near-well aquifer characteristics. It is a method used by hydrogeologists and civil engineers to determine the transmissivity / hydraulic conductivity and storativity of the material the well is completed in.
The "slug" of water can either be added to or removed from the well — the only requirement is that it be done as quickly as possible (the interpretation typically assumes instantaneously), then the water level or pressure is monitored. Depending on the properties of the aquifer and the size of the slug, the water level may return to pre-test levels very quickly (thus complicating accurate collection of water level data).
A slug can be added by either quickly adding a measured amount of water to the well or something which displaces a measured volume (e.g., a long heavy pipe with the ends capped off). An alternative object is a solid polyvinyl chloride (PVC) rod, with sufficient weight to sink into the groundwater. The objective here is to displace water, not merely be "heavy". A slug of water can be removed using a bailer or pump , but this is more difficult to do since it must be done very quickly and the equipment for removing the water (pump or bailer) will likely be in the way of getting water level measurements.
A slug test is in contrast to standard aquifer tests, which typically involve pumping a well at a constant flowrate, and monitoring the response of the aquifer in nearby monitoring wells. Often slug tests are performed instead of a constant rate test, because:
The size of the slug required is determined by the aquifer properties, the size of the well and the amount of time which is available for the test. For very permeable aquifers, the pulse will dissipate very quickly. If the well has a large diameter, a large volume of water must be added to increase the level in the well a measurable amount.
Because the flow rate into or out of the well is not constant, as is the case in a typical aquifer test, the standard Theis solution does not work.
Mathematically , the Theis equation is the solution of the groundwater flow equation for a step increase in discharge rate at the pumping well; a slug test is instead an instantaneous pulse at the pumping well. This means that a superposition (or more precisely a convolution ) of an infinite number of sequential slug tests through time would effectively be a "standard" Theis aquifer test.
There are several known solutions to the slug test problem; a common engineering approximation is the Hvorslev method, which approximates the more rigorous solution to transient aquifer flow with a simple decaying exponential function.
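A minimal sketch of a Hvorslev-style interpretation, using synthetic head data and a hypothetical well geometry. The geometric factor below is one common form of Hvorslev's formula (valid roughly when the screen length is more than about eight well radii); a real analysis should use the form matching the actual well geometry:

```python
import math

# Hvorslev's approximation: normalized head decays exponentially,
#   H(t)/H0 = exp(-t / T0),
# and for a common well geometry the conductivity estimate is
#   K = r_c**2 * ln(L_e / R_w) / (2 * L_e * T0),
# where r_c = casing radius, L_e = screen length, R_w = well radius,
# and T0 is the "basic time lag" (time for H/H0 to fall to ~0.37).
# Geometry and head data below are hypothetical.

r_c, R_w, L_e = 0.05, 0.08, 2.0           # metres
times = [0.0, 30.0, 60.0, 90.0, 120.0]    # seconds since the slug
heads = [1.0, 0.61, 0.37, 0.22, 0.14]     # H/H0, synthetic slug-test record

# Least-squares slope of ln(H/H0) vs t (line through the origin) gives -1/T0.
num = sum(t * math.log(h) for t, h in zip(times, heads))
den = sum(t * t for t in times)
T0 = -den / num                            # basic time lag, s

K = r_c ** 2 * math.log(L_e / R_w) / (2 * L_e * T0)   # hydraulic conductivity, m/s
print(T0, K)
```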
The aquifer parameters obtained from a slug test are typically less representative of the aquifer surrounding the well than an aquifer test which involves pumping in one well and monitoring in another. Complications arise from near-well effects (i.e., well skin and wellbore storage), which may make it difficult to get accurate results from slug test interpretation. | https://en.wikipedia.org/wiki/Slug_test |
A slug catcher is a unit in the gas refinery or petroleum industry in which slugs at the outlet of pipelines are collected or caught. A slug is a large quantity of liquid that exists in a multi-phase pipeline.
Pipelines that transport both gas and liquids together, known as two-phase flow , can operate in a flow regime known as slugging flow or slug flow . Under the influence of gravity , liquids will tend to settle on the bottom of the pipeline, while the gases occupy the top section of the pipeline. Under certain operating conditions gas and liquid are not evenly distributed throughout the pipeline, but travel as large plugs with mostly liquids or mostly gases through the pipeline. These large plugs are called slugs.
Slugs exiting the pipeline can overload the gas/liquid handling capacity of the plant at the pipeline outlet, as they are often produced at a much larger rate than the equipment is designed for.
Slugs can be generated by different mechanisms in a pipeline:
Slugs formed by terrain slugging, hydrodynamic slugging or riser-based slugging are periodical in nature. Whether a slug is able to reach the outlet of the pipeline depends on the rate at which liquids are added to the slug at the front (i.e. in the direction of flow) and the rate at which liquids leave the slug at the back. Some slugs will grow as they travel the pipeline, while others are damped and disappear before reaching the outlet of the pipeline.
A slug catcher is a vessel with sufficient buffer volume to store the largest slugs expected from the upstream system. The slug catcher is located between the outlet of the pipeline and the processing equipment. The buffered liquids can be drained to the processing equipment at a much slower rate to prevent overloading the system. As slugs are a periodical phenomenon, the slug catcher should be emptied before the next slug arrives.
Slug catchers can be used continuously or on-demand. A slug catcher permanently connected to the pipeline will buffer all production, including the slugs, before it is sent to the gas and liquid handling facilities. This is used for difficult to predict slugging behaviour found in terrain slugging, hydrodynamic slugging or riser-based slugging. Alternatively, the slug catcher can be bypassed in normal operation and be brought online when a slug is expected, usually during pigging operations. An advantage of this set-up is that inspection and maintenance on the slug catcher can be done without interrupting the normal operation.
Slug catchers are designed in different forms, such as the vessel type (essentially a conventional gas–liquid separator with buffer volume) and the finger type (a manifold of long pipe segments that together provide the buffer volume).
Finger type slug catchers are large in size and can be observed on satellite images. The following table is generated using Google Earth and gives an overview of slug catchers in the world. The slug catcher length is determined using the measurement tool in Google Earth and is estimated to be accurate to ±5 meters.
| https://en.wikipedia.org/wiki/Slugcatcher
Slugs is an open-source autopilot system oriented toward inexpensive autonomous aircraft. [ 1 ] Low cost and wide availability enable hobbyist use in small remotely piloted aircraft . The project started in 2009 [ 2 ] and is being further developed and used at Autonomous Systems Lab of University of California Santa Cruz . Several vendors produce Slugs autopilots and accessories. [ 3 ] [ 4 ]
An autopilot allows a remotely piloted aircraft to be flown out of sight. All hardware and software are open-source and freely available to anyone under the MIT license. Free-software autopilots provide more flexible hardware and software: users can modify the autopilot based on their own special requirements, such as forest fire evaluation. The free-software approach of Slugs is similar to that of the Paparazzi Project , PX4 autopilot , ArduPilot and OpenPilot , where low cost and availability enable hobbyist use in small remotely piloted aircraft such as micro air vehicles and miniature UAVs . Such frameworks are common in open-source robotics .
The open-source software suite contains everything needed to let airborne systems fly.
| https://en.wikipedia.org/wiki/Slugs_(autopilot_system)
A slurry is a mixture of denser solids suspended in liquid, usually water. The most common use of slurry is as a means of transporting solids or separating minerals, the liquid being a carrier that is pumped with a device such as a centrifugal pump . The size of the solid particles may vary from 1 micrometre up to hundreds of millimetres .
The particles may settle below a certain transport velocity and the mixture can behave like a Newtonian or non-Newtonian fluid. Depending on the mixture, the slurry may be abrasive and/or corrosive.
Examples of slurries include:
To determine the percent solids (or solids fraction) of a slurry from the density of the slurry, solids and liquid [ 7 ]
ϕ s o l = ρ s o l ( ρ s l − ρ l i q ) ρ s l ( ρ s o l − ρ l i q ) {\displaystyle \phi _{\mathrm {sol} }={\frac {\rho _{\mathrm {sol} }\,(\rho _{\mathrm {sl} }-\rho _{\mathrm {liq} })}{\rho _{\mathrm {sl} }\,(\rho _{\mathrm {sol} }-\rho _{\mathrm {liq} })}}}
where ϕ s o l {\displaystyle \phi _{\mathrm {sol} }} is the solids fraction by mass, and ρ s l {\displaystyle \rho _{\mathrm {sl} }} , ρ s o l {\displaystyle \rho _{\mathrm {sol} }} and ρ l i q {\displaystyle \rho _{\mathrm {liq} }} are the densities of the slurry, the solids and the liquid respectively.
In aqueous slurries, as is common in mineral processing, the specific gravity of the species is typically used, and since the specific gravity of water is taken to be 1, this relation is typically written:
ϕ s o l = S G s o l ( S G s l − 1 ) S G s l ( S G s o l − 1 ) {\displaystyle \phi _{\mathrm {sol} }={\frac {\mathrm {SG} _{\mathrm {sol} }\,(\mathrm {SG} _{\mathrm {sl} }-1)}{\mathrm {SG} _{\mathrm {sl} }\,(\mathrm {SG} _{\mathrm {sol} }-1)}}}
even though specific gravity with units tonnes/m 3 (t/m 3 ) is used instead of the SI density unit, kg/m 3 .
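Assuming additive phase volumes, the solids mass fraction follows from the three densities as φ_sol = ρ_sol(ρ_sl − ρ_liq) / (ρ_sl(ρ_sol − ρ_liq)). The sketch below checks this with a round trip from an assumed mass fraction to slurry density and back (illustrative values):

```python
def solids_mass_fraction(rho_slurry, rho_solids, rho_liquid):
    """Mass fraction of solids from the three densities
    (assumes the phase volumes are additive)."""
    return rho_solids * (rho_slurry - rho_liquid) / (
        rho_slurry * (rho_solids - rho_liquid))

def slurry_density(phi, rho_solids, rho_liquid):
    """Inverse relation: slurry density from the solids mass fraction phi."""
    return 1.0 / (phi / rho_solids + (1.0 - phi) / rho_liquid)

# Example: quartz sand (SG 2.65) in water (SG 1.0) at 50% solids by mass.
rho_sl = slurry_density(0.5, 2.65, 1.0)
print(rho_sl)                                    # ~1.452 t/m3
print(solids_mass_fraction(rho_sl, 2.65, 1.0))   # ~0.5 (round trip)
```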
To determine the mass of liquid in a sample given the mass of solids and the mass fraction: by definition
ϕ s o l = M s o l M s o l + M l i q {\displaystyle \phi _{\mathrm {sol} }={\frac {M_{\mathrm {sol} }}{M_{\mathrm {sol} }+M_{\mathrm {liq} }}}}
and therefore
M l i q = M s o l 1 − ϕ s o l ϕ s o l {\displaystyle M_{\mathrm {liq} }=M_{\mathrm {sol} }\,{\frac {1-\phi _{\mathrm {sol} }}{\phi _{\mathrm {sol} }}}}
where ϕ s o l {\displaystyle \phi _{\mathrm {sol} }} is the solids fraction by mass, M s o l {\displaystyle M_{\mathrm {sol} }} is the mass of solids and M l i q {\displaystyle M_{\mathrm {liq} }} is the mass of liquid. Equivalently, in a minerals processing context where the specific gravity of the liquid (water) is taken to be one, the same relations can be written in terms of specific gravities and measured volumes. | https://en.wikipedia.org/wiki/Slurry
A slurry pipeline is a specially engineered pipeline used to move ores, such as coal or iron, or mining waste, called tailings , over long distances. A mixture of the ore concentrate and water, called slurry , is pumped to its destination and the water is filtered out. Due to the abrasive properties of slurry , the pipelines can be lined with high-density polyethylene (HDPE), or manufactured completely from HDPE pipe, although this requires a very thick pipe wall. [ 1 ] Slurry pipelines are used as an alternative to railroad transportation when mines are located in remote, inaccessible areas.
Canadian researchers at the University of Alberta are investigating the use of slurry pipelines to move agricultural and forestry wastes from dispersed sources to centralized biofuel plants. Over distances of 100 kilometres pipeline transport of biomass can be viable provided it is used in processes that can accept very wet feedstocks such as hydrothermal liquefaction or ethanol fermentation. Compared to an equivalently sized oil pipeline, a biomass slurry pipeline would carry around 8% of the energy. [ 2 ]
The concentrate of the ore is mixed with water and then pumped over a long distance to a port where it can be shipped for further processing. At the end of the pipeline, the material is separated from the slurry in a filter press to remove the water. This water is usually subjected to a waste treatment process before disposal or return to the mine. Slurry pipelines offer an economic advantage over railroad transport and much less noise disturbance to the environment, particularly when mines are in extremely remote areas.
Pipelines must be suitably engineered to resist abrasion from the solids as well as corrosion from the soil. Some of these pipelines are lined with high-density polyethylene (HDPE).
Typical materials that are transferred using slurry pipelines include coal , [ 3 ] copper , iron , and phosphate concentrates, limestone , lead , zinc , nickel , bauxite and oil sands .
Slurry pipelines are also used to transport tailings from a mineral processing plant after the ore has been processed in order to dispose of the remaining rocks or clays.
For oil sand plants, a mixture of oil sand and water may be pumped over a long distance to release the bitumen by ablation . These pipelines are also called hydrotransport pipelines.
Early modern slurry pipelines include the Ohio 'Consolidation' coal slurry pipeline (1957) and the Kensworth to Rugby limestone slurry pipeline (1965). [ 4 ]
The 85 km Savage River Slurry pipeline in Tasmania , Australia , was possibly the world's first slurry pipeline to transport iron ore when it was built in 1967. It includes a 366m bridge span at 167m above the Savage River. It carries iron ore slurry from the Savage River open cut mine owned by Australian Bulk Minerals and was still operational as of 2011. [ 5 ] [ 6 ]
One of the longest slurry pipelines proposed was the ETSI pipeline, intended to transport coal from Wyoming to Louisiana over a distance of 1,036 miles (1,675 km); it was never commissioned. It is anticipated that in the next few years some long-distance slurry pipelines will be constructed in Australia and South America, where mineral deposits are often a few hundred kilometers away from shipping ports.
A 525 km slurry pipeline is planned for the Minas-Rio iron ore mine in Brazil . [ 7 ]
Slurry pipelines are also being considered to desilt or remove silts from deposits behind dams in man-made lakes. After the Hurricane Katrina disaster there were proposals to remedy the environment by pumping silt to the shore. Proposals have also been made to de-silt Lake Nubia-Nasser in Egypt and Sudan by slurry pipelines, as Egypt is now deprived of 95% of its alluvium , which used to arrive every year. These projects to remedy the environment might alleviate one of the major problems associated with large dams and man-made lakes.
ESSAR Steel India Limited owns two >250 km slurry pipelines in India; the Kirandul-Vishakhapatnam (slurry pipeline) and Dabuna-Paradeep pipeline. | https://en.wikipedia.org/wiki/Slurry_pipeline |
In probability theory , Slutsky's theorem extends some properties of algebraic operations on convergent sequences of real numbers to sequences of random variables . [ 1 ]
The theorem was named after Eugen Slutsky . [ 2 ] Slutsky's theorem is also attributed to Harald Cramér . [ 3 ]
Let X n , Y n {\displaystyle X_{n},Y_{n}} be sequences of scalar/vector/matrix random elements .
If X n {\displaystyle X_{n}} converges in distribution to a random element X {\displaystyle X} and Y n {\displaystyle Y_{n}} converges in probability to a constant c {\displaystyle c} , then X n + Y n → d X + c {\displaystyle X_{n}+Y_{n}{\xrightarrow {d}}X+c} , X n Y n → d X c {\displaystyle X_{n}Y_{n}{\xrightarrow {d}}Xc} , and X n Y n − 1 → d X c − 1 {\displaystyle X_{n}Y_{n}^{-1}{\xrightarrow {d}}Xc^{-1}} (the last provided that c {\displaystyle c} is invertible, i.e. c ≠ 0 {\displaystyle c\neq 0} in the scalar case),
where → d {\displaystyle {\xrightarrow {d}}} denotes convergence in distribution .
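The theorem can be illustrated with a quick Monte Carlo sketch (an illustration, not part of the article; the uniform distribution and the sample sizes below are arbitrary choices). Here X_n comes from the central limit theorem, Y_n from the law of large numbers, and the product X_n Y_n should be approximately distributed as cX = 0.5 N(0, 1), i.e. N(0, 0.25):

```python
import numpy as np

# Monte Carlo illustration of Slutsky's theorem (illustrative sample sizes).
rng = np.random.default_rng(0)
reps, n = 5_000, 1_000
u = rng.random((reps, n))                        # Uniform(0,1) draws
sd = np.sqrt(1 / 12)                             # std of Uniform(0,1)

x_n = np.sqrt(n) * (u.mean(axis=1) - 0.5) / sd   # CLT: -> N(0,1) in distribution
y_n = u.mean(axis=1)                             # LLN: -> c = 0.5 in probability

# Slutsky: X_n * Y_n -> 0.5 * N(0,1) = N(0, 0.25) in distribution
prod = x_n * y_n
print(prod.mean(), prod.std())                   # approximately 0.0 and 0.5
```

Note that x_n and y_n here are built from the same draws and are strongly dependent; Slutsky's theorem makes no independence assumption, only that the second sequence converges to a constant.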
Notes:
This theorem follows from the fact that if X n converges in distribution to X and Y n converges in probability to a constant c , then the joint vector ( X n , Y n ) converges in distribution to ( X , c ).
Next we apply the continuous mapping theorem , recognizing that the functions g ( x , y ) = x + y , g ( x , y ) = xy , and g ( x , y ) = xy −1 are continuous (for the last function to be continuous, y has to be invertible). | https://en.wikipedia.org/wiki/Slutsky's_theorem |
In microeconomics , the Slutsky equation (or Slutsky identity ), named after Eugen Slutsky , relates changes in Marshallian (uncompensated) demand to changes in Hicksian (compensated) demand , so called because the latter compensates the consumer to maintain a fixed level of utility.
There are two parts of the Slutsky equation, namely the substitution effect and income effect . In general, the substitution effect is negative. Slutsky derived this formula to explore a consumer's response as the price of a commodity changes. When the price increases, the budget set moves inward, which also causes the quantity demanded to decrease. In contrast, if the price decreases, the budget set moves outward, which leads to an increase in the quantity demanded. The substitution effect is due to the effect of the relative price change, while the income effect is due to the effect of income being freed up. The equation demonstrates that the change in the demand for a good caused by a price change is the result of two effects:
The Slutsky equation decomposes the change in demand for good i in response to a change in the price of good j :
where h ( p , u ) {\displaystyle h(\mathbf {p} ,u)} is the Hicksian demand and x ( p , w ) {\displaystyle x(\mathbf {p} ,w)} is the Marshallian demand, at the vector of price levels p {\displaystyle \mathbf {p} } , wealth level (or income level) w {\displaystyle w} , and fixed utility level u {\displaystyle u} given by maximizing utility at the original price and income, formally presented by the indirect utility function v ( p , w ) {\displaystyle v(\mathbf {p} ,w)} . The right-hand side of the equation equals the change in demand for good i holding utility fixed at u minus the quantity of good j demanded, multiplied by the change in demand for good i when wealth changes.
The first term on the right-hand side represents the substitution effect, and the second term represents the income effect. [ 1 ] Note that since utility is not observable, the substitution effect is not directly observable. Still, it can be calculated by referencing the other two observable terms in the Slutsky equation. This process is sometimes known as the Hicks decomposition of a demand change. [ 2 ]
The equation can be rewritten in terms of elasticity :
where ε p is the (uncompensated) price elasticity , ε p h is the compensated price elasticity, ε w,i the income elasticity of good i , and b j the budget share of good j .
Overall, the Slutsky equation states that the total change in demand consists of an income effect and a substitution effect, and both effects must collectively equal the total change in demand.
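The elasticity form can be checked on a concrete demand function (a hedged illustration using the Cobb-Douglas demands from the worked example later in the article, x 1 = .7 w / p 1 ; the numbers are not from the article itself). With ε p = −1, ε w = 1, and budget share b 1 = 0.7, the Slutsky equation in elasticity form, ε p = ε p h − b ε w , gives a compensated own-price elasticity of −0.3, which is exactly the exponent of p 1 in the corresponding Hicksian demand:

```python
# Elasticity form of the Slutsky equation, eps_p = eps_p_h - b * eps_w,
# checked on the Cobb-Douglas demand x1 = .7 w / p1 (illustrative numbers).
eps_p = -1.0   # uncompensated own-price elasticity of x1 = .7 w / p1
eps_w = 1.0    # income elasticity of x1
b1 = 0.7       # budget share of good 1

eps_p_h = eps_p + b1 * eps_w   # compensated (Hicksian) own-price elasticity
print(eps_p_h)                 # approximately -0.3
```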
The equation above is helpful because it demonstrates that changes in demand indicate different types of goods. The substitution effect is negative, as indifference curves always slope downward. However, the same does not apply to the income effect , which depends on how income affects the consumption of a good.
The income effect on a normal good is negative, so if its price decreases, the consumer's purchasing power or income increases. The reverse holds when the price increases and purchasing power or income decreases.
An example of inferior goods is instant noodles. When consumers run low on money for food, they purchase instant noodles; however, the product is not generally considered something people would normally consume daily. This is due to money constraints; as wealth increases, consumption decreases. In this case, the substitution effect is negative, but the income effect is positive.
Whether the substitution and income effects are positive or negative when prices increase depends on the type of good:
For inferior complementary goods, however, it is impossible to tell whether the total effect will be negative: the substitution effect and the income effect pull in opposite directions, and the total effect depends on which effect is ultimately stronger.
While there are several ways to derive the Slutsky equation, the following method is likely the simplest. Begin by noting the identity h i ( p , u ) = x i ( p , e ( p , u ) ) {\displaystyle h_{i}(\mathbf {p} ,u)=x_{i}(\mathbf {p} ,e(\mathbf {p} ,u))} where e ( p , u ) {\displaystyle e(\mathbf {p} ,u)} is the expenditure function , and u is the utility obtained by maximizing utility given p and w . Totally differentiating with respect to p j yields the following:
Making use of the fact that ∂ e ( p , u ) ∂ p j = h j ( p , u ) {\displaystyle {\frac {\partial e(\mathbf {p} ,u)}{\partial p_{j}}}=h_{j}(\mathbf {p} ,u)} by Shephard's lemma and that at optimum,
one can substitute and rewrite the derivation above as the Slutsky equation.
The Slutsky equation can be rewritten in matrix form:
where D p is the derivative operator with respect to prices and D w is the derivative operator with respect to wealth.
The matrix D p h ( p , u ) {\displaystyle \mathbf {D_{p}h} (\mathbf {p} ,u)} is known as the Hicksian substitution matrix and is formally defined as:
The Slutsky matrix is given by:
When u {\displaystyle u} is the maximum utility the consumer achieves at prices p {\displaystyle \mathbf {p} } and income w {\displaystyle w} , that is, u = v ( p , w ) {\displaystyle u=v(\mathbf {p} ,w)} , the Slutsky equation implies that each element of the Slutsky matrix S ( p , w ) {\displaystyle S(\mathbf {p} ,w)} is exactly equal to the corresponding component of the Hicksian substitution matrix σ ( p , u ) {\displaystyle \sigma (\mathbf {p} ,u)} , or :
The Slutsky matrix is symmetric, and given that the expenditure function e ( p , u ) {\displaystyle e(\mathbf {p} ,u)} is concave, the Slutsky matrix is also negative semi-definite .
A Cobb-Douglas utility function (see Cobb-Douglas production function ) with two goods and income w {\displaystyle w} generates Marshallian demand for goods 1 and 2 of x 1 = .7 w / p 1 {\displaystyle x_{1}=.7w/p_{1}} and x 2 = .3 w / p 2 . {\displaystyle x_{2}=.3w/p_{2}.} Rearranging the Slutsky equation to put the Hicksian derivative on the left-hand side yields the substitution effect:
Going back to the original Slutsky equation shows how the substitution and income effects add up to give the total effect of the price rise on quantity demanded:
Thus, of the total decline of .7 w / p 1 2 {\displaystyle .7w/p_{1}^{2}} in quantity demanded when p 1 {\displaystyle p_{1}} rises, 21/70 is from the substitution effect and 49/70 from the income effect. Good 1 is the good on which this consumer spends most of his income ( p 1 q 1 = .7 w {\displaystyle p_{1}q_{1}=.7w} ), which is why the income effect is so large.
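This arithmetic can be verified numerically. The sketch below is an illustrative check, not part of the article: the price and income values are arbitrary, and the expenditure function e(p, u) = u p 1 .7 p 2 .3 is derived from the indirect utility v = w p 1 −.7 p 2 −.3 given in this example. A finite-difference derivative of the resulting Hicksian demand reproduces the substitution effect −.21 w / p 1 2 :

```python
# Numeric check of the own-price Slutsky decomposition for x1 = .7 w / p1.
p1, p2, w = 2.0, 3.0, 100.0            # arbitrary illustrative values
u = w * p1**-0.7 * p2**-0.3            # utility at the initial prices and income

def hicksian_x1(p1_):
    # h1(p, u) = x1(p, e(p, u)) with e(p, u) = u * p1^.7 * p2^.3
    return 0.7 * u * p1_**-0.3 * p2**0.3

eps = 1e-6
subst_fd = (hicksian_x1(p1 + eps) - hicksian_x1(p1 - eps)) / (2 * eps)

total = -0.7 * w / p1**2               # d x1 / d p1 (total effect)
income = -(0.7 * w / p1) * (0.7 / p1)  # -x1 * d x1 / d w = -.49 w / p1^2
subst = total - income                 # -.21 w / p1^2

# total, substitution, income effects (approximately -17.5, -5.25, -12.25)
print(total, subst, income)
```

With these numbers the substitution share is 5.25/17.5 = 21/70 and the income share 12.25/17.5 = 49/70, as stated above.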
One can check that the answer from the Slutsky equation is the same as from directly differentiating the Hicksian demand function, which here is [ 3 ]
where u {\displaystyle u} is utility. The derivative is
so since the Cobb-Douglas indirect utility function is v = w p 1 − .7 p 2 − .3 , {\displaystyle v=wp_{1}^{-.7}p_{2}^{-.3},} and u = v {\displaystyle u=v} when the consumer uses the specified demand functions, the derivative is:
which is indeed the Slutsky equation's answer.
The Slutsky equation also can be applied to compute the cross-price substitution effect. One might think it was zero here because when p 2 {\displaystyle p_{2}} rises, the Marshallian quantity demanded of good 1, x 1 ( p 1 , p 2 , w ) , {\displaystyle x_{1}(p_{1},p_{2},w),} is unaffected ( ∂ x 1 / ∂ p 2 = 0 {\displaystyle \partial x_{1}/\partial p_{2}=0} ), but that is wrong. Again rearranging the Slutsky equation, the cross-price substitution effect is:
This says that when p 2 {\displaystyle p_{2}} rises, there is a substitution effect of .21 w / ( p 1 p 2 ) {\displaystyle .21w/(p_{1}p_{2})} towards good 1. At the same time, the rise in p 2 {\displaystyle p_{2}} has a negative income effect on good 1's demand, an opposite effect of the same size as the substitution effect, so the net effect is zero. This is a special property of the Cobb-Douglas function.
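The same finite-difference check works for the cross-price terms (again an illustrative sketch with arbitrary prices and income, using the expenditure function e(p, u) = u p 1 .7 p 2 .3 implied by the example's normalization): the substitution effect .21 w /( p 1 p 2 ) and the income effect −x 2 ∂x 1 /∂w cancel exactly.

```python
# Cross-price substitution effect on x1 when p2 rises, and the cancelling
# income effect -x2 * dx1/dw, for the Cobb-Douglas example x1 = .7 w / p1.
p1, p2, w = 2.0, 3.0, 100.0            # arbitrary illustrative values
u = w * p1**-0.7 * p2**-0.3

def hicksian_x1(p2_):
    return 0.7 * u * p1**-0.3 * p2_**0.3   # h1(p, u)

eps = 1e-6
subst_fd = (hicksian_x1(p2 + eps) - hicksian_x1(p2 - eps)) / (2 * eps)
subst = 0.21 * w / (p1 * p2)               # claimed substitution effect

x2 = 0.3 * w / p2
income = -x2 * (0.7 / p1)                  # -x2 * d x1 / d w
print(subst, income, subst + income)       # approximately 3.5, -3.5, 0
```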
When there are two goods, the Slutsky equation in matrix form is: [ 4 ]
Although strictly speaking the Slutsky equation applies only to infinitesimal price changes, a linear approximation for finite changes is commonly used. If the prices of the two goods change by Δ p 1 {\displaystyle \Delta p_{1}} and Δ p 2 {\displaystyle \Delta p_{2}} , the effects on the demands for the two goods are:
Multiplying out the matrices, the effect on good 1, for example, would be
The first term is the substitution effect. The second term is the income effect, which is composed of the consumer's response to income loss multiplied by the size of the income loss from each price increase.
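For the two-good Cobb-Douglas example, this matrix bookkeeping can be sketched as follows (an illustration with arbitrary prices, income, and price changes; it also checks the symmetry and negative semi-definiteness of the Slutsky matrix discussed earlier):

```python
import numpy as np

# Slutsky matrix S = D_p x + (D_w x) x^T for the demands x_i = a_i * w / p_i.
a = np.array([0.7, 0.3])                       # Cobb-Douglas budget shares
p, w = np.array([2.0, 3.0]), 100.0             # arbitrary prices and income

x = a * w / p                                  # Marshallian demands
Dp_x = np.diag(-a * w / p**2)                  # price Jacobian (diagonal here)
Dw_x = a / p                                   # wealth derivatives

S = Dp_x + np.outer(Dw_x, x)                   # Slutsky matrix

assert np.allclose(S, S.T)                     # symmetric
assert np.linalg.eigvalsh(S).max() <= 1e-9     # negative semi-definite

# Finite price change: substitution term plus income term -(D_w x)(x . dp)
dp = np.array([0.1, -0.2])
dx_slutsky = S @ dp - Dw_x * (x @ dp)
dx_direct = Dp_x @ dp                          # direct linearization of x(p, w)
assert np.allclose(dx_slutsky, dx_direct)
print(S)
```

The zero eigenvalue of S reflects the homogeneity of demand: a proportional change in all prices and income leaves demand unchanged, so the substitution matrix is singular.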
A Giffen good is a product in greater demand when the price increases, which is also a special case of inferior goods. [ 5 ] In the extreme case of income inferiority, the size of the income effect overpowers the size of the substitution effect, leading to a positive overall change in demand responding to an increase in the price. Slutsky's decomposition of the change in demand into a pure substitution effect and income effect explains why the law of demand doesn't hold for Giffen goods.
Varian, H. R. (2020). Intermediate Microeconomics: A Modern Approach (9th ed.). W. W. Norton & Company. | https://en.wikipedia.org/wiki/Slutsky_equation |
Samarium(III) oxide ( Sm 2 O 3 ) is a chemical compound . Samarium oxide readily forms on the surface of samarium metal under humid conditions or at temperatures in excess of 150 °C in dry air. Similar to rust on metallic iron, this oxide layer spalls off the surface of the metal, exposing more metal to continue the reaction. The oxide is commonly white to pale yellow in color and is often encountered as a very fine, dust-like powder.
Samarium(III) oxide is used in optical and infrared absorbing glass to absorb infrared radiation . Also, it is used as a neutron absorber in control rods for nuclear power reactors. The oxide catalyzes the dehydration and dehydrogenation of primary and secondary alcohols. [ 1 ] Another use involves preparation of other samarium salts. [ 2 ]
Samarium(III) oxide may be prepared by two methods:
1. thermal decomposition of samarium(III) carbonate, hydroxide, nitrate, oxalate or sulfate:
2. by burning the metal in air or oxygen at a temperature above 150 °C:
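The reaction equations themselves did not survive extraction; for the carbonate route and direct oxidation they take the standard forms (reconstructed here, not copied from the article):

```latex
\mathrm{Sm_2(CO_3)_3 \ \xrightarrow{\ \Delta\ }\ Sm_2O_3 + 3\,CO_2\uparrow}
\qquad
\mathrm{4\,Sm + 3\,O_2 \ \longrightarrow\ 2\,Sm_2O_3}
```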
Samarium(III) oxide dissolves in mineral acids, forming salts upon evaporation and crystallization :
The oxide can be reduced to metallic samarium by heating with a reducing agent , such as hydrogen or carbon monoxide , at elevated temperatures. | https://en.wikipedia.org/wiki/Sm2O3 |
Samarium(III) arsenide is a binary inorganic compound of samarium and arsenic with the chemical formula SmAs . [ 1 ] [ 2 ]
Samarium arsenide can be synthesised by heating of pure substances in vacuum:
Samarium arsenide forms crystals of a cubic system , [ 3 ] space group Fm3m , cell parameters a = 0.5921 nm, Z = 4, with the NaCl structure. [ 4 ] [ 5 ]
The compound melts congruently at 2257 °C.
SmAs is used as a semiconductor and in photo optic applications. [ 6 ] | https://en.wikipedia.org/wiki/SmAs |
Samarium tetraboride is a binary inorganic compound of samarium and boron with the formula SmB 4 . It forms black crystals.
Samarium tetraboride can be prepared from directly reacting samarium and boron at 2400 °C:
Samarium tetraboride forms crystals of the tetragonal crystal system , space group P4/mbm , cell parameters a = 0.7174 nm, c = 0.40696 nm, Z = 4, and a structure like thorium tetraboride. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ]
The compound is formed by a peritectic reaction at a temperature of 2400 °C. [ 1 ]
At temperatures of 25 K and 7 K, magnetic transitions occur in the compound. [ 6 ] | https://en.wikipedia.org/wiki/SmB4 |
Samarium(II) bromide is an inorganic compound with the chemical formula SmBr 2 . [ 6 ] It is a brown solid that is insoluble in most solvents but degrades readily in air. [ 4 ]
In the gas phase, SmBr 2 is a bent molecule with Sm–Br distance 274.5 pm and bond angle 131±6°. [ 7 ]
Samarium(II) bromide was first synthesized in 1934 by P. W. Selwood , when he reduced samarium tribromide (SmBr 3 ) with hydrogen (H 2 ). Kagan also synthesized it by converting samarium(III) oxide (Sm 2 O 3 ) to SmBr 3 and then reducing with a lithium dispersion in THF . Robert A. Flowers synthesized it by adding two equivalents of lithium bromide (LiBr) to samarium diiodide (SmI 2 ) in tetrahydrofuran . Namy managed to synthesize it by mixing tetrabromoethane (C 2 H 2 Br 4 ) with samarium metal, and Hilmerson found that heating the tetrabromoethane or samarium greatly improved the production of samarium(II) bromide. [ 8 ]
Samarium(II) bromide has reducing properties reminiscent of the more commonly used samarium diiodide . [ 9 ] It is effective for pinacol homocouplings of aldehydes and ketones and for cross-coupling of carbonyl compounds. Reports have shown that samarium(II) bromide is capable of selectively reducing ketones in the presence of an alkyl halide . [ 8 ]
Samarium(II) bromide forms soluble adducts with hexamethylphosphoramide . This species reduces imines to amines and alkyl chlorides to hydrocarbons . [ 10 ] For example, SmBr 2 (hmpa) x converts cyclohexyl chloride to cyclohexane . [ 11 ]
Samarium(II) bromide will reduce ketones in tetrahydrofuran if an activator is absent. [ 12 ] | https://en.wikipedia.org/wiki/SmBr2 |
Samarium(III) chloride , also known as samarium trichloride, is an inorganic compound of samarium and chloride . It is a pale yellow salt that rapidly absorbs water to form a hexahydrate , SmCl 3 ·6H 2 O. [ 1 ] The compound has few practical applications but is used in laboratories for research on new compounds of samarium.
Like several related chlorides of the lanthanides and actinides, SmCl 3 crystallises in the UCl 3 motif. The Sm 3+ centres are nine-coordinate, occupying trigonal prismatic sites with additional chloride ligands occupying the three square faces.
SmCl 3 is prepared by the " ammonium chloride " route, which involves the initial synthesis of (NH 4 ) 2 [SmCl 5 ]. This material can be prepared from samarium oxide at reaction temperatures of 230 °C: [ 2 ]
The pentachloride is then heated to 350-400 °C resulting in evolution of ammonium chloride and leaving a residue of the anhydrous trichloride:
It can also be prepared from samarium metal and hydrochloric acid . [ 3 ] [ 4 ]
Aqueous solutions of samarium(III) chloride can be prepared by dissolving metallic samarium or samarium carbonate in hydrochloric acid .
Samarium(III) chloride is a moderately strong Lewis acid , which ranks as "hard" according to the HSAB concept . Aqueous solutions of samarium chloride can be used to prepare samarium trifluoride :
Samarium(III) chloride is used for the preparation of samarium metal, which has a variety of uses, notably in magnets . Anhydrous SmCl 3 is mixed with sodium chloride or calcium chloride to give a low melting point eutectic mixture. Electrolysis of this molten salt solution gives the free metal . [ 5 ]
Samarium(III) chloride can also be used as a starting point for the preparation of other samarium salts . The anhydrous chloride is used to prepare organometallic compounds of samarium, such as bis(pentamethylcyclopentadienyl)alkylsamarium(III) complexes. [ 6 ] | https://en.wikipedia.org/wiki/SmCl3 |
Samarium(III) fluoride ( Sm F 3 ) is a slightly hygroscopic solid fluoride . It should be kept away from open flames, moisture, and strong acids.
Samarium(III) fluoride can be obtained by reacting SmCl 3 or Sm 2 (CO 3 ) 3 with 40% hydrofluoric acid: [ 1 ]
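The equations for these preparations were lost in extraction; they are presumably of the standard form (reconstructed here, not copied from the article):

```latex
\mathrm{SmCl_3 + 3\,HF \ \longrightarrow\ SmF_3\downarrow + 3\,HCl}
\qquad
\mathrm{Sm_2(CO_3)_3 + 6\,HF \ \longrightarrow\ 2\,SmF_3\downarrow + 3\,CO_2\uparrow + 3\,H_2O}
```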
Samarium(III) fluoride can also be produced by hydrothermal reaction of samarium nitrate and sodium fluoroborate at 200 °C. [ 2 ]
Samarium(III) fluoride reacts with some reducing agents at high temperatures to obtain samarium(II) fluoride :
At room temperature, samarium(III) fluoride has an orthorhombic structure with space group Pnma (β-YF 3 type) with lattice constants a = 666.9 pm , b = 705.9 pm, c = 440.5 pm. Above 495 °C, it has the rhombohedral LaF 3 structure (space group P3c1 ) with lattice constants a = 707 pm, c = 724 pm. [ 3 ] | https://en.wikipedia.org/wiki/SmF3 |
Samarium(II) iodide is an inorganic compound with the formula SmI 2 . When employed as a solution for organic synthesis , it is known as Kagan 's reagent . SmI 2 is a green solid and forms a dark blue solution in THF . [ 1 ] It is a strong one-electron reducing agent that is used in organic synthesis .
In solid samarium(II) iodide, the metal centers are seven-coordinate with a face-capped octahedral geometry . [ 2 ]
In its ether adducts , samarium remains heptacoordinate with five ether and two terminal iodide ligands. [ 3 ]
Samarium iodide is easily prepared in nearly quantitative yields from samarium metal and either diiodomethane or 1,2-diiodoethane . [ 4 ] When prepared in this way, its solutions are most often used without purification of the inorganic reagent.
Solid, solvent-free SmI 2 forms by high temperature decomposition of samarium(III) iodide (SmI 3 ). [ 5 ] [ 6 ] [ 7 ]
Samarium(II) iodide is a powerful reducing agent – for example it rapidly reduces water to hydrogen . [ 2 ] It is available commercially as a dark blue 0.1 M solution in THF. Although used typically in superstoichiometric amounts, catalytic applications have been described. [ 8 ]
Samarium(II) iodide is a reagent for carbon-carbon bond formation, for example in a Barbier reaction (similar to the Grignard reaction ) between a ketone and an alkyl iodide to form a tertiary alcohol : [ 9 ]
Typical reaction conditions use SmI 2 in THF in the presence of catalytic NiI 2 .
Esters react similarly (adding two R groups), but aldehydes give by-products. The reaction is convenient in that it is often very rapid (5 minutes or less in the cold). Although samarium(II) iodide is considered a powerful single-electron reducing agent, it does display remarkable chemoselectivity among functional groups. For example, sulfones and sulfoxides can be reduced to the corresponding sulfides in the presence of a variety of carbonyl -containing functionalities (such as esters , ketones , amides , aldehydes , etc.). This is presumably due to the considerably slower reaction with carbonyls as compared to sulfones and sulfoxides . Furthermore, hydrodehalogenation of halogenated hydrocarbons to the corresponding hydrocarbon compound can be achieved using samarium(II) iodide. The progress of the reaction can be monitored by the color change that occurs as the dark blue color of SmI 2 in THF discharges to a light yellow. The picture shows the dark colour disappearing immediately upon contact with the Barbier reaction mixture.
Work-up is with dilute hydrochloric acid , and the samarium is removed as aqueous Sm 3+ .
Carbonyl compounds can also be coupled with simple alkenes to form five, six or eight membered rings. [ 10 ]
Tosyl groups can be removed from N -tosylamides almost instantaneously, using SmI 2 in conjunction with distilled water and an amine base. The reaction is even effective for deprotection of sensitive substrates such as aziridines : [ 11 ]
In the Markó-Lam deoxygenation , alcohols can be almost instantaneously deoxygenated by reducing their toluate esters in the presence of SmI 2 .
SmI 2 can also be used in the transannulation of bicyclic molecules . An example is the SmI 2 induced ketone - alkene cyclization of 5-methylenecyclooctanone which proceeds through a ketyl intermediate:
The applications of SmI 2 have been reviewed. [ 12 ] [ 13 ] [ 14 ] The book Organic Synthesis Using Samarium Diiodide , published in 2009, gives a detailed overview of reactions mediated by SmI 2 . [ 15 ] | https://en.wikipedia.org/wiki/SmI2 |
Samarium(III) phosphide is an inorganic compound of samarium and phosphorus with the chemical formula SmP . [ 1 ] [ 2 ] [ 3 ]
Samarium(III) phosphide can be obtained by heating samarium and phosphorus:
Samarium(III) phosphide forms crystals of a cubic system , space group Fm 3 m , cell size a = 0.5760 nm, Z = 4, with a structure similar to sodium chloride NaCl. [ 4 ]
The compound exists in the temperature range of 1315–2020 °C and has a homogeneity region described by the SmP 1÷0.982 . [ 5 ]
Samarium(III) phosphide readily dissolves in nitric acid . [ 6 ]
Samarium(III) phosphide compound is a semiconductor used in high power, high frequency applications and in laser diodes . [ 1 ] | https://en.wikipedia.org/wiki/SmP |
Samarium(III) phosphate is an inorganic compound, with the chemical formula of SmPO 4 . It is one of the phosphates of samarium .
Samarium(III) phosphate can be obtained by reacting sodium metaphosphate with any soluble samarium(III) salt:
Samarium(III) phosphate can also be obtained by reacting phosphoric acid and samarium(III) chloride . [ 1 ]
Samarium(III) phosphate reacts with sodium fluoride at 750 °C to form Na 2 SmF 2 PO 4 . [ 2 ] Samarium(III) phosphate forms crystals of the monoclinic crystal system , with space group P 2 1 /n, and lattice parameters a = 0.6669 nm, b = 0.6868 nm, c = 0.6351 nm, β = 103.92 °, Z = 4. [ 3 ] | https://en.wikipedia.org/wiki/SmPO4 |
The Smale conjecture , named after Stephen Smale , is the statement that the diffeomorphism group of the 3-sphere has the homotopy-type of its isometry group, the orthogonal group O(4) . It was proved in 1983 by Allen Hatcher . [ 1 ]
There are several equivalent statements of the Smale conjecture. One is that the component of the unknot in the space of smooth embeddings of the circle in 3-space has the homotopy-type of the round circles, equivalently, O(3) . This statement is not equivalent to the generalized Smale conjecture in higher dimensions.
Another equivalent statement is that the group of diffeomorphisms of the 3-ball which restrict to the identity on the boundary is contractible.
Yet another equivalent statement is that the space of constant-curvature Riemann metrics on the 3-sphere is contractible.
The (false) statement that the inclusion O ( n + 1 ) → Diff ( S n ) {\displaystyle O(n+1)\to {\text{Diff}}(S^{n})} is a weak equivalence for all n {\displaystyle n} is sometimes meant when referring to the generalized Smale conjecture . For n = 1 {\displaystyle n=1} , this is classical, for n = 2 {\displaystyle n=2} , Smale proved it himself. [ 2 ]
For n ≥ 5 {\displaystyle n\geq 5} the conjecture is false due to the failure of Diff ( S n ) / O ( n + 1 ) {\displaystyle {\text{Diff}}(S^{n})/O(n+1)} to be contractible. [ 3 ]
In late 2018, Tadayuki Watanabe released a preprint that proves the failure of Smale's conjecture in the remaining 4-dimensional case [ 4 ] relying on work around the Kontsevich integral , a generalization of the Gauss linking integral . As of 2021, the proof remains unpublished in a mathematical journal. | https://en.wikipedia.org/wiki/Smale_conjecture |
Small-signal modeling is a common analysis technique in electronics engineering used to approximate the behavior of electronic circuits containing nonlinear devices with linear equations . It is applicable to electronic circuits in which the AC signals (i.e., the time-varying currents and voltages in the circuit) are small relative to the DC bias currents and voltages. A small-signal model is an AC equivalent circuit in which the nonlinear circuit elements are replaced by linear elements whose values are given by the first-order (linear) approximation of their characteristic curve near the bias point.
Many of the electrical components used in simple electric circuits, such as resistors , inductors , and capacitors are linear . Circuits made with these components, called linear circuits , are governed by linear differential equations , and can be solved easily with powerful mathematical frequency domain methods such as the Laplace transform .
In contrast, many of the components that make up electronic circuits, such as diodes , transistors , integrated circuits , and vacuum tubes are nonlinear ; that is, the current through them is not proportional to the voltage , and the output of two-port devices like transistors is not proportional to their input. The relationship between current and voltage in them is given by a curved line on a graph, their characteristic curve (I-V curve). In general these circuits don't have simple mathematical solutions. To calculate the current and voltage in them generally requires either graphical methods or simulation on computers using electronic circuit simulation programs like SPICE .
However in some electronic circuits such as radio receivers , telecommunications, sensors, instrumentation and signal processing circuits, the AC signals are "small" compared to the DC voltages and currents in the circuit. In these, perturbation theory can be used to derive an approximate AC equivalent circuit which is linear, allowing the AC behavior of the circuit to be calculated easily. In these circuits a steady DC current or voltage from the power supply, called a bias , is applied to each nonlinear component such as a transistor and vacuum tube to set its operating point, and the time-varying AC current or voltage which represents the signal to be processed is added to it. The point on the graph of the characteristic curve representing the bias current and voltage is called the quiescent point (Q point). In the above circuits the AC signal is small compared to the bias, representing a small perturbation of the DC voltage or current in the circuit about the Q point. If the characteristic curve of the device is sufficiently flat over the region occupied by the signal, using a Taylor series expansion the nonlinear function can be approximated near the bias point by its first order partial derivative (this is equivalent to approximating the characteristic curve by a straight line tangent to it at the bias point). These partial derivatives represent the incremental capacitance , resistance , inductance and gain seen by the signal, and can be used to create a linear equivalent circuit giving the response of the real circuit to a small AC signal. This is called the "small-signal model".
The small signal model is dependent on the DC bias currents and voltages in the circuit (the Q point ). Changing the bias moves the operating point up or down on the curves, thus changing the equivalent small-signal AC resistance, gain, etc. seen by the signal.
Any nonlinear component whose characteristics are given by a continuous , single-valued , smooth ( differentiable ) curve can be approximated by a linear small-signal model. Small-signal models exist for electron tubes , diodes , field-effect transistors (FET) and bipolar transistors , notably the hybrid-pi model and various two-port networks . Manufacturers often list the small-signal characteristics of such components at "typical" bias values on their data sheets.
The (large-signal) Shockley equation for a diode can be linearized about the bias point or quiescent point (sometimes called Q-point ) to find the small-signal conductance , capacitance and resistance of the diode. This procedure is described in more detail under diode modelling#Small-signal_modelling , which provides an example of the linearization procedure followed in small-signal models of semiconductor devices.
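As a concrete sketch of this linearization (illustrative only: the saturation current Is = 1 pA and thermal voltage VT ≈ 25.85 mV below are assumed values, not taken from the article), the small-signal conductance of a diode at the Q point is g d = dI/dV = (I Q + I s )/V T , and the linear model I ≈ I Q + g d ·v tracks the exact exponential curve only while the perturbation v stays small:

```python
import math

# Shockley diode equation and its small-signal linearization at a Q point.
Is, VT = 1e-12, 0.02585          # assumed saturation current (A), thermal voltage (V)

def diode_current(V):
    return Is * (math.exp(V / VT) - 1.0)

Vq = 0.65                        # bias (quiescent) voltage
Iq = diode_current(Vq)           # quiescent current
gd = (Iq + Is) / VT              # small-signal conductance dI/dV at the Q point
rd = 1.0 / gd                    # small-signal (dynamic) resistance

for v in (0.001, 0.050):         # 1 mV "small" vs 50 mV "large" perturbation
    exact = diode_current(Vq + v)
    linear = Iq + gd * v         # first-order Taylor model
    err = abs(exact - linear) / exact
    print(f"v = {1e3*v:4.1f} mV  relative error of linear model = {err:.1%}")
```

The 1 mV perturbation stays well within the region where the tangent-line model is accurate; at 50 mV the exponential curvature dominates and the linear model fails, which is why the small-signal model is only valid for signals small relative to the bias.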
A large signal is any signal having enough magnitude to reveal a circuit's nonlinear behavior. The signal may be a DC signal or an AC signal or indeed, any signal. How large a signal needs to be (in magnitude) before it is considered a large signal depends on the circuit and context in which the signal is being used. In some highly nonlinear circuits practically all signals need to be considered as large signals.
A small signal is part of a model of a large signal. To avoid confusion, note that there is such a thing as a small signal (a part of a model) and a small-signal model (a model of a large signal).
A small signal model consists of a small signal (having zero average value, for example a sinusoid, but any AC signal could be used) superimposed on a bias signal (or superimposed on a DC constant signal) such that the sum of the small signal plus the bias signal gives the total signal which is exactly equal to the original (large) signal to be modeled. This resolution of a signal into two components allows the technique of superposition to be used to simplify further analysis. (If superposition applies in the context.)
In analysis of the small signal's contribution to the circuit, the DC bias components are analyzed separately, taking the nonlinearity into account, while the small signal is analyzed using the linearized model.
Small is a weekly peer-reviewed scientific journal covering nanotechnology . It was established in 2005 as a monthly journal, switched to biweekly in 2009, and to weekly in 2015. It is published by Wiley-VCH and the editor-in-chief is José Oliveira. According to the Journal Citation Reports , the journal has a 2023 impact factor of 13.0. [ 1 ]
| https://en.wikipedia.org/wiki/Small_(journal) |
The Small Astronomy Satellite 2 , also known as SAS-2 , SAS B or Explorer 48 , was a NASA gamma ray telescope . It was launched on 15 November 1972 into low Earth orbit with a periapsis of 443 km and an apoapsis of 632 km. It completed its observations on 8 June 1973. [ 2 ] [ 3 ]
SAS 2 was the second in the series of small spacecraft designed to extend the astronomical studies in the X-ray , gamma-ray , ultraviolet , visible, and infrared regions. The primary objective of the SAS-B was to measure the spatial and energy distribution of primary galactic and extragalactic gamma radiation with energies between 20 and 300 MeV . The instrumentation consisted principally of a guard scintillation detector, an upper and a lower spark chamber, and a charged particle telescope.
The spacecraft was launched on 15 November 1972 from the San Marco platform off the coast of Kenya , Africa , into a nearly equatorial orbit of about 632 km (393 mi) apogee , 443 km (275 mi) perigee , and 1.90° orbital inclination , with an orbital period of 95.40 minutes. [ 4 ] The orbiting spacecraft was in the shape of a cylinder approximately 59 cm (23 in) in diameter and 135 cm (53 in) in length. Four solar paddles were used to recharge a 6 amp-h, eight-cell, nickel–cadmium battery and provide power to the spacecraft and telescope experiment. The spacecraft was spin stabilized by an internal wheel, and a magnetically torqued commandable control system was used to point the spin axis of the spacecraft to any point of the sky within approximately 1°. The experiment axis lay along this axis, allowing the telescope to look at any selected region of the sky with its ± 30° acceptance aperture. The nominal spin rate was 1/12 rpm . Data were taken at 1000 bps and could be recorded on an onboard tape recorder and simultaneously transmitted in real time. The recorded data were transmitted once per orbit, which required approximately 5 minutes. [ 5 ]
The telescope experiment was initially turned on 20 November 1972 and by 27 November 1972, the spacecraft became fully operational. The low-voltage power supply for the experiment failed on 8 June 1973. No useful scientific data were obtained after that date. With the exception of a slightly degraded star sensor, the spacecraft control section performed in an excellent manner. [ 5 ]
SAS-2 first detected Geminga , a pulsar believed to be the remnant of a supernova that exploded 300,000 years ago. [ 6 ]
The instrument consisted of two spark-chamber assemblies, four plastic scintillation counters, four Cherenkov counters, and an anticoincidence scintillation counter dome assembled to form a telescope. The spark chamber assembly consisted of 16-wire spark-chamber modules with a magnetic core readout system. Sandwiched between these two assemblies was a plane of plastic scintillator formed by the four scintillation counters. Thin tungsten plates, averaging 0.010 cm (0.0039 in) thick, were interleaved between the spark chamber modules, which had an active area of 640 cm 2 . These plates provided the material for the gamma ray to convert into an electron - positron pair and provided a means of determining the energy of these particles by measuring their coulomb scattering. The spark chamber modules revealed the position and direction of the particles; from this information, the energy and direction of the gamma ray were determined. The scintillation counters and the four directional Cherenkov counters that were placed below the second spark chamber assembly constituted four independent counter coincidence systems. The single-piece plastic scintillator dome surrounded the whole assembly except at the bottom to discriminate against charged particles. The threshold of the instrument was about 30 MeV, and energies up to about 200 MeV could be measured along with the integral flux above 200 MeV. The angular resolution of the telescope varied as a function of energy and arrival direction from 1.5° to 5°. During the lifetime of the experiment from 15 November 1972 to 8 June 1973, approximately 55% of the celestial sphere, including most of the galactic plane, was surveyed. [ 7 ] | https://en.wikipedia.org/wiki/Small_Astronomy_Satellite_2 |
The Small Astronomy Satellite 3 ( SAS 3 , also known as SAS-C before launch) (Explorer 53) was a NASA X-ray astronomy space telescope . [ 1 ] It functioned from 7 May 1975 to 9 April 1979. It covered the X-ray range with four experiments on board. The satellite, built by the Johns Hopkins University Applied Physics Laboratory (APL), was proposed and operated by MIT 's Center for Space Research (CSR). It was launched on a Scout vehicle from the Italian San Marco platform ( Broglio Space Center ) near Malindi , Kenya , into a low-Earth, nearly equatorial orbit. It was also known as Explorer 53, as part of NASA's Explorer program . [ 2 ]
The spacecraft was 3-axis stabilized with a momentum wheel that was used to establish stability about the nominal rotation, or Z-axis. The orientation of the Z-axis could be altered over a period of hours using magnetic torque coils that interacted with the Earth's magnetic field . Solar panels charged batteries during the daylight portion of each orbit so that SAS 3 had essentially no expendables to limit its lifetime beyond the life of the tape recorders, batteries, and orbital drag. The spacecraft typically operated in a rotating mode, spinning at one revolution per 95-minute orbit, so that the LEDs , tube and slat collimator experiments, which looked out along the Y-axis, could view and scan the sky almost continuously. The rotation could also be stopped, allowing extended (up to 30 minutes) pointed observations of selected sources by the Y-axis instruments. Data were recorded on board by magnetic tape recorders, and played back during station passes every orbit. [ 3 ]
SAS 3 was commanded from the NASA Goddard Space Flight Center (GSFC) in Greenbelt, Maryland , but data were transmitted by modem to MIT for scientific analysis, where scientific and technical staff were on duty 24 hours a day. The data from each orbit were subjected to quick-look scientific analysis at MIT before the next station pass, so the science operational plan could be altered by telephoned instruction from MIT to GSFC in order to study targets in near real-time.
The spacecraft was launched from the San Marco platform off the coast of Kenya , Africa , into a near-circular, near-equatorial orbit . This spacecraft contained four instruments: the Extragalactic Experiment, the Galactic Monitor Experiment, the Scorpio Monitor Experiment and the Galactic Absorption Experiment. In the orbital configuration, the spacecraft was 145.2 cm (57.2 in) high and the tip-to-tip dimension was 470.3 cm (185.2 in). Four solar paddles were used in conjunction with a 12-cell nickel–cadmium battery to provide power over the entire orbit. The spacecraft was stabilized along the Z-axis and rotated at about 0.1°/second. Changes to the spin-axis orientation were made by ground command, either delayed or in real-time. The spacecraft could be made to move back and forth ± 2.5° across a selected source along the X-axis at 0.01°/second. The experiments looked along the Z-axis of the spacecraft, perpendicular to it, and at an angle. [ 4 ]
The major scientific objectives of the mission were:
Explorer 53 (SAS-C) was a small spacecraft whose objectives were to survey the celestial sphere for sources radiating in the X-ray, gamma ray , ultraviolet and other spectral regions. The primary missions of Explorer 53 were to measure the X-ray emission of discrete extragalactic sources, to monitor the intensity and spectra of galactic X-ray sources from 0.2 to 60-keV, and to monitor the X-ray intensity of Scorpio X-1 . [ 5 ]
This experiment determined the positions of very weak extragalactic X-ray sources . The instrument viewed a 100-square-degree region of the sky around the direction of the spin axis of the satellite. The nominal targets for a 1-year study were: (1) the Virgo Cluster of galaxies for 4 months, (2) the galactic equator for 2 months, (3) the Andromeda Nebula for 3 months, and (4) the Magellanic Clouds for 3 months. The instrumentation consisted of one 2.5- arc-minute and one 4.5-arc-minute full width at half maximum (FWHM) modulation collimator , as well as proportional counters sensitive over the energy range from 1.5 to 10 keV . The effective area of each collimator was about 225 cm 2 . The aspect system provided information on the orientation of the collimators to an accuracy of 15 arc-seconds. [ 6 ]
The density and distribution of interstellar matter were determined by measuring the variation in the intensity of the low-energy diffuse X-ray background as a function of galactic latitude . A 1- micrometer polypropylene window proportional counter was used for the 0.1- to 0.4-keV and 0.4- to 1.0-keV energy ranges, while a 2-micrometer titanium window counter covered the energy range from 0.3 to 0.5 keV. In addition, two 1-mm beryllium window counters were used for the 1.0- to 10-keV energy range. The collimators in this experiment had fields of view of 3° for the 1-micrometer counter, 2° for the 2-micrometer counter, and 2° for the 1-mm counters. [ 7 ]
The objectives of this experiment were to locate galactic X-ray sources to 15 arc-seconds and to monitor these sources for intensity variations. The source positions were determined with the use of the modulation collimators of the Extragalactic Experiment during the nominal 2-month observation of the galactic equator. The monitoring of the X-ray sky was accomplished by the use of three slat collimators. One collimator, 1° by 70° FWHM, was oriented perpendicular to the equatorial plane of the satellite, while the other two, each 0.5° by 45° FWHM, were oriented 30° above and 30° below the first. The detector behind each collimator was a proportional counter, sensitive from 1.5 to 13 keV, with an effective area of about 100 cm 2 . The 1° collimator had an additional counter of the same area, sensitive from 8 to 50 keV. Three lines of position were obtained for any given source when the satellite was spun at a steady rotation of 4 arc-minutes/second about the Z-axis. [ 8 ]
A 12° by 50° FWHM slat collimator was oriented with its long axis perpendicular to the satellite spin axis such that a given point in the sky could be monitored for about 25% of a rotation. This collimator was inclined by 31° with respect to the equatorial plane of the satellite so that Scorpio X-1 was observed while the Z-axis was oriented to the Virgo Cluster of galaxies. The detectors used in this experiment were proportional counters with 1-mm beryllium windows. The energy range was from 1.0 to 60 keV, and the total effective area was about 40 cm 2 . [ 9 ]
SAS 3 was especially productive due to its flexibility and rapid responsiveness. Among its most important results were:
Lead investigators on SAS 3 were MIT professors George W. Clark , Hale V. Bradt, and Walter H. G. Lewin . Other major contributors were Profs Claude R. Canizares and Saul Rappaport , and Drs Jeffrey A. Hoffman , George Ricker, Jeff McClintock, Rodger Doxsey , Garrett Jernigan , Lynn Cominsky , John Doty, and many others, including numerous graduate students. | https://en.wikipedia.org/wiki/Small_Astronomy_Satellite_3 |
Small Cajal body-specific RNAs ( scaRNAs ) are a class of small nucleolar RNAs (snoRNAs) that specifically localise to the Cajal body , a nuclear organelle (cellular sub-organelle) involved in the biogenesis of small nuclear ribonucleoproteins (snRNPs or snurps). ScaRNAs guide the modification ( methylation and pseudouridylation ) of RNA polymerase II transcribed spliceosomal RNAs U1 , U2 , U4 , U5 and U12 .
The first scaRNA identified was U85. [ 1 ] It is unlike typical snoRNAs in that it is a composite C/D box and H/ACA box snoRNA and can guide both pseudouridylation and 2′- O -methylation. Not all scaRNAs are composite C/D and H/ACA box snoRNAs, and most scaRNAs are structurally and functionally indistinguishable from the snoRNAs that direct ribosomal RNA (rRNA) modification in the nucleolus .
Several studies identified scaRNAs in Drosophila . [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] One of the studies showed that several Drosophila scaRNAs could function equally well in the nucleoplasm of mutant flies that lack Cajal bodies. [ 4 ] Further investigation showed that scaRNA pugU6-40 targets U6 snRNA, whose modification occurs in the nucleolus rather than the Cajal body, and that pugU1-6 and pugU2-55 each guide two RNAs: a snRNA and 28S rRNA. [ 5 ]
In molecular biology , Small Cajal body-specific RNA 1 (also known as SCARNA1 or ACA35) is a small nucleolar RNA found in Cajal bodies and believed to be involved in the pseudouridylation of U2 spliceosomal RNA at residue U89.
scaRNA1 is a non-coding RNA , which are functional products of genes not translated into proteins . Such RNA molecules usually contain important secondary structure or ligand- binding motifs and are involved in many important biological processes in the cell. [ 6 ]
scaRNA1 belongs to the H/ACA box class of snoRNAs, as it has the predicted hairpin-hinge-hairpin-tail structure, conserved H/ACA-box motifs, and is found associated with GAR1 protein. [ 7 ]
This biochemistry article is a stub . You can help Wikipedia by expanding it .
This genetics article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Small_Cajal_body-specific_RNA |
The Small Molecule Pathway Database (SMPDB) [ 1 ] [ 2 ] is a comprehensive, high-quality, freely accessible, online database containing more than 600 small molecule (i.e. metabolic) pathways found in humans. SMPDB is designed specifically to support pathway elucidation and pathway discovery in metabolomics , transcriptomics , proteomics and systems biology . It is able to do so, in part, by providing colorful, detailed, fully searchable, hyperlinked diagrams of five types of small molecule pathways: 1) general human metabolic pathways ; 2) human metabolic disease pathways; 3) human metabolite signaling pathways; 4) drug-action pathways and 5) drug metabolism pathways. SMPDB pathways may be navigated, viewed and zoomed interactively using a Google Maps-like interface. All SMPDB pathways include information on the relevant organs , subcellular compartments , protein cofactors, protein locations, metabolite locations, chemical structures and protein quaternary structures (Fig. 1). Each small molecule in SMPDB is hyperlinked to detailed descriptions contained in the HMDB [ 3 ] or DrugBank [ 4 ] and each protein or enzyme complex is hyperlinked to UniProt. [ 5 ] Additionally, all SMPDB pathways are accompanied with detailed descriptions and references, providing an overview of the pathway, condition or processes depicted in each diagram. Users can browse the SMPDB (Fig. 2) or search its contents by text searching (Fig. 3), sequence searching, or chemical structure searching. More powerful queries are also possible including searching with lists of gene or protein names, drug names, metabolite names, GenBank IDs, Swiss-Prot IDs, Agilent or Affymetrix microarray IDs. These queries will produce lists of matching pathways and highlight the matching molecules on each of the pathway diagrams. Gene , metabolite and protein concentration data can also be visualized through SMPDB's mapping interface.
SMPDB is part of a suite of metabolomics databases that also includes Human Metabolome Database , DrugBank , and the Toxin and Toxin-Target Database (T3DB). While DrugBank includes information on 7000 drugs and >4200 non-redundant drug targets, enzymes , transporters, and carriers, HMDB houses over 40,000 small molecule metabolites found in the human body. The suite is complemented by T3DB with its over 3100 common toxic substances and over 1300 corresponding toxin targets.
The first version of SMPDB was released on January 1, 2010. [ 1 ] This release contained more than 350 image-mapped pathways for small molecule pathways. The viewer interface was limited to scroll-bar image navigation with 3-step (small, medium, large) zooming. The pathways in this first version were limited to 1) human metabolic pathways ; 2) human metabolic disease pathways; and 3) human metabolite signaling pathways. The second version of SMPDB was released in 2014. [ 2 ] This version contained more than 620 small molecule pathways. The viewer interface was enhanced to include a Google-Map-like interface with click-n-drag image navigation and unlimited, interactive zooming. The pathways in this second version were expanded to include: 1) general human metabolic pathways; 2) human metabolic disease pathways; 3) human metabolite signaling pathways; 4) drug action pathways and 5) drug metabolism pathways. | https://en.wikipedia.org/wiki/Small_Molecule_Pathway_Database |
Small RNA sequencing (Small RNA-Seq) is a type of RNA sequencing based on next-generation sequencing (NGS) technologies that makes it possible to isolate and characterize noncoding RNA molecules, in order to evaluate and discover new forms of small RNA and to predict their possible functions. By using this technique, it is possible to discriminate small RNAs from the larger RNA family to better understand their functions in the cell and in gene expression . Small RNA-Seq can analyze thousands of small RNA molecules with high throughput and specificity. The greatest advantage of RNA-Seq is the possibility of generating libraries of RNA fragments from the whole RNA content of a cell.
Small RNAs are noncoding RNA molecules between 20 and 200 nucleotides in length. "Small RNA" is a rather arbitrary term, defined vaguely by length in comparison with regular RNAs such as messenger RNA (mRNA). Previously, bacterial short regulatory RNAs were also referred to as small RNAs, but they are not related to eukaryotic small RNAs. [ 1 ]
Small RNAs include several different classes of noncoding RNAs, depending on their sizes and functions: snRNA , snoRNA , scRNA , piRNA , miRNA , YRNA , tsRNA, rsRNA, and siRNA . Their functions go from RNAi (specific for endogenously expressed miRNA and exogenously derived siRNA), RNA processing and modification, epigenetic modifications , protein stability and transport.
This step is critical for any molecular-based technique, since it ensures that the small RNA fragments in the samples to be analyzed have a good level of purity and quality. There are different purification methods that can be used, depending on the purposes of the experiment:
Once small RNAs have been isolated, it is important to quantify them and to evaluate the quality of the purification. There are two different methods to do this:
Many NGS sequencing protocols rely on the production of a genomic library that contains thousands of fragments of the target nucleic acids, which are then sequenced by appropriate technologies. Depending on the sequencing method, libraries can be created differently: in the case of the Ion Torrent technology, RNA fragments are directly attached to a magnetic bead through an adapter, while for Illumina sequencing the RNA fragments are first ligated to the adapters and then attached to the surface of a plate. Generally, universal adapters A and B are ligated to the 5' and 3' ends of the RNA fragments by the activity of T4 RNA ligase 2, truncated. These adapters contain well-known sequences, including Unique Molecular Identifiers (UMIs) that are used to quantify small RNAs in a sample, and sample indexes that allow discrimination between RNA molecules deriving from different samples. After the adapters are ligated to both ends of the small RNAs, reverse transcription produces complementary DNA molecules ( cDNAs ), which are then amplified by different amplification techniques depending on the sequencing protocol (Ion Torrent exploits emulsion PCR , while Illumina requires bridge PCR ) in order to obtain up to billions of amplicons to be sequenced. [ 4 ] Besides the regular PCR mix, masking oligonucleotides targeting 5.8S rRNA are added to increase sensitivity to small RNA targets and to improve the amplification results. Caution has to be used, as RNA samples are prone to degradation, and further improvement of this technique should be oriented towards the elimination of adapter dimers. [ 4 ] Some specific RNA modifications (such as 5′ hydroxyl (5′-OH), 3′-phosphate (3′-P) and 2′,3′-cyclic phosphate (2′3′-cP)) can block the adapter ligation process, while some other RNA modifications (such as m1A, m3C, m1G and m22G) can interfere with the reverse transcription process.
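Because the adapters described above end up in the sequenced reads, they must be trimmed away before analysis. A minimal Python sketch of 3' adapter trimming, assuming an exact adapter match (the adapter sequence and example reads are invented placeholders, not from the source; real tools such as cutadapt also tolerate mismatches and partial adapters):

```python
# Keep only the read bases that precede the first occurrence of the
# 3' adapter prefix; reads without the adapter are left unchanged.
ADAPTER = "TGGAATTCTCGG"  # hypothetical 3' adapter prefix

def trim_3p_adapter(read: str, adapter: str = ADAPTER) -> str:
    idx = read.find(adapter)
    return read if idx == -1 else read[:idx]

reads = [
    "ACGTACGTACGTTGGAATTCTCGGXXXX",  # insert + adapter + downstream bases
    "ACGTACGTACGT",                  # no adapter found: read kept as-is
]
print([trim_3p_adapter(r) for r in reads])
```

Both example reads reduce to the same 12-base insert, illustrating why adapter trimming must precede any length-based classification of small RNAs.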
Small RNAs bearing one or more of these modifications are often inefficiently and incompletely converted into cDNAs, leading to challenges with their detection and quantitation by deep sequencing, which can be overcome by enzyme (such as PNK and AlkB) pre-treatment. [ 5 ]
Depending on the purpose of the analysis, RNA-seq can be performed using different approaches:
The final step concerns data analysis and storage: after the sequencing reads are obtained, UMI and index sequences are automatically removed from the reads and read quality is assessed via Phred quality scores; reads can then be mapped or aligned to a reference genome in order to extract information about their similarity. Reads having the same length, sequence and UMI are considered equal and are collapsed to a single entry in the hit list. Indeed, the number of different UMIs observed for a given small RNA sequence reflects its copy number.
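The UMI-collapsing step described above can be sketched in a few lines of Python (the read sequences and UMI strings are invented placeholders, not from the source): reads sharing both sequence and UMI are counted once, so the number of distinct UMIs per sequence estimates that small RNA's copy number.

```python
from collections import defaultdict

def umi_counts(reads: list[tuple[str, str]]) -> dict[str, int]:
    """Map each small RNA sequence to its number of distinct UMIs.

    `reads` is a list of (sequence, umi) pairs; PCR duplicates share
    both fields and are therefore counted only once.
    """
    umis_per_seq: dict[str, set[str]] = defaultdict(set)
    for seq, umi in reads:
        umis_per_seq[seq].add(umi)
    return {seq: len(umis) for seq, umis in umis_per_seq.items()}

reads = [
    ("TGAGGTAGTAGGTTGTATAGTT", "ACGT"),  # small RNA, UMI ACGT
    ("TGAGGTAGTAGGTTGTATAGTT", "ACGT"),  # PCR duplicate: same seq + UMI
    ("TGAGGTAGTAGGTTGTATAGTT", "GGCC"),  # same sequence, distinct molecule
    ("TACCCTGTAGATCCGAATTTGT", "TTAA"),  # different small RNA
]
print(umi_counts(reads))
# -> {'TGAGGTAGTAGGTTGTATAGTT': 2, 'TACCCTGTAGATCCGAATTTGT': 1}
```

Production pipelines (e.g. UMI-tools) additionally tolerate sequencing errors within UMIs, which this exact-match sketch does not.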
The small RNAs are finally quantified by assigning molecules to transcript annotations from different databases (Mirbase, GtRNAdb and Gencode). [ 4 ]
Small RNA sequencing can be useful for: | https://en.wikipedia.org/wiki/Small_RNA_sequencing |
A small Solar System body ( SSSB ) is an object in the Solar System that is neither a planet , a dwarf planet , nor a natural satellite . The term was first defined in 2006 by the International Astronomical Union (IAU) as follows: "All other objects, except satellites, orbiting the Sun shall be referred to collectively as 'Small Solar System Bodies ' ". [ 1 ]
This encompasses all comets and all minor planets other than those that are dwarf planets . Thus SSSBs are: the comets; the classical asteroids , with the exception of the dwarf planet Ceres ; the trojans ; and the centaurs and trans-Neptunian objects , with the exception of the dwarf planets Pluto , Haumea , Makemake , Quaoar , Orcus , Sedna , Gonggong and Eris and others that may turn out to be dwarf planets .
The current definition was included in the 2006 IAU resolution that defined the term planet , demoting the status of Pluto to that of dwarf planet . In context, it should be interpreted as: "All objects other than planets and dwarf planets orbiting the Sun shall be referred to collectively as 'Small Solar System Bodies'." The definition excludes interstellar objects traveling through the Solar System, such as the interstellar interlopers 1I/ ʻOumuamua and 2I/Borisov .
It is not presently clear whether a lower size bound will be established as part of the definition of small Solar System bodies in the future, or if it will encompass all material down to the level of meteoroids , the smallest macroscopic bodies in orbit around the Sun. (On a microscopic level there are even smaller objects such as interplanetary dust , particles of solar wind and free particles of hydrogen .)
Except for the largest, which are in hydrostatic equilibrium , natural satellites (moons) differ from small Solar System bodies not in size, but in their orbits. The orbits of natural satellites are not centered on the Sun , but around other Solar System objects such as planets, dwarf planets , and small Solar System bodies.
Some of the larger small Solar System bodies may be reclassified in future as dwarf planets, pending further examination to determine whether or not they are in hydrostatic equilibrium .
The orbits of the vast majority of small Solar System bodies are located in two distinct areas, namely the asteroid belt and the Kuiper belt . These two belts possess some internal structure related to perturbations by the major planets (particularly Jupiter and Neptune , respectively), and have fairly loosely defined boundaries. Other areas of the Solar System also encompass small bodies in smaller concentrations. These include the near-Earth asteroids , centaurs , comets , and scattered disc objects.
| https://en.wikipedia.org/wiki/Small_Solar_System_body |
Small activating RNAs (saRNAs) are double-stranded RNA molecules that induce gene expression at the transcriptional level, a phenomenon known as RNA activation (RNAa). This contrasts with the gene silencing typically associated with small interfering RNAs (siRNAs) in RNA interference. saRNAs offer a novel approach to upregulate genes of therapeutic interest, and have progressed to clinical trials.
saRNAs, typically 19 nucleotides in length with 2-nucleotide overhangs (similar to siRNAs), [ 1 ] mediate RNAa through the RNA-induced transcriptional activation (RITA) complex. This complex includes Argonaute 2 (AGO2), RNA helicase A (RHA), and CTR9 (a component of the PAF1 complex). [ 2 ] The RITA complex facilitates the transition of RNA polymerase II from a paused to an elongating state at the target gene's promoter, leading to increased transcription. (For a more detailed explanation of the mechanism, see RNA activation .)
Designing effective saRNAs involves careful consideration of several factors. Unlike siRNAs, which primarily target mRNA sequences for degradation, saRNAs target promoter regions, and their efficacy is highly dependent on the specific target location. [ 3 ] [ 4 ] Proximity to the transcription start site (TSS), sequence context, and local chromatin state are critical determinants of activation versus silencing outcomes. [ 5 ] [ 6 ] [ 3 ]
Key considerations for saRNA design include:
A set of guidelines for designing saRNAs has been published [ 3 ] and an online resource for saRNAs has been developed to integrate experimentally verified saRNAs and proteins involved. [ 11 ]
saRNAs represent a promising therapeutic modality for diseases where increasing the expression of a specific gene is beneficial. This approach is particularly attractive for targeting genes considered "undruggable" by conventional small molecule or antibody-based therapies. [ 12 ] [ 13 ]
saRNA based therapeutics have advanced from preclinical studies to human clinical trials.
While saRNAs hold significant therapeutic promise, challenges remain: | https://en.wikipedia.org/wiki/Small_activating_RNA |
Small cells are low-powered cellular radio access nodes that have a range of around 10 meters to a few kilometers. They are base stations with low power consumption and cost. They can provide high data rates by being deployed densely to achieve high spatial spectrum efficiency. [ 1 ]
In the United States, recent FCC orders have provided size and elevation guidelines to help more clearly define small cell equipment. [ 2 ] [ 3 ] They are "small" compared to a mobile macrocell , partly because they have a shorter range and partly because they typically handle fewer concurrent calls or sessions. As wireless carriers seek to 'densify' existing wireless networks to provide for the data capacity demands of 5G, small cells are currently viewed as a solution to allow re-using the same frequencies, [ 4 ] [ 5 ] [ 6 ] and as an important method of increasing cellular network capacity, quality, and resilience with a growing focus using LTE Advanced .
Small cells may encompass femtocells , picocells , and microcells . Small-cell networks can also be realized by means of distributed radio technology using centralized baseband units and remote radio heads . Beamforming technology (focusing a radio signal on a very specific area) can further enhance or focus small cell coverage. These approaches to small cells all feature central management by mobile network operators .
Small cells provide a small radio footprint, which can range from 10 meters within urban and in-building locations to 2 km for a rural location. Picocells and microcells can also have a range of a few hundred meters to a few kilometers, but they differ from femtocells in that they do not always have self-organising and self-management capabilities.
Small cells are available for a wide range of air interfaces including GSM , CDMA2000 , TD-SCDMA , W-CDMA , LTE and 5G . In 3GPP terminology, a Home Node B (HNB) is a 3G femtocell. A Home eNode B (HeNB) is an LTE femtocell. Wi-Fi is a small cell but does not operate in licensed spectrum and therefore cannot be managed as effectively as small cells utilising licensed spectrum. Small cell deployments vary according to the use case and radio technology employed.
The most common form of small cells are femtocells. They were initially designed for residential and small business use, with a short range and a limited number of channels. Femtocells with increased range and capacity spawned a proliferation of terms: metrocells, metro femtocells, public access femtocells, enterprise femtocells, super femtos, Class 3 femto, greater femtos and microcells. The term "small cells" is frequently used by analysts and the industry as an umbrella to describe the different implementations of femtocells, and to clear up any confusion that femtocells are limited to residential uses. Small cells are sometimes, incorrectly, also used to describe distributed-antenna systems (DAS) which are not low-powered access nodes.
Small cells can be used to provide in-building and outdoor wireless service. Mobile operators use them to extend their service coverage and/or increase network capacity .
ABI Research argues that small cells also help service providers discover new revenue opportunities through their location and presence information . If a registered user enters a femtozone, the network is notified of their location. The service provider, with the user's permission, could share this location information to update user's social media status, for instance. Opening up small-cell APIs to the wider mobile ecosystem could enable a long-tail effect.
Rural coverage is also a key market that has developed as mobile operators have started to install public access metrocells in remote and rural areas that either have only 2G coverage or no coverage at all. The cost advantages of small cells compared with macro cells make it economically feasible to provide coverage for much smaller communities – from a few tens to a few hundreds of people. The Small Cell Forum has published a white paper outlining the technology and business case aspects. [ 7 ] Mobile operators in both developing and developed countries are either trialing or installing such systems. The pioneer in providing rural coverage using small cells was SoftBank Mobile – the Japanese mobile operator – which has installed more than 3000 public access 3G small cells on post offices throughout rural Japan. In the UK, Vodafone's Rural Open Sure Signal program and EE's rural 3G/4G scheme increase geographic coverage.
Small cells are an integral part of LTE networks. In 3G networks, small cells are viewed as an offload technique. [ 8 ] In 4G networks, the principle of heterogeneous network (HetNet) is introduced where the mobile network is constructed with layers of small and large cells. [ 9 ] In LTE, all cells will be self-organizing, drawing upon the principles laid down in current Home NodeB (HNB), the 3GPP term for residential femtocells.
Future innovations in radio access design introduce the idea of an almost flat architecture where the difference between a small cell and a macrocell depends on how many cubes are stacked together.
The signal transmitted from a Macro Base Station (MBS) weakens quickly once it reaches indoors. Femtocells provide a solution to the difficulties present in macrocell-based systems. Thus, Femto Base Station (FBS) network coverage is one of the prime concerns in indoor environments for achieving good quality of service (QoS). [ 10 ]
By December 2017 a total of over 12 million small cells have been deployed worldwide, with forecasts as high as 70 million by 2025. [ 11 ]
Backhaul is needed to connect the small cells to the core network, internet and other services. For in-building use, existing broadband internet can be used. In urban outdoors, mobile operators consider this more challenging than macrocell backhaul because a) small cells are typically in hard-to-reach, near-street-level locations rather than in more open, above-rooftop locations and b) carrier grade connectivity must be provided at much lower cost per bit. Many different wireless and wired technologies have been proposed as solutions, and it is agreed that a toolbox of these will be needed to address a range of deployment scenarios. An industry consensus view of how the different solution characteristics match with requirements is published by the Small Cell Forum. [ 12 ] The backhaul solution is influenced by a number of factors, including the operator's original motivation to deploy small cells, which could be for targeted capacity, indoor or outdoor coverage. [ 13 ] | https://en.wikipedia.org/wiki/Small_cell |
A small conditional RNA ( scRNA ) is a small RNA molecule or complex (typically less than approximately 100 nt ) engineered to interact and change conformation conditionally in response to cognate molecular inputs so as to perform signal transduction in vitro , in situ , or in vivo . [ 1 ]
In the absence of cognate input molecules, scRNAs are engineered to coexist metastably or stably without interacting. Detection of the cognate inputs initiates downstream conformational changes of one or more scRNAs leading to generation of the desired output signal. The output signal may be intended to read out the state of endogenous biological circuitry (e.g., mapping gene expression for biological research or medical diagnosis), [ 2 ] or to regulate the state of endogenous biological circuitry (e.g., perturbing gene expression for biological research or medical treatment). [ 1 ] scRNA sequences can be programmed to recognize different inputs or to activate different outputs, [ 1 ] [ 2 ] [ 3 ] achieving even single-nucleotide sequence selectivity. [ 3 ] scRNA signal transduction exploits principles from the emerging disciplines of dynamic RNA nanotechnology, molecular programming, and synthetic biology .
Fluorophore-labeled scRNAs have been engineered to transduce between detection of mRNA targets and generation of bright fluorescent amplification polymers in situ (Figure 1). [ 2 ] In this context, scRNA signal transduction enables multiplexed mapping of mRNA expression within intact vertebrate embryos (Figure 2). [ 2 ] scRNAs have been engineered to perform shape and sequence transduction to conditionally produce a Dicer substrate targeting 'silencing target' mRNA Y upon detection of an independent 'detection target' mRNA X, with subsequent Dicer processing yielding a small interfering RNA ( siRNA ) targeting mRNA Y for destruction (Figure 3). [ 1 ] In this context, scRNA signal transduction provides a step towards implementing conditional RNA interference (Figure 4).
scRNAs can be engineered to exploit diverse design elements: [ 1 ] | https://en.wikipedia.org/wiki/Small_conditional_RNA |
Small intensely fluorescent cells (SIF cells) are the interneurons of the sympathetic ganglia (postganglionic neurons) of the sympathetic division of the autonomic nervous system (ANS). The neurotransmitter for these cells is dopamine . They are a neural crest derivative and share a common sympathoadrenal precursor cell with sympathetic neurons and chromaffin cells (adrenal medulla).
Although an autonomic ganglion is the site where preganglionic fibers synapse on postganglionic neurons, the presence of small interneurons has been recognized. These cells exhibit catecholamine fluorescence and are referred to as small intensely fluorescent (SIF) cells. In some ganglia, these interneurons receive preganglionic cholinergic fibers and may modulate ganglionic transmission. In other ganglia, they receive collateral branches and may serve some integrative function. Many SIF cells contain dopamine, which is thought to be their transmitter. | https://en.wikipedia.org/wiki/Small_intensely_fluorescent_cell
Small interfering RNA ( siRNA ), sometimes known as short interfering RNA or silencing RNA , is a class of double-stranded non-coding RNA molecules , typically 20–24 base pairs in length, similar to microRNA (miRNA), and operating within the RNA interference (RNAi) pathway. It interferes with the expression of specific genes with complementary nucleotide sequences by degrading messenger RNA (mRNA) after transcription , preventing translation . [ 1 ] [ 2 ] It was discovered in 1998 by Andrew Fire at the Carnegie Institution for Science in Washington, D.C. and Craig Mello at the University of Massachusetts in Worcester.
Naturally occurring siRNAs have a well-defined structure that is a short (usually 20 to 24- bp ) double-stranded RNA (dsRNA) with phosphorylated 5' ends and hydroxylated 3' ends with two overhanging nucleotides.
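The geometry above (a 20–24 bp paired region plus two-nucleotide 3' overhangs on each strand) can be expressed as a simple check. This is a hedged sketch: the function names and the sequences used in the usage example are illustrative, not drawn from the article.

```python
# Hedged sketch: verify that a guide/passenger pair has the canonical
# natural siRNA geometry described above -- a 20-24 bp paired region with
# two-nucleotide 3' overhangs on both strands. Both inputs are 5'->3'.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    """Return the reverse complement of an RNA sequence (5'->3' in and out)."""
    return "".join(COMPLEMENT[base] for base in reversed(rna))

def is_canonical_sirna(guide: str, passenger: str) -> bool:
    """True if the strands form a 20-24 bp duplex with 2-nt 3' overhangs.

    Because the strands pair antiparallel, the first (len - 2) bases of
    the guide must be the reverse complement of the first (len - 2) bases
    of the passenger; the last two bases of each strand are the unpaired
    3' overhangs.
    """
    if len(guide) != len(passenger):
        return False
    duplex_len = len(guide) - 2          # last two 3' bases overhang
    if not 20 <= duplex_len <= 24:
        return False
    return guide[:duplex_len] == reverse_complement(passenger[:duplex_len])
```

For example, a 23-nt guide built as `reverse_complement(passenger[:21]) + "UU"` against a 23-nt passenger passes this check, while changing any paired base fails it.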
The Dicer enzyme catalyzes production of siRNAs from long dsRNAs and small hairpin RNAs . [ 3 ] siRNAs can also be introduced into cells by transfection . Since in principle any gene can be knocked down by a synthetic siRNA with a complementary sequence, siRNAs are an important tool for validating gene function and drug targeting in the post-genomic era.
In 1998, Andrew Fire at Carnegie Institution for Science in Washington DC and Craig Mello at University of Massachusetts in Worcester discovered the RNAi mechanism while working on gene expression in the nematode Caenorhabditis elegans . [ 4 ] They won the Nobel Prize for their research on RNAi in 2006. siRNAs and their role in post- transcriptional gene silencing (PTGS) were discovered in plants by David Baulcombe 's group at the Sainsbury Laboratory in Norwich , England and reported in Science in 1999. [ 5 ] Thomas Tuschl and colleagues soon reported in Nature that synthetic siRNAs could induce RNAi in mammalian cells. [ 6 ] In 2001, the expression of a specific gene was successfully silenced by introducing chemically synthesized siRNA into mammalian cells (Tuschl et al.). These discoveries led to a surge in interest in harnessing RNAi for biomedical research and drug development . Significant developments in siRNA therapies have been made with both organic (carbon-based) and inorganic (non-carbon-based) nanoparticles , which have been successful in drug delivery to the brain , offering promising methods to deliver therapeutics into human subjects. However, human applications of siRNA have faced significant limitations. One of these is off-targeting. [ 2 ] There is also a possibility that these therapies can trigger innate immunity . [ 4 ] Animal models have not been successful in accurately representing the extent of this response in humans. Hence, studying the effects of siRNA therapies has been a challenge.
In recent years, siRNA therapies have been approved and new methods have been established to overcome these challenges. There are approved therapies available for commercial use and several currently in the pipeline waiting to get approval. [ 7 ] [ 8 ]
The mechanism by which natural siRNA causes gene silencing through repression of translation occurs as follows:
siRNA is also similar to miRNA ; however, miRNAs are derived from shorter stem-loop RNA products. miRNAs typically silence genes by repression of translation and have broader specificity of action, while siRNAs typically work with higher specificity, cleaving the mRNA before translation with 100% complementarity. [ 9 ] [ 10 ]
Gene knockdown by transfection of exogenous siRNA is often unsatisfactory because the effect is only transient, especially in rapidly dividing cells. This may be overcome by creating an expression vector for the siRNA. The siRNA sequence is modified to introduce a short loop between the two strands. The resulting transcript is a short hairpin RNA (shRNA), which can be processed into a functional siRNA by Dicer in its usual fashion. [ 11 ] Typical transcription cassettes use an RNA polymerase III promoter (e.g., U6 or H1) to direct the transcription of small nuclear RNAs (snRNAs) (U6 is involved in RNA splicing ; H1 is the RNase component of human RNase P). It is theorized that the resulting siRNA transcript is then processed by Dicer .
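The shRNA layout described above (sense strand, short loop, then antisense strand, so the single transcript folds back into a hairpin that Dicer can process) can be sketched as follows. The 9-nt loop sequence below is a commonly used example chosen here as an assumption, not a fixed standard, and the sense sequence in the test is invented.

```python
# Illustrative sketch of an shRNA transcript: sense + loop + antisense.
# The sense and antisense halves pair to form the hairpin stem; the loop
# stays single-stranded at the turn.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    return "".join(COMPLEMENT[base] for base in reversed(rna))

def shrna_transcript(sense: str, loop: str = "UUCAAGAGA") -> str:
    """Assemble a hairpin transcript from a sense strand and a loop."""
    return sense + loop + reverse_complement(sense)
```

Dicer processing of such a hairpin would then yield the functional siRNA duplex, as the paragraph above describes.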
The gene knockdown efficiency can also be improved by using cell squeezing . [ 12 ]
The activity of siRNAs in RNAi is largely dependent on their ability to bind the RNA-induced silencing complex (RISC). Binding of the duplex siRNA to RISC is followed by unwinding and cleavage of the sense strand with endonucleases. The remaining antisense strand-RISC complex can then bind to target mRNAs to initiate transcriptional silencing. [ 13 ]
In addition to their role in RNAi, siRNAs can also activate gene expression, a phenomenon termed " RNA activation " or RNAa. This was first observed when synthetic siRNAs, termed " small activating RNA " (saRNA), targeting gene promoters were found to induce potent transcriptional activation of target genes. [ 14 ] RNAa has been demonstrated to be a conserved mechanism, observed across species from insects, C. elegans , and plants, to mammals (including humans). [ 15 ] [ 16 ] [ 17 ] [ 18 ]
The mechanism of RNAa involves the targeting of promoter regions by saRNAs, leading to the recruitment of transcriptional machinery and epigenetic changes that promote gene expression. This process often involves the RNA-induced transcriptional activation (RITA) complex, which includes Argonaute proteins (particularly Ago2), RNA helicase A (RHA), and CTR9. [ 19 ] [ 20 ] Endogenous miRNAs can also mediate RNAa, expanding the regulatory roles of these small RNAs beyond gene silencing.
Several saRNA-based therapeutics are currently in clinical development. MTL-CEBPA, developed by MiNA Therapeutics, targets the CEBPA gene and is in Phase II trials for liver cancer. [ 21 ] RAG-01, developed by Ractigen Therapeutics, targets the p21 gene and is in Phase I trials for non-muscle invasive bladder cancer (NMIBC). [ 22 ] These clinical trials represent a significant step towards translating the RNAa phenomenon into novel therapeutic strategies.
The siRNA-induced post-transcriptional gene silencing is initiated by the assembly of the RNA-induced silencing complex (RISC). The complex silences certain gene expression by cleaving the mRNA molecules coding the target genes. To begin the process, one of the two siRNA strands, the guide strand (antisense strand), is loaded into the RISC while the other strand, the passenger strand (sense strand), is degraded. Certain Dicer enzymes may be responsible for loading the guide strand into RISC. [ 23 ] Then, the siRNA scans for and directs RISC to the perfectly complementary sequence on the mRNA molecules. [ 24 ] The cleavage of the mRNA molecules is thought to be catalyzed by the Piwi domain of Argonaute proteins of the RISC. The mRNA molecule is then cut precisely by cleaving the phosphodiester bond between the target nucleotides which are paired to siRNA residues 10 and 11, counting from the 5' end. [ 25 ] This cleavage results in mRNA fragments that are further degraded by cellular exonucleases . The 5' fragment is degraded from its 3' end by the exosome while the 3' fragment is degraded from its 5' end by 5'-3' exoribonuclease 1 ( XRN1 ). [ 26 ] Dissociation of the target mRNA strand from RISC after cleavage allows more mRNA to be silenced. This dissociation process is likely promoted by extrinsic factors driven by ATP hydrolysis . [ 25 ]
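The cleavage rule in the paragraph above (the bond between the target bases paired to guide residues 10 and 11, counted from the guide's 5' end) translates into simple index arithmetic. This is a hedged sketch with invented sequences, assuming a perfectly complementary target site.

```python
# Hedged sketch: locate a perfectly complementary target site on an mRNA
# and report the scissile phosphodiester bond, which the text places
# between the target nucleotides paired to guide residues 10 and 11.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    return "".join(COMPLEMENT[base] for base in reversed(rna))

def cleavage_site(mrna: str, guide: str):
    """Return (i, i + 1): cleavage falls between mrna[i] and mrna[i + 1].

    The guide binds antiparallel, so guide nucleotide n (1-based from its
    5' end) pairs with mrna[s + L - n], where s is the target-site start
    index and L the guide length. Returns None if no perfect site exists.
    """
    site = reverse_complement(guide)
    s = mrna.find(site)
    if s == -1:
        return None
    L = len(guide)
    # Bond between the bases paired to guide nts 11 and 10:
    return (s + L - 11, s + L - 10)
```

For a 21-nt guide whose target site starts at mRNA index 3, this places the cut between indices 13 and 14.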
Sometimes cleavage of the target mRNA molecule does not occur. In some cases, the endonucleolytic cleavage of the phosphodiester backbone may be suppressed by mismatches between the siRNA and target mRNA near the cleavage site. Other times, the Argonaute proteins of the RISC lack endonuclease activity even when the target mRNA and siRNA are perfectly paired. [ 25 ] In such cases, gene expression will be silenced by an miRNA-induced mechanism instead. [ 24 ]
Piwi-interacting RNAs are responsible for the silencing of transposons and are not siRNAs. [ 27 ] PIWI-interacting RNAs (piRNAs) are a recently discovered class of small non-coding RNAs (ncRNAs) with a length of 21-35 nucleotides. They play a role in gene expression regulation, transposon silencing, and viral infection inhibition. Once considered the "dark matter" of ncRNAs, piRNAs have emerged as important players in multiple cellular functions in different organisms. [ 28 ]
Many model organisms, such as plants ( Arabidopsis thaliana ), yeast ( Saccharomyces cerevisiae ), flies ( Drosophila melanogaster ) and worms ( C. elegans ), have been used to study small non-coding RNA-directed transcriptional gene silencing. In human cells, RNA-directed transcriptional gene silencing was observed a decade ago when exogenous siRNAs silenced a transgenic elongation factor 1 α promoter driving a Green Fluorescent Protein (GFP) reporter gene. [ 29 ] The main mechanisms of transcriptional gene silencing (TGS) involving the RNAi machinery include DNA methylation, histone post-translational modifications , and subsequent chromatin remodeling around the target gene into a heterochromatic state. [ 29 ] siRNAs can be incorporated into an RNA-induced transcriptional silencing (RITS) complex. An active RITS complex will trigger the formation of heterochromatin around DNA matching the siRNA, effectively silencing the genes in that region of the DNA.
One of the potent applications of siRNAs is the ability to distinguish a target from a non-target sequence differing by a single nucleotide. This approach is considered therapeutically crucial for silencing dominant gain-of-function (GOF) disorders, where the disease-causing mutant allele differs from the wild-type allele by a single nucleotide (nt). siRNAs capable of distinguishing a single-nt difference are termed allele-specific siRNAs. [ 2 ]
ASP-RNAi is an innovative category of RNAi with the objective of suppressing the dominant mutant allele while sparing expression of the corresponding normal allele, with single-nucleotide specificity between the two. [ 2 ] ASP-siRNAs are potentially a novel and better remedial alternative for the treatment of autosomal dominant genetic disorders, especially in cases where wild-type allele expression is crucial for organism survival, such as Huntington disease (HD), DYT1 dystonia (Gonzalez-Alegre et al. 2003, 2005), Alzheimer's disease (Sierant et al. 2011), Parkinson's disease (PD) (Takahashi et al. 2015), amyotrophic lateral sclerosis (ALS) (Schwarz et al. 2006), and Machado–Joseph disease (Alves et al. 2008). Their therapeutic potential has also been assessed for various skin disorders like epidermolysis bullosa simplex (Atkinson et al. 2011), epidermolytic palmoplantar keratoderma (EPPK) (Lyu et al. 2016), and lattice corneal dystrophy type I (LCDI) (Courtney et al. 2014). [ 2 ]
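The allele-specific criterion can be stated computationally: the guide should be fully complementary to the mutant target site while carrying exactly one mismatch against the wild-type site. The sketch below uses invented sequences and is purely illustrative; real ASP-siRNA design also weighs where the mismatch falls within the guide.

```python
# Illustrative check with invented sequences: does a guide discriminate
# mutant from wild-type allele by a single-nucleotide mismatch?

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def mismatches(guide: str, target: str) -> int:
    """Count mismatched positions for an antiparallel guide/target pair."""
    expected = "".join(COMPLEMENT[b] for b in reversed(target))
    return sum(1 for g, e in zip(guide, expected) if g != e)

def is_allele_specific(guide: str, mutant_site: str, wildtype_site: str) -> bool:
    """True if the guide matches the mutant allele perfectly while
    mismatching the wild-type allele at exactly one position."""
    return (mismatches(guide, mutant_site) == 0
            and mismatches(guide, wildtype_site) == 1)
```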
RNAi intersects with a number of other pathways; as of 2010 it was recognized that, on occasion, nonspecific effects are triggered by the experimental introduction of an siRNA. [ 30 ] [ 31 ] When a mammalian cell encounters a double-stranded RNA such as an siRNA, it may mistake it for a viral by-product and mount an immune response. Furthermore, because structurally related microRNAs modulate gene expression largely via incomplete complementary base-pair interactions with a target mRNA , the introduction of an siRNA may cause unintended off-targeting. Chemical modifications of siRNA may alter its thermodynamic properties and also result in a loss of single-nucleotide specificity. [ 32 ]
Introduction of too many siRNAs can result in nonspecific events due to activation of innate immune responses. [ 33 ] Most evidence to date suggests that this is probably due to activation of the dsRNA sensor PKR, although retinoic acid-inducible gene I (RIG-I) may also be involved. [ 34 ] The induction of cytokines via toll-like receptor 7 (TLR7) has also been described. Chemical modification of siRNA is employed to reduce the activation of the innate immune response for gene function and therapeutic applications. One promising method of reducing the nonspecific effects is to convert the siRNA into a microRNA. [ 35 ] MicroRNAs occur naturally, and by harnessing this endogenous pathway it should be possible to achieve similar gene knockdown at comparatively low concentrations of resulting siRNAs. This should minimize nonspecific effects.
Off-targeting is another challenge to the use of siRNAs as a gene knockdown tool. [ 31 ] Here, genes with incomplete complementarity are inadvertently downregulated by the siRNA (in effect, the siRNA acts as a miRNA), leading to problems in data interpretation and potential toxicity. This, however, can be partly addressed by designing appropriate control experiments, and siRNA design algorithms are currently being developed to produce siRNAs free from off-targeting. Genome-wide expression analysis, e.g., by microarray technology, can then be used to verify this and further refine the algorithms. A 2006 paper from the laboratory of Dr. Khvorova implicates 6- or 7-basepair-long stretches from position 2 onward in the siRNA matching with 3'UTR regions in off-targeted genes. [ 36 ] A tool for siRNA off-target prediction is available at http://crdd.osdd.net/servers/aspsirna/asptar.php and is published as the ASPsiRNA resource. [ 37 ]
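The Khvorova-lab finding cited above suggests a simple heuristic: scan a 3' UTR for sites complementary to the guide's positions 2 onward (a 6- or 7-nt "seed"). The sketch below implements only that heuristic with invented sequences; real off-target prediction tools are considerably more sophisticated.

```python
# Hedged sketch of a seed-match scan: report 3' UTR positions that are
# complementary to the guide seed (positions 2..1+seed_len, 1-based from
# the guide's 5' end). Illustrative only.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    return "".join(COMPLEMENT[base] for base in reversed(rna))

def seed_matches(guide: str, utr3: str, seed_len: int = 7):
    """Return start indices in the 3' UTR of potential seed-match sites."""
    seed = guide[1:1 + seed_len]          # positions 2 onward
    site = reverse_complement(seed)       # what an off-targeted UTR contains
    hits, start = [], 0
    while (i := utr3.find(site, start)) != -1:
        hits.append(i)
        start = i + 1
    return hits
```

A UTR with two embedded seed-complementary sites yields both start positions; a UTR with none yields an empty list.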
Plain RNAs may be poor immunogens, but antibodies can easily be created against RNA-protein complexes. Such antibodies are seen in many autoimmune diseases. There have not yet been reports of antibodies against siRNA bound to proteins. Some methods for siRNA delivery adjoin polyethylene glycol (PEG) to the oligonucleotide, reducing excretion and improving circulating half-life. However, recently a large Phase III trial of a PEGylated RNA aptamer against factor IX had to be discontinued by Regado Biosciences because of a severe anaphylactic reaction to the PEG part of the RNA. This reaction led to death in some cases and raises significant concerns about siRNA delivery when PEGylated oligonucleotides are involved. [ 38 ]
siRNA transfection into cells typically lowers the expression of many genes; however, upregulation of genes is also observed. The upregulation of gene expression can partially be explained by the predicted gene targets of endogenous miRNAs. Computational analyses of more than 150 siRNA transfection experiments support a model where exogenous siRNAs can saturate the endogenous RNAi machinery, resulting in the de-repression of endogenous miRNA-regulated genes. [ 39 ] Thus, while siRNAs can produce unwanted off-target effects, i.e. unintended downregulation of mRNAs via a partial sequence match between the siRNA and target, the saturation of RNAi machinery is another distinct nonspecific effect, which involves the de-repression of miRNA-regulated genes and results in similar problems in data interpretation and potential toxicity. [ 40 ]
siRNAs have been chemically modified to enhance their therapeutic properties. For RNAi to fulfill its therapeutic promise, short interfering RNA (siRNA) must be delivered to the site of action in the cells of target tissues. A detailed database of all such chemical modifications is manually curated as siRNAmod in the scientific literature. [ 41 ] Chemical modification of siRNA can also inadvertently result in loss of single-nucleotide specificity. [ 42 ]
Given the ability to knock down, in essence, any gene of interest, RNAi via siRNAs has generated a great deal of interest in both basic [ 43 ] and applied biology. [ 44 ]
One of the biggest challenges to siRNA and RNAi based therapeutics is intracellular delivery. [ 45 ] siRNA also has weak stability and pharmacokinetic behavior. [ 46 ] Delivery of siRNA via nanoparticles has shown promise. [ 45 ] siRNA oligos in vivo are vulnerable to degradation by plasma and tissue endonucleases and exonucleases [ 47 ] and have shown only mild effectiveness in localized delivery sites, such as the human eye. [ 48 ] Delivering pure DNA to target organisms is challenging because its large size and structure prevent it from diffusing readily across membranes . [ 45 ] siRNA oligos circumvent this problem due to their small size of 21-23 nucleotides. [ 49 ] This allows delivery via nano-scale delivery vehicles called nanovectors. [ 48 ]
A good nanovector for siRNA delivery should protect siRNA from degradation, enrich siRNA in the target organ and facilitate the cellular uptake of siRNA. [ 47 ] The three main groups of siRNA nanovectors are: lipid based, non-lipid organic-based, and inorganic. [ 47 ] Lipid based nanovectors are excellent for delivering siRNA to solid tumors, [ 47 ] but other cancers may require different non-lipid based organic nanovectors such as cyclodextrin based nanoparticles. [ 47 ] [ 50 ]
siRNAs delivered via lipid based nanoparticles have been shown to have therapeutic potential for central nervous system ( CNS) disorders . [ 51 ] Central nervous disorders are not uncommon, but the blood brain barrier (BBB) often blocks access of potential therapeutics to the brain . [ 51 ] siRNAs that target and silence efflux proteins on the BBB surface have been shown to create an increase in BBB permeability. [ 51 ] siRNA delivered via lipid based nanoparticles is able to cross the BBB completely. [ 51 ]
A huge difficulty in siRNA delivery is the problem of off-targeting. [ 45 ] [ 48 ] Since genes are read in both directions, there exists a possibility that even if the intended antisense siRNA strand is read and knocks out the target mRNA, the sense siRNA strand may target another protein involved in another function. [ 52 ]
Phase I results of the first two therapeutic RNAi trials (indicated for age-related macular degeneration , aka AMD) reported at the end of 2005 that siRNAs are well tolerated and have suitable pharmacokinetic properties. [ 53 ]
In a phase 1 clinical trial, 41 patients with advanced cancer that had metastasized to the liver were administered RNAi delivered through lipid nanoparticles . The RNAi targeted two genes encoding key proteins in the growth of the cancer cells, vascular endothelial growth factor ( VEGF ) and kinesin spindle protein ( KSP ). The results showed clinical benefits, with the cancer either stabilized after six months, or regression of metastasis in some of the patients. Pharmacodynamic analysis of biopsy samples from the patients revealed the presence of the RNAi constructs in the samples, proving that the molecules reached the intended target. [ 54 ] [ 55 ]
Proof of concept trials have indicated that Ebola-targeted siRNAs may be effective as post-exposure prophylaxis in humans, with 100% of non-human primates surviving a lethal dose of Zaire Ebolavirus, the most lethal strain. [ 56 ]
Currently, siRNAs are chemically synthesized and so are legally categorized in the EU and the USA as simple medicinal products. But as bioengineered siRNAs (BERAs) are in development, these would be classified as biological medicinal products, at least in the EU. The development of the BERA technology raises the question of the categorization of drugs having the same mechanism of action but being produced chemically or biologically. This lack of consistency should be addressed. [ 57 ]
There is great potential for RNA interference (RNAi) to be used therapeutically to reversibly silence any gene. For RNAi to realize its therapeutic potential, small interfering RNA (siRNA) must be delivered to the site of action in the cells of target tissues. But finding safe and efficient delivery mechanisms is a major obstacle to achieving the full potential of siRNA-based therapies. Unmodified siRNA is unstable in the bloodstream, has the potential to cause immunogenicity , and has difficulty readily navigating cell membranes. [ 58 ] As a result, chemical alterations and/or delivery tools are needed to safely transfer siRNA to its site of action. [ 58 ] There are three main techniques of delivery for siRNA that differ on efficiency and toxicity.
In this technique siRNA first must be designed against the target gene. Once the siRNA is configured against the gene it has to be effectively delivered through a transfection protocol. Delivery is usually done by cationic liposomes , polymer nanoparticles, and lipid conjugation. [ 59 ] This method is advantageous because it can deliver siRNA to most types of cells, has high efficiency and reproducibility, and is offered commercially. The most common commercial reagents for transfection of siRNA are Lipofectamine and Neon Transfection. However, it is not compatible with all cell types and has low in vivo efficiency. [ 60 ] [ 61 ]
Electrical pulses are also used to deliver siRNA into cells intracellularly. The cell membrane is made of phospholipids, which makes it susceptible to an electric field. When quick but powerful electrical pulses are applied, the lipid molecules reorient themselves while undergoing thermal phase transitions because of heating. This results in the formation of hydrophilic pores and localized perturbations in the lipid bilayer cell membrane, also causing a temporary loss of semipermeability. This allows the escape of many intracellular contents, such as ions and metabolites, as well as the simultaneous uptake of drugs, molecular probes, and nucleic acids. For cells that are difficult to transfect, electroporation is advantageous; however, cell death is more probable with this technique. [ 62 ]
This method has been used to deliver siRNA targeting VEGF into the xenografted tumors in nude mice, which resulted in a significant suppression of tumor growth. [ 63 ]
The gene silencing effects of transfected designed siRNA are generally transient, but this difficulty can be overcome by expressing the siRNA from a DNA template. Delivering such siRNA from DNA templates can be done through several recombinant viral vectors based on retrovirus, adeno-associated virus, adenovirus , and lentivirus . [ 64 ] The latter is the most efficient virus that stably delivers siRNA to target cells, as it can transduce nondividing cells as well as directly target the nucleus. [ 65 ] These specific viral vectors have been synthesized to effectively facilitate siRNA that is not viable for transfection into cells. Another aspect is that in some cases synthetic viral vectors can integrate siRNA into the cell genome, which allows for stable expression of siRNA and long-term gene knockdown. This technique is advantageous because it is in vivo and effective for difficult-to-transfect cells. However, problems arise because it can trigger antiviral responses in some cell types, leading to mutagenic and immunogenic effects.
This method has potential use in gene silencing of the central nervous system for the treatment of Huntington's disease . [ 66 ]
A decade after the discovery of the RNAi mechanism in 1993, the pharmaceutical sector heavily invested in the research and development of siRNA therapy. There are several advantages that this therapy has over small molecules and antibodies. It can be administered quarterly or every six months. Another advantage is that, unlike small molecules and monoclonal antibodies that need to recognize a specific conformation of a protein, siRNA functions by Watson-Crick base-pairing with mRNA. Therefore, any target molecule can be treated with high affinity and specificity if the right nucleotide sequence is available. [ 46 ] One of the biggest challenges researchers needed to overcome was the identification and establishment of a delivery system through which the therapies would enter the body. Another was that the immune system often mistakes RNAi therapies for remnants of infectious agents, which can trigger an immune response. [ 4 ] Animal models did not accurately represent the degree of immune response seen in humans, and despite the promise of the treatment, investors divested from RNAi. [ 4 ]
However, there were a few companies that continued with the development of RNAi therapy for humans. Alnylam Pharmaceuticals , Sirna Therapeutics and Dicerna Pharmaceuticals are a few of the companies still working on bringing RNAi therapies to market. It was learned that almost all siRNA therapies administered in the bloodstream accumulate in the liver. That is why most of the early drug targets were diseases that affected the liver. Repeated developmental work also shed light on improving the chemical composition of the RNA molecule to reduce the immune response, subsequently causing little to no side effects. [ 67 ] Listed below are some approved therapies and therapies in the pipeline.
In 2018, Alnylam Pharmaceuticals became the first company to have a siRNA therapy approved by the FDA . Onpattro (patisiran) was approved for the treatment of polyneuropathy of hereditary transthyretin-mediated (hATTR) amyloidosis in adults. hATTR is a rare, progressively debilitating condition. During hATTR amyloidosis, misfolded transthyretin (TTR) protein is deposited in the extracellular space. Under typical folding conditions, TTR tetramers are made up of four monomers. Hereditary ATTR amyloidosis is caused by an inherited fault or mutation in the transthyretin (TTR) gene. Changing just one amino acid destabilizes the tetrameric transthyretin protein, which dissociates into monomers and forms insoluble extracellular amyloid deposits. Amyloid buildup in various organ systems causes cardiomyopathy, polyneuropathy, and gastrointestinal dysfunction. The disease affects 50,000 people worldwide. To deliver the drug directly to the liver, the siRNA is encased in a lipid nanoparticle. The siRNA molecule halts the production of amyloid proteins by interfering with the RNA production of abnormal TTR proteins. This prevents the accumulation of these proteins in different organs of the body and helps patients manage the disease. [ 68 ] [ 69 ]
Traditionally, liver transplantation has been the standard treatment for hereditary transthyretin amyloidosis, however its effectiveness may be limited by the persistent deposition of wild-type transthyretin amyloid after transplantation. There are also small molecule medications that provide temporary relief. Before Onpattro was released, the treatment options for hATTR were limited. After the approval of Onpattro, FDA awarded Alnylam with the Breakthrough Therapy Designation, which is given to drugs that are intended to treat a serious condition and are a substantial improvement over any available therapy. It was also awarded Orphan Drug Designations given to those treatments that are intended to safely treat conditions affecting less than 200,000 people. [ 70 ]
Patisiran, the RNA interference therapeutic in Onpattro, inhibits hepatic synthesis of transthyretin. Target messenger RNA (mRNA) is cleaved by small interfering RNAs coupled to the RNA-induced silencing complex . Patisiran uses this process to decrease the production of mutant and wild-type transthyretin by targeting the 3' untranslated region of transthyretin mRNA. [ 71 ]
In 2019, the FDA approved the second RNAi therapy, Givlaari (givosiran), used to treat acute hepatic porphyria (AHP). The disease is caused by the accumulation of toxic porphobilinogen (PBG) molecules which are formed during the production of heme. These molecules accumulate in different organs, which can lead to the symptoms or attacks of AHP.
Givlaari is an siRNA drug that downregulates the expression of aminolevulinic acid synthase 1 (ALAS1), a liver enzyme involved in an early step in heme production. The downregulation of ALAS1 lowers the levels of neurotoxic intermediates that cause AHP symptoms. [ 46 ]
Years of research have led to a greater understanding of siRNA therapies beyond those affecting the liver. As of 2019, Alnylam Pharmaceuticals was involved in therapies that may treat amyloidosis and CNS disorders like Huntington's disease and Alzheimer's disease . [ 4 ] They have also partnered with Regeneron Pharmaceuticals to develop therapies for CNS, eye and liver diseases.
As of 2020, ONPATTRO and GIVLAARI, were available for commercial application, and two siRNAs, lumasiran (ALN-GO1) and inclisiran , have been submitted for new drug application to the FDA. Several siRNAs are undergoing phase 3 clinical studies, and more candidates are in the early developmental stage. [ 46 ] In 2020, Alnylam and Vir pharmaceuticals announced a partnership and have started working on a RNAi therapy that would treat severe cases of COVID-19. [ 72 ]
Other companies that have had success in developing a pipeline of siRNA therapies are Dicerna Pharmaceuticals, partnered with Eli Lilly and Company , and Arrowhead Pharmaceuticals, partnered with Johnson and Johnson . Several other big pharmaceutical companies such as Amgen and AstraZeneca have also invested heavily in siRNA therapies as they see the potential success of this area of biological drugs. [ 73 ] | https://en.wikipedia.org/wiki/Small_interfering_RNA
In molecular biology and pharmacology , a small molecule or micromolecule is a low molecular weight (≤ 1000 daltons [ 1 ] ) organic compound that may regulate a biological process, with a size on the order of 1 nm [ citation needed ] . Many drugs are small molecules; the terms are equivalent in the literature. Larger structures such as nucleic acids and proteins , and many polysaccharides are not small molecules, although their constituent monomers (ribo- or deoxyribonucleotides, amino acids , and monosaccharides, respectively) are often considered small molecules. Small molecules may be used as research tools to probe biological function as well as leads in the development of new therapeutic agents . Some can inhibit a specific function of a protein or disrupt protein–protein interactions . [ 2 ]
Pharmacology usually restricts the term "small molecule" to molecules that bind specific biological macromolecules and act as an effector , altering the activity or function of the target . Small molecules can have a variety of biological functions or applications, serving as cell signaling molecules, drugs in medicine , pesticides in farming, and in many other roles. These compounds can be natural (such as secondary metabolites ) or artificial (such as antiviral drugs ); they may have a beneficial effect against a disease (such as drugs ) or may be detrimental (such as teratogens and carcinogens ).
The upper molecular-weight limit for a small molecule is approximately 900 daltons, which allows it to diffuse rapidly across cell membranes and reach intracellular sites of action. [ 1 ] [ 3 ] This molecular weight cutoff is also a necessary but insufficient condition for oral bioavailability, as it allows for transcellular transport through intestinal epithelial cells. In addition to intestinal permeability, the molecule must also dissolve in water reasonably quickly, have adequate water solubility, and undergo only moderate to low first-pass metabolism . A somewhat lower molecular weight cutoff of 500 daltons (as part of the " rule of five ") has been recommended for oral small molecule drug candidates, based on the observation that clinical attrition rates are significantly reduced if the molecular weight is kept below this limit. [ 4 ] [ 5 ]
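The molecular-weight cutoffs described above can be sketched as a simple classifier. This is an illustration only: the threshold values (900 Da for membrane diffusion, 500 Da from the rule of five) come from the text, while the function name and classification labels are our inventions, and real drug-likeness assessment involves many more criteria.

```python
# Hypothetical sketch of the molecular-weight cutoffs discussed above.
# Thresholds come from the text; labels and examples are illustrative.

def classify_by_weight(mw_daltons: float) -> str:
    """Classify a compound by the molecular-weight cutoffs above."""
    if mw_daltons <= 500:
        return "within rule-of-five limit (favorable for oral drug candidates)"
    elif mw_daltons <= 900:
        return "small molecule (may diffuse across cell membranes)"
    else:
        return "above small-molecule range"

print(classify_by_weight(180.16))  # aspirin-sized compound
print(classify_by_weight(734.0))   # macrolide-sized compound
print(classify_by_weight(5808.0))  # insulin-scale protein
```

Note that weight is only one factor: a 400 Da compound with poor solubility or heavy first-pass metabolism would still fail as an oral drug.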
Most pharmaceuticals are small molecules, although some drugs can be proteins (e.g., insulin and other biologic medical products ). With the exception of therapeutic antibodies , many proteins are degraded if administered orally and most often cannot cross cell membranes . Small molecules are more likely to be absorbed, although some of them are only absorbed after oral administration if given as prodrugs . One advantage that small molecule drugs (SMDs) have over "large molecule" biologics is that many small molecules can be taken orally whereas biologics generally require injection or another parenteral administration. [ 6 ] Small molecule drugs are also typically simpler to manufacture and cheaper for the purchaser. A downside is that not all targets are amenable to modification with small-molecule drugs; bacteria and cancers are often resistant to their effects. [ 7 ]
A variety of organisms including bacteria, fungi, and plants, produce small molecule secondary metabolites also known as natural products , which play a role in cell signaling, pigmentation and in defense against predation. Secondary metabolites are a rich source of biologically active compounds and hence are often used as research tools and leads for drug discovery. [ 8 ] Examples of secondary metabolites include:
Enzymes and receptors are often activated or inhibited by endogenous protein , but can be also inhibited by endogenous or exogenous small molecule inhibitors or activators , which can bind to the active site or on the allosteric site .
An example is the teratogen and carcinogen phorbol 12-myristate 13-acetate , which is a plant terpene that activates protein kinase C , which promotes cancer, making it a useful investigative tool. [ 10 ] There is also interest in creating small molecule artificial transcription factors to regulate gene expression , examples include wrenchnolol (a wrench shaped molecule). [ 11 ]
Binding of ligand can be characterised using a variety of analytical techniques such as surface plasmon resonance , microscale thermophoresis [ 12 ] or dual polarisation interferometry to quantify the reaction affinities and kinetic properties and also any induced conformational changes .
Small-molecule anti-genomic therapeutics , or SMAT, refers to a biodefense technology that targets DNA signatures found in many biological warfare agents. SMATs are new, broad-spectrum drugs that unify antibacterial, antiviral and anti-malarial activities into a single therapeutic that offers substantial cost benefits and logistic advantages for physicians and the military. [ 13 ] | https://en.wikipedia.org/wiki/Small_molecule |
Small-molecule sensors are an effective way to detect the presence of metal ions in solution. [ 1 ] Although many types exist, most small molecule sensors comprise a subunit that selectively binds to a metal that in turn induces a change in a fluorescent subunit. This change can be observed in the small molecule sensor's spectrum , which can be monitored using a detection system such as a microscope or a photodiode . [ 2 ] Different probes exist for a variety of applications, each with different dissociation constants with respect to a particular metal, different fluorescent properties, and sensitivities. They show great promise as a way to probe biological processes by monitoring metal ions at low concentrations in biological systems. Since they are by definition small and often capable of entering biological systems, they are conducive to many applications for which other more traditional bio-sensing are less effective or not suitable. [ 3 ]
Metal ions are essential to virtually all biological systems and hence studying their concentrations with effective probes is highly advantageous. Since metal ions are key to the causes of cancer , diabetes , and other diseases, monitoring them with probes that can provide insight into their concentrations with spatial and temporal resolution is of great interest to the scientific community. [ 3 ] There are many applications that one can envision for small molecule sensors. It has been shown that one can use them to differentiate effectively between acceptable and harmful concentrations of mercury in fish . [ 4 ] Further, since some types of neurons uptake zinc during their operation, these probes can be used as a way to track activity in the brain and could serve as an effective alternative to functional MRI . [ 5 ] One can also track and quantify the growth of a cell , such as a fibroblast , that uptakes metal ions as it constructs itself. [ 3 ] Numerous other biological processes can be tracked using small molecule sensors as many change metal concentrations as they occur, which can then be monitored. Still, the sensor must be tailored for its specific environment and sensing requirements. Depending on the application, the metal sensor should be selective for a certain type of metal, and especially needs to be able to bind its target metal with greater affinity than metals that naturally exist at high concentrations within the cell . Further, they should provide a response with a strong modulation in fluorescent spectrum and hence provide a high signal-to-noise ratio . Finally, it is essential that a sensor is not toxic to the biological system in which it is used. [ 3 ]
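The role of the dissociation constant in sensor selectivity can be illustrated with a minimal binding model. This sketch assumes simple 1:1 sensor-metal binding with the metal in large excess, in which case the fraction of sensor bound is [M] / (Kd + [M]); the numeric Kd and concentration values below are invented for illustration.

```python
# Minimal sketch: why Kd governs sensor selectivity.
# Assumes 1:1 binding with metal in excess; values are illustrative.

def fraction_bound(metal_conc_M: float, kd_M: float) -> float:
    """Equilibrium fraction of sensor bound: [M] / (Kd + [M])."""
    return metal_conc_M / (kd_M + metal_conc_M)

# A sensor with nanomolar Kd responds strongly to nanomolar target metal...
print(f"{fraction_bound(1e-9, 1e-9):.2f}")   # 0.50
# ...while an abundant background ion bound only with millimolar Kd
# occupies a negligible fraction of the sensor at trace metal levels.
print(f"{fraction_bound(1e-9, 1e-3):.2e}")
```

This is why a useful probe must bind its target metal with far greater affinity (lower Kd) than it binds metals present at high background concentrations in the cell.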
Most detection mechanisms involved in small molecule sensors comprise some modulation in the fluorescent behavior of the sensing molecule upon binding the target metal. When a metal coordinates to such a sensor, it may either enhance or reduce the original fluorescent emission. The former is known as the chelation-enhanced fluorescence (CHEF) effect, while the latter is called the chelation-enhanced quenching (CHEQ) effect. By changing the intensity of emission at different wavelengths, the resulting fluorescent spectrum may attenuate, amplify, or shift upon the binding and dissociation of a metal. This shift in spectra can be monitored using a detector such as a microscope or a photodiode. [ 2 ] [ 6 ] Listed below are some examples of mechanisms by which emission is modulated. Their participation in CHEQ or CHEF is dependent on the metal and small molecule sensor in question.
Fluorophores are essential to our measurement of the metal binding event, and indirectly, metal concentration. There are many types, all with different properties that make them advantageous for different applications. Some work as small metal sensors completely on their own while others must be complexed with a subunit that can chelate or bind a metal ion. Rhodamine for example undergoes a conformation change upon the binding of a metal ion. In so doing it switches between a colorless, non-fluorescent spirocyclic form to a fluorescent, pink open cyclic form. [ 2 ] [ 8 ] Quinoline based sensors have been developed that form luminescent complexes with Cd(II) and fluorescent ones with Zn(II). It is hypothesized to function by changing its lowest luminescent state from n– π * to π – π * when coordinating to a metal. [ 2 ] [ 9 ] [ 10 ] When the Dansyl group DNS binds to a metal, it loses a sulfonamide hydrogen, causing fluorescence quenching via a PET or reverse PET mechanism in which an electron is transferred either to or from the metal that is bound. [ 11 ]
Zinc is one of the most common metal ions in biological systems. [ 6 ] Small molecule sensors for it include:
Copper is a biologically important metal to detect. It has many sensors developed for it including:
Iron is used extensively in biological systems, a fact well known due to its role in hemoglobin . For it, there are many small molecule sensors including:
Cobalt sensors have been made that capitalize on the breaking of C-O bonds by Co(II) in a fluorescent probe known as Cobalt Probe 1 (CP1). [ 17 ]
Mercury is a toxic heavy metal , and as such it is important to be able to detect it in biological systems. Sensors include: | https://en.wikipedia.org/wiki/Small_molecule_sensors |
Small nuclear RNA ( snRNA ) is a class of small RNA molecules that are found within the splicing speckles and Cajal bodies of the cell nucleus in eukaryotic cells. The length of an average snRNA is approximately 150 nucleotides. They are transcribed by either RNA polymerase II or RNA polymerase III . [ 1 ] Their primary function is in the processing of pre- messenger RNA ( hnRNA ) in the nucleus. They have also been shown to aid in the regulation of transcription factors ( 7SK RNA ) or RNA polymerase II (B2 RNA), and maintaining the telomeres .
snRNA are always associated with a set of specific proteins, and the complexes are referred to as small nuclear ribonucleoproteins ( snRNP , often pronounced "snurps"). Each snRNP particle is composed of a snRNA component and several snRNP-specific proteins (including Sm proteins , a family of nuclear proteins). The most common human snRNA components of these complexes are known, respectively, as: U1 spliceosomal RNA , U2 spliceosomal RNA , U4 spliceosomal RNA , U5 spliceosomal RNA , and U6 spliceosomal RNA . Their nomenclature derives from their high uridine content.
snRNAs were discovered by accident during a gel electrophoresis experiment in 1966. [ 2 ] An unexpected type of RNA was found in the gel and investigated. Later analysis showed that these RNAs were high in uridylate content and localized to the nucleus.
snRNAs and small nucleolar RNAs (snoRNAs) are not the same, and neither is a subtype of the other; they are distinct classes of small RNAs. snoRNAs are small RNA molecules that play an essential role in RNA biogenesis and guide chemical modifications of ribosomal RNAs (rRNAs) and other RNAs (tRNAs and snRNAs). They are located in the nucleolus and the Cajal bodies of eukaryotic cells (the major sites of RNA synthesis); those located in Cajal bodies are called scaRNAs (small Cajal body-specific RNAs).
snRNA are often divided into two classes based upon both common sequence features as well as associated protein factors such as the RNA-binding LSm proteins. [ 3 ]
The first class, known as Sm-class snRNA , is more widely studied and consists of U1, U2, U4, U4atac , U5, U7 , U11 , and U12 . Sm-class snRNA are transcribed by RNA polymerase II . The pre-snRNA are transcribed and receive the usual 7-methylguanosine five-prime cap in the nucleus . They are then exported to the cytoplasm through nuclear pores for further processing. In the cytoplasm, the snRNA receive 3′ trimming to form a 3′ stem-loop structure, as well as hypermethylation of the 5′ cap to form trimethylguanosine. [ 4 ] The 3′ stem structure is necessary for recognition by the survival of motor neuron (SMN) protein. [ 5 ] This complex assembles the snRNA into stable ribonucleoproteins (RNPs). The modified 5′ cap is then required to import the snRNP back into the nucleus. All of these uridine-rich snRNA, with the exception of U7, form the core of the spliceosome . Splicing, or the removal of introns , is a major aspect of post-transcriptional modification, and takes place only in the nucleus of eukaryotes. U7 snRNA has been found to function in histone pre-mRNA processing. [ citation needed ]
The second class, known as Lsm-class snRNA , consists of U6 and U6atac . Lsm-class snRNAs are transcribed by RNA polymerase III and never leave the nucleus, in contrast to Sm-class snRNA. Lsm-class snRNAs contain a 5′-γ-monomethylphosphate cap [ 6 ] and a 3′ stem–loop, terminating in a stretch of uridines that form the binding site for a distinct heteroheptameric ring of Lsm proteins. [ 7 ]
Spliceosomes catalyse splicing , an integral step in eukaryotic precursor messenger RNA maturation. A splicing mistake in even a single nucleotide can be devastating to the cell, and a reliable, repeatable method of RNA processing is necessary to ensure cell survival. The spliceosome is a large, protein-RNA complex that consists of five small nuclear RNAs (U1, U2, U4, U5, and U6) and over 150 proteins. The snRNAs, along with their associated proteins, form ribonucleoprotein complexes (snRNPs), which bind to specific sequences on the pre-mRNA substrate. [ 8 ] This intricate process results in two sequential transesterification reactions. These reactions will produce a free lariat intron and ligate two exons to form a mature mRNA. There are two separate classes of spliceosomes. The major class, which is far more abundant in eukaryotic cells, splices primarily U2-type introns. The initial step of splicing is the binding of the U1 snRNP and its associated proteins to the 5′ splice site of the hnRNA . This creates the commitment complex which will constrain the hnRNA to the splicing pathway. [ 9 ] Then, U2 snRNP is recruited to the spliceosome binding site and forms complex A, after which the U5.U4/U6 tri-snRNP complex binds to complex A to form the structure known as complex B. After rearrangement, complex C is formed, and the spliceosome is active for catalysis. [ 10 ] In the catalytically active spliceosome U2 and U6 snRNAs fold to form a conserved structure called the catalytic triplex. [ 11 ] This structure coordinates two magnesium ions that form the active site of the spliceosome. [ 12 ] [ 13 ] This is an example of RNA catalysis .
In addition to this main spliceosome complex, there exists a much less common (~1%) minor spliceosome . This complex comprises U11, U12, U4atac, U6atac and U5 snRNPs. These snRNPs are functional analogs of the snRNPs used in the major spliceosome. The minor spliceosome splices U12-type introns. The two types of introns mainly differ in their splicing sites: U2-type introns have GT-AG 5′ and 3′ splice sites while U12-type introns have AT-AC at their 5′ and 3′ ends. The minor spliceosome carries out its function through a different pathway from the major spliceosome. [ citation needed ]
U1 snRNP is the initiator of spliceosomal activity in the cell by base pairing with the 5′ splice site of the pre-mRNA. In the major spliceosome, experimental data has shown that the U1 snRNP is present in equal stoichiometry with U2, U4, U5, and U6 snRNP. However, U1 snRNP's abundance in human cells is far greater than that of the other snRNPs. [ 14 ] Through U1 snRNA gene knockdown in HeLa cells, studies have shown the U1 snRNA holds great importance for cellular function. When U1 snRNA genes were knocked out, genomic microarrays showed an increased accumulation of unspliced pre-mRNA. [ 15 ] In addition, the knockout was shown to cause premature cleavage and polyadenylation primarily in introns located near the beginning of the transcript. When other uridine based snRNAs were knocked out, this effect was not seen. Thus, U1 snRNA–pre-mRNA base pairing was shown to protect pre-mRNA from polyadenylation as well as premature cleavage. This special protection may explain the overabundance of U1 snRNA in the cell. [ citation needed ]
Through the study of small nuclear ribonucleoproteins (snRNPs) and small nucleolar (sno)RNPs we have been able to better understand many important diseases.
Spinal muscular atrophy - Mutations in the survival motor neuron-1 (SMN1) gene result in the degeneration of spinal motor neurons and severe muscle wasting. The SMN protein assembles Sm-class snRNPs, and probably also snoRNPs and other RNPs. [ 16 ] Spinal muscular atrophy affects up to 1 in 6,000 people and is the second leading cause of neuromuscular disease, after Duchenne muscular dystrophy . [ 17 ]
Dyskeratosis congenita – Mutations in the assembled snRNPs are also found to be a cause of dyskeratosis congenita, a rare syndrome that presents by abnormal changes in the skin, nails and mucous membrane. Some ultimate effects of this disease include bone-marrow failure as well as cancer. This syndrome has been shown to arise from mutations in multiple genes, including dyskerin , telomerase RNA and telomerase reverse transcriptase . [ 18 ]
Prader–Willi syndrome – This syndrome affects as many as 1 in 12,000 people and has a presentation of extreme hunger, cognitive and behavioural problems, poor muscle tone and short stature. [ 19 ] The syndrome has been linked to the deletion of a region of paternal chromosome 15 that is not expressed on the maternal chromosome. This region includes a brain-specific snRNA that targets the serotonin -2C receptor mRNA . [ citation needed ]
Medulloblastoma – The U1 snRNA is mutated in a subset of these brain tumors , and leads to altered RNA splicing . [ 20 ] The mutations predominantly occur in adult tumors, and are associated with poor prognosis.
In eukaryotes , snRNAs contain a significant amount of 2′-O-methylation modifications and pseudouridylations . [ 21 ] These modifications are associated with snoRNA activity, which canonically modifies precursor rRNAs but has also been observed modifying other cellular RNA targets such as snRNAs. Finally, oligo-adenylation (short poly(A)-tailing) can determine the fate of snRNAs (which are usually not poly(A)-tailed) and thereby induce their RNA decay . [ 22 ] This mechanism regulating the abundance of snRNAs is in turn coupled to a widespread change of alternative RNA splicing. [ citation needed ] | https://en.wikipedia.org/wiki/Small_nuclear_RNA
In molecular biology , small nucleolar RNAs ( snoRNAs ) are a class of small RNA molecules that primarily guide chemical modifications of other RNAs, mainly ribosomal RNAs , transfer RNAs and small nuclear RNAs . There are two main classes of snoRNA, the C/D box snoRNAs, which are associated with methylation , and the H/ACA box snoRNAs, which are associated with pseudouridylation .
SnoRNAs are commonly referred to as guide RNAs but should not be confused with the guide RNAs that direct RNA editing in trypanosomes or the guide RNAs (gRNAs) used by Cas9 for CRISPR gene editing .
After transcription , nascent rRNA molecules (termed pre-rRNA) undergo a series of processing steps to generate the mature rRNA molecule. Prior to cleavage by exo- and endonucleases, the pre-rRNA undergoes a complex pattern of nucleoside modifications. These include methylations and pseudouridylations, guided by snoRNAs.
Each snoRNA molecule acts as a guide for only one (or two) individual modifications in a target RNA. [ 2 ] In order to carry out modification, each snoRNA associates with at least four core proteins in an RNA/protein complex referred to as a small nucleolar ribonucleoprotein particle (snoRNP). [ 3 ] The proteins associated with each RNA depend on the type of snoRNA molecule (see snoRNA guide families below). The snoRNA molecule contains an antisense element (a stretch of 10–20 nucleotides ), which is base complementary to the sequence surrounding the base ( nucleotide ) targeted for modification in the pre-RNA molecule. This enables the snoRNP to recognise and bind to the target RNA. Once the snoRNP has bound to the target site, the associated proteins are in the correct physical location to catalyse the chemical modification of the target base. [ 4 ]
The two different types of rRNA modification (methylation and pseudouridylation) are directed by two different families of snoRNAs. These families of snoRNAs are referred to as antisense C/D box and H/ACA box snoRNAs based on the presence of conserved sequence motifs in the snoRNA. There are exceptions, but as a general rule C/D box members guide methylation and H/ACA members guide pseudouridylation. The members of each family may vary in biogenesis, structure, and function, but each family is classified by the following generalised characteristics. For more detail, see review. [ 5 ] SnoRNAs are classified under small nuclear RNA in MeSH . The HGNC , in collaboration with snoRNABase and experts in the field, has approved unique names for human genes that encode snoRNAs. [ 6 ]
C/D box snoRNAs contain two short conserved sequence motifs, C (RUGAUGA) and D (CUGA), located near the 5′ and 3′ ends of the snoRNA, respectively. Short regions (~ 5 nucleotides) located upstream of the C box and downstream of the D box are usually base complementary and form a stem-box structure, which brings the C and D box motifs into close proximity. This stem-box structure has been shown to be essential for correct snoRNA synthesis and nucleolar localization. [ 7 ] Many C/D box snoRNA also contain an additional less-well-conserved copy of the C and D motifs (referred to as C' and D') located in the central portion of the snoRNA molecule. A conserved region of 10–21 nucleotides upstream of the D box is complementary to the methylation site of the target RNA and enables the snoRNA to form an RNA duplex with the RNA. [ 8 ] The nucleotide to be modified in the target RNA is usually located at the 5th position upstream from the D box (or D' box). [ 9 ] [ 10 ] C/D box snoRNAs associate with four evolutionary conserved and essential proteins— fibrillarin (Nop1p), NOP56 , NOP58 , and SNU13 (15.5-kD protein in eukaryotes; its archaeal homolog is L7Ae)—which make up the core C/D box snoRNP. [ 5 ]
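The "fifth position upstream of the D box" rule above can be sketched programmatically. This is a hypothetical illustration: the sequences, the fixed guide length, and the function name are our inventions, and real target prediction also involves structural and thermodynamic criteria this sketch ignores.

```python
# Hypothetical sketch of the C/D box "D + 5" rule: the guide region sits
# immediately upstream of the D box (CUGA), pairs antiparallel with the
# target, and the target nucleotide paired to the guide position five
# nucleotides upstream of the D box is the one predicted to be
# 2'-O-methylated. Sequences below are invented, not real snoRNAs.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def find_methylation_site(snorna: str, target: str, guide_len: int = 12):
    """Return the 0-based index of the predicted methylated nucleotide
    in `target`, or None if the guide does not pair with it."""
    d_box = snorna.rfind("CUGA")          # locate the D box motif
    if d_box < guide_len:
        return None
    guide = snorna[d_box - guide_len:d_box]
    # Antiparallel pairing: the matching target stretch is the
    # reverse complement of the guide.
    pairing = "".join(COMPLEMENT[b] for b in reversed(guide))
    hit = target.find(pairing)
    if hit == -1:
        return None
    # The guide base 5 nt upstream of the D box pairs with the target
    # base at offset 4 from the 5' end of the matched stretch.
    return hit + 4

snorna = "GGAUGGCUACGUACCUGAAA"   # invented: 12-nt guide + CUGA D box
target = "CCCGUACGUAGCCAUGGG"    # invented target RNA
print(find_methylation_site(snorna, target))  # -> 7
```

The returned index marks the target nucleotide that a snoRNP built on this guide would present to fibrillarin for methylation, under the assumptions above.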
There exists a eukaryotic C/D box snoRNA ( snoRNA U3 ) that has not been shown to guide 2′- O -methylation.
Instead, it functions in rRNA processing by directing pre-rRNA cleavage.
H/ACA box snoRNAs have a common secondary structure consisting of two hairpins and two single-stranded regions, termed a hairpin-hinge-hairpin-tail structure. [ 5 ] H/ACA snoRNAs also contain conserved sequence motifs known as the H box (consensus ANANNA) and the ACA box (ACA). Both motifs are usually located in the single-stranded regions of the secondary structure. The H motif is located in the hinge and the ACA motif is located in the tail region, 3 nucleotides from the 3′ end of the sequence. The hairpin regions contain internal bulges known as recognition loops in which the antisense guide sequences (bases complementary to the target sequence) are located. These guide sequences essentially mark the location of the uridine on the target rRNA that is going to be modified. This recognition sequence is bipartite (constructed from the two different arms of the loop region) and forms complex pseudo-knots with the target RNA. H/ACA box snoRNAs associate with four evolutionary conserved and essential proteins— dyskerin (Cbf5p), GAR1 , NHP2 , and NOP10 —which make up the core of the H/ACA box snoRNP. [ 5 ] Dyskerin is likely the catalytic component of the ribonucleoprotein (RNP) complex because it possesses several conserved pseudouridine synthase sequences, and is closely related to the pseudouridine synthase that modifies uridine in tRNA . In lower eukaryotic cells such as trypanosomes, similar RNAs exist in the form of a single hairpin structure with an AGA box instead of an ACA box at the 3′ end of the RNA. [ 12 ] Like trypanosomes, Entamoeba histolytica has a mixed population of single-hairpin and double-hairpin H/ACA box snoRNAs. Processing of the double-hairpin H/ACA box snoRNAs into single-hairpin snoRNAs has been reported; however, unlike in trypanosomes, these retain a regular ACA motif at the 3′ tail. [ 19 ]
The RNA component of human telomerase (hTERC) contains an H/ACA domain for pre-RNP formation and nucleolar localization of the telomerase RNP itself. [ 13 ] The H/ACA snoRNP has been implicated in the rare genetic disease dyskeratosis congenita (DKC) due to its affiliation with human telomerase. Mutations in the protein component of the H/ACA snoRNP result in a reduction in physiological TERC levels. This has been strongly correlated with the pathology behind DKC, which seems to be primarily a disease of poor telomere maintenance.
An unusual guide snoRNA U85 that functions in both 2′-O-ribose methylation and pseudouridylation of small nuclear RNA (snRNA) U5 has been identified. [ 14 ] This composite snoRNA contains both C/D and H/ACA box domains and associates with the proteins specific to each class of snoRNA (fibrillarin and Gar1p, respectively). More composite snoRNAs have now been characterised. [ 15 ]
These composite snoRNAs have been found to accumulate in a subnuclear organelle called the Cajal body and are referred to as small Cajal body-specific RNAs (scaRNAs). This is in contrast to the majority of C/D box or H/ACA box snoRNAs, which localise to the nucleolus. These Cajal body specific RNAs are proposed to be involved in the modification of RNA polymerase II transcribed spliceosomal RNAs U1, U2, U4, U5 and U12. [ 15 ] Not all snoRNAs that have been localised to Cajal bodies are composite C/D and H/ACA box snoRNAs.
The targets for newly identified snoRNAs are predicted on the basis of sequence complementarity between putative target RNAs and the antisense elements or recognition loops in the snoRNA sequence. However, there are increasing numbers of 'orphan' guides without any known RNA targets, which suggests that there may be more modification targets than previously recognized and/or that some snoRNAs have functions unrelated to rRNA. [ 16 ] [ 17 ] There is evidence that some of these orphan snoRNAs regulate alternatively spliced transcripts. [ 18 ] For example, it appears that the C/D box snoRNA SNORD115 regulates the alternative splicing of the serotonin 2C receptor mRNA via a conserved region of complementarity. [ 19 ] [ 20 ] Another C/D box snoRNA, SNORD116 , that resides in the same cluster as SNORD115 has been predicted to have 23 possible targets within protein coding genes using a bioinformatic approach. Of these, a large fraction were found to be alternatively spliced, suggesting a role of SNORD116 in the regulation of alternative splicing. [ 21 ]
More recently, SNORD90 has been suggested to be able to guide N6-methyladenosine (m6A) modifications onto target RNA transcripts. [ 22 ] More specifically, Lin et al. demonstrated that SNORD90 can reduce the expression of neuregulin 3 (NRG3). [ 22 ]
The precise effect of the methylation and pseudouridylation modifications on the function of the mature RNAs is not yet known. The modifications do not appear to be essential but are known to subtly enhance the RNA folding and interaction with ribosomal proteins. In support of their importance, target site modifications are exclusively located within conserved and functionally important domains of the mature RNA and are commonly conserved among distant eukaryotes. [ 5 ] A novel method, Nm-REP-seq, was developed for enriching 2'-O-Methylations guided by C/D snoRNAs by using RNA exoribonuclease (Mycoplasma genitalium RNase R, MgR) and periodate oxidation reactivity to eliminate 2'-hydroxylated (2'-OH) nucleosides. [ 23 ]
SnoRNAs are located diversely in the genome. The majority of vertebrate snoRNA genes are encoded in the introns of genes encoding proteins involved in ribosome synthesis or translation, and are synthesized by RNA polymerase II . SnoRNAs are also shown to be located in intergenic regions, ORFs of protein coding genes, and UTRs. [ 24 ] SnoRNAs can also be transcribed from their own promoters by RNA polymerase II or III .
In the human genome, there are at least two examples where C/D box snoRNAs are found in tandem repeats within imprinted loci. These two loci (14q32 on chromosome 14 and 15q11q13 on chromosome 15) have been extensively characterised, and in both regions multiple snoRNAs have been found located within introns in clusters of closely related copies.
In 15q11q13, five different snoRNAs have been identified ( SNORD64 , SNORD107, SNORD108, SNORD109 (two copies), SNORD116 (29 copies) and SNORD115 (48 copies). Loss of the 29 copies of SNORD116 (HBII-85) from this region has been identified as a cause of Prader-Willi syndrome [ 25 ] [ 26 ] [ 27 ] [ 28 ] whereas gain of additional copies of SNORD115 has been linked to autism . [ 29 ] [ 30 ] [ 31 ]
Region 14q32 contains repeats of two snoRNAs SNORD113 (9 copies) and SNORD114 (31 copies) within the introns of a tissue-specific ncRNA transcript ( MEG8 ). The 14q32 domain has been shown to share common genomic features with the imprinted 15q11-q13 loci and a possible role for tandem repeats of C/D box snoRNAs in the evolution or mechanism of imprinted loci has been suggested. [ 32 ] [ 33 ]
snoRNAs can function as miRNAs . It has been shown that human ACA45 is a bona fide snoRNA that can be processed into a 21-nucleotide-long mature miRNA by the RNase III family endoribonuclease dicer . [ 34 ] This snoRNA product has previously been identified as mmu-miR-1839 and was shown to be processed independently from the other miRNA-generating endoribonuclease drosha . [ 35 ] Bioinformatic analyses have revealed that putatively snoRNA-derived, miRNA-like fragments occur in different organisms. [ 36 ]
Recently, it has been found that snoRNAs can have functions not related to rRNA. One such function is the regulation of alternative splicing of the trans gene transcript, which is done by the snoRNA HBII-52 , which is also known as SNORD115. [ 19 ]
In November 2012, Schubert et al. revealed that specific RNAs control chromatin compaction and accessibility in Drosophila cells. [ 37 ]
In July 2023, Lin et al. showed that snoRNAs have the potential to guide other RNA modifications, specifically N6-methyladenosine , however this is subject to further investigation. [ 22 ]
TB11Cs4H1 is a member of the H/ACA-like class of non-coding RNA ( ncRNA ) molecule (a snoRNA) that guide the sites of modification of uridines to pseudouridines of substrate RNAs. TB11Cs4H1 is predicted to guide the pseudouridylation of LSU3 ribosomal RNA ( rRNA ) at residue Ψ1357. [ 38 ] | https://en.wikipedia.org/wiki/Small_nucleolar_RNA |
In mathematics, especially in category theory , Quillen’s small object argument , when applicable, constructs a factorization of a morphism in a functorial way. In practice, it can be used to show that some class of morphisms constitutes a weak factorization system in the theory of model categories.
The argument was introduced by Quillen to construct a model structure on the category of (reasonable) topological spaces. [ 1 ] The original argument was later refined by Garner. [ 2 ]
Let C {\displaystyle C} be a category that has all small colimits. We say an object x {\displaystyle x} in it is compact with respect to an ordinal ω {\displaystyle \omega } if Hom ( x , − ) {\displaystyle \operatorname {Hom} (x,-)} commutes with ω {\displaystyle \omega } -filtered colimits. In practice, we fix ω {\displaystyle \omega } and simply say an object is compact if it is so with respect to that fixed ω {\displaystyle \omega } .
If F {\displaystyle F} is a class of morphisms, we write l ( F ) {\displaystyle l(F)} for the class of morphisms that satisfy the left lifting property with respect to F {\displaystyle F} . Similarly, we write r ( F ) {\displaystyle r(F)} for the right lifting property. Then
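The lifting properties invoked here can be spelled out explicitly; this is the standard definition, stated for the reader's convenience rather than taken from this source.

```latex
% A morphism $i\colon A \to B$ has the left lifting property with
% respect to $p\colon X \to Y$ (equivalently, $p$ has the right lifting
% property with respect to $i$) if every commutative square
\[
\begin{array}{ccc}
A & \xrightarrow{\;f\;} & X \\
{\scriptstyle i}\big\downarrow & & \big\downarrow{\scriptstyle p} \\
B & \xrightarrow{\;g\;} & Y
\end{array}
\]
% admits a diagonal lift $h\colon B \to X$ with $h \circ i = f$ and
% $p \circ h = g$. Thus $l(F)$ consists of the morphisms with the left
% lifting property against every member of $F$, and $r(F)$ of those
% with the right lifting property against every member of $F$.
```

In particular, l(r(F)) always contains F, which is why the factorization in the theorem below refines any factorization through F itself.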
Theorem — [ 3 ] [ 4 ] Let F {\displaystyle F} be a class of morphisms in C {\displaystyle C} . If the source (domain) of each morphism in F {\displaystyle F} is compact, then each morphism f {\displaystyle f} in C {\displaystyle C} admits a functorial factorization f = p ∘ i {\displaystyle f=p\circ i} where i , p {\displaystyle i,p} are in l ( r ( F ) ) , r ( F ) {\displaystyle l(r(F)),r(F)} .
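The lifting properties used in the theorem can be spelled out concretely. As a sketch (this is the standard definition, not quoted from the source), a morphism i {\displaystyle i} has the left lifting property with respect to p {\displaystyle p} — equivalently, p {\displaystyle p} has the right lifting property with respect to i {\displaystyle i} — when every commutative square admits a diagonal filler:

```latex
% i : A -> B has the left lifting property with respect to p : X -> Y
% (and p the right lifting property with respect to i) when, for every
% commutative square  p \circ f = g \circ i,  a diagonal h exists:
%
%      A --f--> X
%      |      ^ |
%      i    h/  p
%      v    /   v
%      B --g--> Y
%
\[
\exists\, h \colon B \to X \quad\text{such that}\quad
h \circ i = f \quad\text{and}\quad p \circ h = g .
\]
```

With this definition, l ( F ) {\displaystyle l(F)} and r ( F ) {\displaystyle r(F)} in the theorem are the classes of morphisms lifting on the left, respectively right, against everything in F {\displaystyle F} .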
Here is a simple example of how the argument works in the case of the category C {\displaystyle C} of presheaves on some small category. [ 5 ]
Let I {\displaystyle I} denote the set of monomorphisms of the form K → L {\displaystyle K\to L} , L {\displaystyle L} a quotient of a representable presheaf. Then l ( r ( I ) ) {\displaystyle l(r(I))} can be shown to be equal to the class of monomorphisms. Then the small object argument says: each presheaf morphism f {\displaystyle f} can be factored as f = p ∘ i {\displaystyle f=p\circ i} where i {\displaystyle i} is a monomorphism and p {\displaystyle p} is in r ( I ) = r ( l ( r ( I ) ) ) {\displaystyle r(I)=r(l(r(I)))} ; i.e., p {\displaystyle p} is a morphism having the right lifting property with respect to monomorphisms.
For now, see [ 6 ] . Roughly, the construction is a sort of successive approximation. | https://en.wikipedia.org/wiki/Small_object_argument
A small stationary reformer is an on-site device used for the production of hydrogen from hydrocarbons on a small scale. [ 1 ] [ 2 ]
A membrane reactor is a device where oxygen separation, steam reforming and POX are combined in a single step. In 1997 Argonne National Laboratory and Amoco published a paper "Ceramic membrane reactor for converting methane to syngas" [ 3 ] which resulted in different small scale systems that combined an ATR based oxygen membrane with a water-gas shift reactor and a hydrogen membrane.
Partial oxidation (POX) is a type of chemical reaction . It occurs when a substoichiometric fuel-air mixture is partially combusted in a reformer, creating a hydrogen-rich syngas which can then be put to further use, for example in a fuel cell . A distinction is made between thermal partial oxidation (TPOX) and catalytic partial oxidation (CPOX).
The capital cost of methane reformer plants is prohibitive for small to medium size applications because the technology does not scale down well. Conventional steam reforming plants operate at pressures between 200 and 600 psi with outlet temperatures in the range of 815 to 925 °C. However, analyses have shown that even though it is more costly to construct, a well-designed SMR can produce hydrogen more cost-effectively than an ATR. [ 6 ] To lower costs, both the operating pressure and temperature are reduced, which allows for the use of cheaper materials. | https://en.wikipedia.org/wiki/Small_stationary_reformer
In geometry , the small stellated 120-cell or stellated polydodecahedron is a regular star 4-polytope with Schläfli symbol {5/2,5,3}. It is one of 10 regular Schläfli-Hess polytopes .
It has the same edge arrangement as the great grand 120-cell , and also shares its 120 vertices with the 600-cell and eight other regular star 4-polytopes. It may also be seen as the first stellation of the 120-cell. In this sense it could be seen as analogous to the three-dimensional small stellated dodecahedron , which is the first stellation of the dodecahedron . Indeed, the small stellated 120-cell is dual to the icosahedral 120-cell , which could be taken as a 4D analogue of the great dodecahedron , dual of the small stellated dodecahedron.
The edges of the small stellated 120-cell are τ 2 {\displaystyle \tau ^{2}} times as long as those of the 120-cell core inside the 4-polytope.
This 4-polytope article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Small_stellated_120-cell |
In the geometry of hyperbolic 4-space , the small stellated 120-cell honeycomb is one of four regular star- honeycombs . With Schläfli symbol {5/2,5,3,3}, it has three small stellated 120-cells around each face. It is dual to the pentagrammic-order 600-cell honeycomb .
It can be seen as a stellation of the 120-cell honeycomb , and is thus analogous to the three-dimensional small stellated dodecahedron {5/2,5} and four-dimensional small stellated 120-cell {5/2,5,3}. It has density 5.
This geometry-related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Small_stellated_120-cell_honeycomb |
Plasmid vectors are circular strands of DNA , found naturally in bacteria , that are used in genetic engineering to integrate new genes into a host cell genome.
The small T intron is an intron that is used in some plasmid vectors in order to induce gene expression in mammalian cells.
The function of this intron in the vectors is unknown, but it is theorized that it might be involved in splicing or translation efficiency. [ 1 ] [ 2 ]
Vectors such as pME18s contain it.
This cell biology article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Small_t_intron |
In data compression and the theory of formal languages , the smallest grammar problem is the problem of finding the smallest context-free grammar that generates a given string of characters (but no other string). The size of a grammar is defined by some authors as the number of symbols on the right side of the production rules. [ 1 ] Others also add the number of rules to that. [ 2 ] A grammar that generates only a single string, as required for the solution to this problem, is called a straight-line grammar . [ 3 ]
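As a concrete illustration (the string and rule names here are hypothetical, not drawn from the source), a straight-line grammar can be smaller than the string it generates. The sketch below expands such a grammar and measures its size by the right-hand-side symbol count, one of the conventions mentioned above:

```python
# Sketch: expand a straight-line grammar and compute its size
# (size = total number of symbols on the right-hand sides, per one
# common convention). The grammar and string are illustrative.

def expand(grammar, symbol):
    """Recursively expand a nonterminal into the single string it derives."""
    if symbol not in grammar:          # terminal character
        return symbol
    return "".join(expand(grammar, s) for s in grammar[symbol])

def grammar_size(grammar):
    """Count symbols on the right-hand sides of all production rules."""
    return sum(len(rhs) for rhs in grammar.values())

# S -> AA, A -> BB, B -> ab derives exactly "abababab" and nothing else.
g = {"S": ["A", "A"], "A": ["B", "B"], "B": ["a", "b"]}
print(expand(g, "S"))       # abababab
print(grammar_size(g))      # 6, versus string length 8
```

Because each nonterminal has exactly one rule and the rules are acyclic, the grammar derives a single string, which is what makes it straight-line.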
Every binary string of length n {\displaystyle n} has a grammar of length O ( n / log n ) {\displaystyle O(n/\log n)} , as expressed using big O notation . [ 3 ] For binary de Bruijn sequences , no better length is possible. [ 4 ]
The (decision version of the) smallest grammar problem is NP-complete . [ 1 ] It can be approximated in polynomial time to within a logarithmic approximation ratio ; more precisely, the ratio is O ( log n g ) {\displaystyle O(\log {\tfrac {n}{g}})} where n {\displaystyle n} is the length of the given string and g {\displaystyle g} is the size of its smallest grammar. It is hard to approximate to within a constant approximation ratio. An improvement of the approximation ratio to o ( log n / log log n ) {\displaystyle o(\log n/\log \log n)} would also improve certain algorithms for approximate addition chains . [ 5 ]
This algorithms or data structures -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Smallest_grammar_problem |
A smalley is a type of small excavator with two wheels on a single axle. It had no drive to the wheels, moving instead by pulling itself along using the excavator or ' backhoe ' arm. Once in location the machine worked as any other 360° excavator, with two fixed-adjustable front legs, and two rear legs which could be mechanically height-adjusted from within the cab. For larger distances the machine could be towed on the road at moderate speeds using a suitable vehicle such as a Land Rover or large van .
Smalley evolved over the years and produced the Smalley 425, which has two drive wheels and two steering wheels. It uses a Lister ST1 diesel engine: single cylinder, 6.5 hp , with a 360-degree turn, no electrics, and manual start. Later models, which are still made today, use a different engine and have an alternator to power an electric starter motor. The 1977 model needed the side supports, or else it would be too easy to tip over.
Richard Smalley is credited with being the inventor of the world's first mini excavator in 1959. Although now superseded by tracked derivatives of the compact excavator , the concept was highly successful in allowing a compact and cost-effective machine, with these 'walking' or 'tow-behind' excavators having been sold into more than forty countries throughout the world (including over 100 machines to Japan before 1968). [ 1 ]
The manufacturer, Richard Smalley (Engineering) Ltd . was based in Osbournby near Sleaford in Lincolnshire , England.
This tool article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Smalley_(excavator) |
The smallpox virus retention debate has been going on among scientists and health officials since the smallpox virus was declared eradicated by the World Health Organization (WHO) in 1980. [ 1 ] [ 2 ] The debate centers on whether the last two known remnants of the Variola virus known to cause smallpox, which are kept in tightly controlled government laboratories in the United States and Russia , should be finally and irreversibly destroyed. Advocates of final destruction maintain that there is no longer any valid rationale for retaining the samples, which pose the hazard of escaping the laboratories, while opponents of destruction maintain that the samples may still be of value to scientific research, especially since variants of the smallpox virus may still exist in the natural world and thus present the possibility of the disease re-emerging in the future or being used as a bio-weapon .
In 1981, the four countries that either served as a WHO collaborating center or were actively working with variola virus were the United States, the United Kingdom, the Soviet Union, and South Africa. The last cases of smallpox occurred in an outbreak of two cases , one of which was fatal, in Birmingham , United Kingdom, in 1978. A medical photographer , Janet Parker , contracted the disease at the University of Birmingham Medical School and died on September 11, 1978. [ 3 ] In light of this incident, all known stocks of the smallpox virus were destroyed or transferred to one of two World Health Organization reference laboratories which had BSL-4 facilities—the Centers for Disease Control and Prevention (CDC) in the United States and the State Research Center of Virology and Biotechnology VECTOR in Koltsovo , Soviet Union. [ 4 ] Since 1984, only these two labs have been authorized by the WHO to hold stocks of live smallpox virus. [ 5 ]
In 1986, the WHO first recommended destruction of all smallpox samples, and later set the date of destruction to be 30 December 1993. This was postponed to 30 June 1999, [ 6 ] then again to 30 June 2002. Due to resistance from the U.S. and Russia, in 2002 the World Health Assembly agreed to permit temporary retention of the virus stocks for specific research purposes. [ 7 ] Destroying existing stocks would reduce the risk involved with ongoing smallpox research; the stocks are not needed to respond to a smallpox outbreak. [ 8 ] Some scientists have argued that the stocks may be useful in developing new vaccines , antiviral drugs , and diagnostic tests. [ 9 ] A 2010 review by a team of public health experts appointed by the WHO, however, concluded that no essential public health purpose is served by the American and Russian laboratories continuing to retain live virus stocks. [ 10 ] The latter view is frequently supported in the scientific community, particularly among veterans of the WHO Smallpox Eradication Program (1958–1979). [ 11 ]
An Ad Hoc Committee on Orthopox Infections, advising the WHO, has debated the fate of the remaining samples of smallpox in the remaining two official repositories since 1980. Smallpox expert D. A. Henderson has been foremost in favor of destruction, while U.S. Army scientist Peter Jahrling has argued against it on the basis that further research is needed, since he believes that smallpox almost certainly exists outside of the repositories. [ 12 ] Other scientists have expressed similar opinions. [ 13 ]
In 2011, Kathleen Sebelius , Secretary of the U.S. Department of Health and Human Services , laid out the rationale of the administration of President Barack Obama in a New York Times op-ed piece. She said, in part:
The global public health community assumes that all nations acted in good faith; however, no one has ever attempted to verify or validate compliance with the WHO request.... Although keeping the samples may carry a minuscule risk, both the United States and Russia believe the dangers of destroying them now are far greater.... It is quite possible that undisclosed or forgotten stocks exist. Also, 30 years after the disease was eradicated, the virus' genomic information is available online and the technology now exists for someone with the right tools and the wrong intentions to create a new smallpox virus in a laboratory.... Destroying the virus now is merely a symbolic act that would slow our progress and could even stop it completely, leaving the world vulnerable.... Destruction of the last securely stored viruses is an irrevocable action that should occur only when the global community has eliminated the threat of smallpox once and for all. To do any less keeps future generations at risk from the re-emergence of one of the deadliest diseases humanity has ever known. Until this research is complete, we cannot afford to take that risk. [ 14 ]
As of May 2018, based on the latest (19th) meeting of the WHO Advisory Committee on Variola Virus Research (1–2 November 2017), the question remained as to whether the use of live variola virus for their further development was "essential for public health." [ 19 ]
In September 2019, the Russian lab housing smallpox samples experienced a gas explosion that injured one worker. It did not occur near the virus storage area, and no samples were compromised, but the incident prompted a review of risks to containment. [ 20 ]
In November 2021 the CDC announced that several frozen vials labeled "Smallpox" were discovered in a freezer in a Merck & Co. vaccine research facility at Montgomery County, Pennsylvania . [ 21 ] [ 22 ] [ 23 ] The vials were determined to contain the vaccinia virus, used in making the vaccine, not the variola virus, which causes smallpox. [ 24 ] | https://en.wikipedia.org/wiki/Smallpox_virus_retention_debate |
A smart object is an object that enhances interaction not only with people but also with other smart objects. Also known as smart connected products or smart connected things ( SCoT ), they are products, assets and other things embedded with processors, sensors, software and connectivity that allow data to be exchanged between the product and its environment, manufacturer, operator/user, and other products and systems. Connectivity also enables some capabilities of the product to exist outside the physical device, in what is known as the product cloud. The data collected from these products can be then analyzed to inform decision-making, enable operational efficiencies and continuously improve the performance of the product.
It can not only refer to interaction with physical world objects but also to interaction with virtual (computing environment) objects. A smart physical object may be created either as an artifact or manufactured product or by embedding electronic tags such as RFID tags or sensors into non-smart physical objects. Smart virtual objects are created as software objects that are intrinsic when creating and operating a virtual or cyber world simulation or game. The concept of a smart object has several origins and uses, see History. There are also several overlapping terms, see also smart device , tangible object or tangible user interface and Thing as in the Internet of things .
In the early 1990s, Mark Weiser, from whom the term ubiquitous computing originated, referred to a vision "When almost every object either contains a computer or can have a tab attached to it, obtaining information will be trivial", [ 1 ] [ 2 ] Although Weiser did not specifically refer to an object as being smart, his early work did imply that smart physical objects are smart in the sense that they act as digital information sources. Hiroshi Ishii and Brygg Ullmer [ 3 ] refer to tangible objects in terms of tangibles bits or tangible user interfaces that enable users to "grasp & manipulate" bits in the center of users' attention by coupling the bits with everyday physical objects and architectural surfaces.
The smart object concept was introduced by Marcelo Kallman and Daniel Thalmann [ 4 ] as an object that can describe its own possible interactions. The main focus here is to model interactions of smart virtual objects with virtual humans, agents, in virtual worlds. The opposite approach to smart objects is 'plain' objects that do not provide this information. The additional information provided by this concept enables far more general interaction schemes, and can greatly simplify the planner of an artificial intelligence agent . [ 4 ]
In contrast to smart virtual objects used in virtual worlds, Lev Manovich focuses on physical space filled with electronic and visual information. Here, "smart objects" are described as "objects connected to the Net; objects that can sense their users and display smart behaviour". [ 5 ]
More recently in the early 2010s, smart objects are being proposed as a key enabler for the vision of the Internet of things. [ 6 ] The combination of the Internet and emerging technologies such as near field communications , real-time localization, and embedded sensors enables everyday objects to be transformed into smart objects that can understand and react to their environment. Such objects are building blocks for the Internet of things and enable novel computing applications. [ 6 ] In 2018, one of the world's first smart houses was built in Klaukkala , Finland in the form of a five-floor apartment block, using the Kone Residential Flow solution created by KONE , allowing even a smartphone to act as a home key. [ 7 ] [ 8 ]
Although we can view interaction with physical smart object in the physical world as distinct from interaction with virtual smart objects in a virtual simulated world, these can be related. Poslad [ 2 ] considers the progression of: how
The term smart , applied to a physical object, simply means that the object is active, digital, networked, can operate to some extent autonomously, is reconfigurable, and has local control of the resources it needs, such as energy and data storage. [ 2 ] Note that a smart object does not necessarily need to be intelligent in the sense of exhibiting a strong essence of artificial intelligence —although it can be designed to also be intelligent.
Physical world smart objects can be described in terms of three properties: [ 6 ]
Based upon these properties, these have been classified into three types: [ 6 ]
For the virtual object in a virtual world case, an object is called smart when it has the ability to describe its possible interactions. [ 4 ] This focuses on constructing a virtual world using only virtual objects that contain their own interaction information. There are four basic elements to constructing such a smart virtual object framework. [ 4 ]
Some versions of smart objects also include animation information in the object information, but this is not considered to be an efficient approach, since this can make objects inappropriately oversized. [ 9 ]
The terms smart, connected product or smart product can be confusing as it is used to cover a broad range of different products, ranging from smart home appliances (e.g., smart bathroom scales or smart light bulbs) to smart cars (e.g., Tesla). While these products share certain similarities, they often differ substantially in their capabilities. Raff et al. developed a conceptual framework that distinguishes different smart products based on their capabilities, which features 4 types of smart product archetypes (in ascending order of "smartness"). [ 10 ]
Smart, connected products have three primary components: [ 11 ] : 67
Each component expands the capabilities of one another resulting in "a virtuous cycle of value improvement". [ 11 ] First, the smart components of a product amplify the value and capabilities of the physical components. Then, connectivity amplifies the value and capabilities of the smart components. These improvements include:
The Internet of things is the network of physical objects that contain embedded technology to communicate and sense or interact with their internal states or the external environment. [ 13 ] The phrase "Internet of things" reflects the growing number of smart, connected products and highlights the new opportunities they can represent. The Internet, whether involving people or things, is a mechanism for transmitting information. What makes smart, connected products fundamentally different is not the Internet, but the changing nature of the 'things'. [ 11 ] : 66 Once a product is smart and connected to the cloud, the products and services will become part of an interconnected management solution. Companies can evolve from making products to offering more complex, higher-value offerings within a "system of systems". [ 14 ] [ 15 ] | https://en.wikipedia.org/wiki/Smart,_connected_products |
SmartFIX40 was a major transportation improvement project coordinated by the Tennessee Department of Transportation (TDOT) along Interstate 40 (I-40) in downtown Knoxville, Tennessee . The project, referred to as the most ambitious TDOT project at the time, [ 4 ] consisted of two separate phases and contracts, started construction in 2005 and was completed in June 2009 at a cost of $190 million. The second phase of the project required the closure of 1.5 miles of I-40 in downtown for a 14-month period, rerouting traffic onto the Interstate 640 (I-640) northern bypass of downtown. [ 5 ] At the time of its completion, SmartFIX40 was the largest awarded contract and construction project in Tennessee history, [ 6 ] and in retrospective documentation has received acclaim for its methods of accelerated construction and project delivery, including nationwide awards from the American Association of State Highway and Transportation Officials (AASHTO). [ 7 ] [ 8 ] [ 9 ]
The current freeway system in Knoxville originated from a 1945 plan commissioned by the city that recommended a series of controlled-access highways be constructed to relieve congestion on surface streets. [ 10 ] Planners intended these routes to be integrated into the then-planned nationwide freeway network that became the Interstate Highway System. This plan included three major arteries out of the city: an east–west route known as the West Expressway and East Expressway, and a north–south route known as the North–South Expressway. These three routes would come together at a junction near downtown. A northern beltway known as the Dutch Valley Loop would bypass downtown to the north. [ 11 ] [ 12 ] The plan was expanded in 1951 to include a square-shaped freeway loop around downtown known as the Downtown Loop. [ 13 ] The northern leg of this loop included the East–West Expressway, which became known as the Magnolia Avenue Expressway at this time, and was the first freeway in Tennessee. [ 14 ]
With the passage of the Federal-Aid Highway Act of 1956 most of the freeway system in Knoxville became part of the Interstate Highway System . [ 12 ] In 1957, the East Expressway became part of I-40, the North-South Expressway part of I-75, and the West Expressway a concurrency of both routes. [ 15 ] The following year, the Dutch Valley Loop became I-640, [ 16 ] and the Downtown Loop was designated SR 158. [ 17 ] [ 18 ] Part of the section of I-40 that was reconstructed in SmartFix40 was a long viaduct that was originally part of the eastern leg of the Magnolia Avenue Expressway, opened to traffic on December 10, 1955. [ 19 ] The remainder of the section, including the interchanges with SR 158 and Hall of Fame Drive, opened on April 11, 1967. [ 20 ] [ 21 ] The controlled-access section of SR 158 directly south of this interchange opened on June 23, 1964, [ 22 ] and the adjacent section to the south opened on September 15, 1973. [ 23 ] This route was renamed James White Parkway in 1991. [ 24 ] [ 25 ] The completion of I-40/75 in West Knoxville resulted in the commercial center of the city shifting to the west, and as a result, the original plans for a complete beltway around downtown were never realized. [ 26 ]
Within a few years of the completion of I-40 and I-75 in Knoxville, the city's highway network was already starting to suffer from congestion and a high accident rate. The interchange between I-40 and SR 158, which included left-hand entrance and exit ramps on I-40 westbound, became a particularly hazardous point. Reconstruction of this interchange was suggested as early as 1971. [ 27 ] Improving the section of I-40 west of where SmartFIX40 would take place became a more urgent priority as the city began to expand to the west. By the mid-1970s, the cloverleaf interchange between I-40 and I-75 was suffering from severe congestion, and had earned the nickname "Malfunction Junction". [ 28 ] Between May 1980 and March 1982, TDOT undertook a massive $250 million (equivalent to $668 million in 2023 [ 29 ] ) project that reconstructed Malfunction Junction, widened sections of I-40 and I-75 approaching downtown, reconstructed and improved interchanges on these Interstates, and completed I-640. [ 30 ] [ 31 ] [ 32 ] I-75 was also rerouted from downtown Knoxville onto the western leg of I-640 during this time, and the freeway section between I-40 and I-640 became I-275 . [ 33 ] This project was undertaken on an accelerated timescale in preparation for the 1982 World's Fair . [ 30 ] [ 31 ] The two-lane section of I-40 east of downtown was not included in this project, but nevertheless, TDOT began making preparations for improvements to this problematic spot.
TDOT embarked on a constructability analysis of the reconstruction of I-40 through downtown following a meeting with federal, state, and local leaders in February 2003. [ 34 ]
In April 2004, TDOT hosted a workshop meeting, dubbed the Accelerated Construction Technology Transfer (ACTT) workshop. The ACTT involved 82 transportation engineering professionals across 19 states discussing methods to satisfy TDOT's main goal of minimizing time during the construction phase of SmartFIX40. Several areas of concern were addressed such as structures such as retaining walls and bridges, materials, accelerated testing of materials, geotechnical issues , intelligent transportation systems , and constructability. The closure of I-40 through downtown Knoxville and utilizing the I-640 bypass as a detour route was recommended. [ 35 ]
Transportation engineering firm Wilbur-Smith Associates was selected by TDOT as the consultant tasked with the planning and design of the SmartFIX40 corridor. With initial projections of four years to complete SmartFIX40, Wilbur-Smith drafted a design and construction plan that reduced the project's schedule to just 14 months, saving more than $20 million in state funding. The outsourcing of the engineering work for SmartFIX40 also reduced the need for additional hiring of engineers at TDOT if the project was performed in-house. [ 36 ]
Due to its size, the SmartFIX40 project was broken into two phases, and awarded in separate contracts. Both phases would be awarded to BB SmartFIX Constructors, a joint venture of two heavy highway construction general contractors , Ray Bell & Associates of Brentwood, Tennessee , and Charles Blalock & Sons of Sevierville, Tennessee . The first contract was awarded at a price over $80 million, and the second at a price of $104.6 million.
For the first time in the history of TDOT, it would enforce a "no excuse" deadline, requiring both phases to be completed by BB SmartFIX Constructors at or before the contract's documented day of completion. If neither phase met its deadline, TDOT would issue fines of $25,000 per day as liquidated damages for late completion. [ 37 ] | https://en.wikipedia.org/wiki/SmartFIX40 |
Smart Bitrate Control , commonly referred to as SBC , was a technique for achieving greatly improved video compression efficiency using the DivX 3.11 Alpha video codec or Microsoft's proprietary MPEG4v2 video codec and the Nandub video encoder. SBC relied on two main technologies to achieve this improved efficiency: Multipass encoding and Variable Keyframe Intervals (VKI). SBC ceased to be commonly used after XviD and DivX development progressed to a point where they incorporated the same features that SBC pioneered [ citation needed ] and could offer even more efficient video compression without the need for a specialized application. Files created by SBC are compatible with DivX 3.11 Alpha and can be decoded by most codecs that support ISO MPEG4 video. [ citation needed ]
The DivX 3.11 Alpha codec allowed a user to control three aspects of the encoding process: the average bitrate , the keyframe interval, and whether the codec preserved smoother motion or more detailed images. DivX attempted to encode an entire movie at an average bitrate the user specified, varying the quality of the video in order to achieve the target bitrate . This meant that a simple section of video, such as a still image, would look very good, but complex video, such as an action scene, would look very bad. DivX's keyframe placement was also very simplistic: it would place keyframes only at the interval the user selected, every 300 frames (10 seconds at 30 frame/s) by default.
Nandub's multipass encoding encoded the video twice; in the first pass it would analyze the video (and write information to a log file), in the second it would actually produce the output file. Instead of varying the image quality to achieve an average bitrate, this allowed SBC to vary the bitrate to achieve an average quality, using higher bitrate for more complex scenes and lower bitrate for simpler scenes. VKI would place keyframes only where needed, such as at scene changes, rather than at a fixed interval. This significantly improved both the compression efficiency and visual quality of the resulting video. A VKI patch (called the DivX Scene Detect Patch) was also available for DivX to allow for VKI functionality without using Nandub, but it offered inferior performance compared to the VKI algorithms included in Nandub.
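The two ideas above can be sketched in miniature. This is not Nandub's actual algorithm (its internals are not described in the source); it is an illustrative toy in which a first pass records per-frame "complexity" scores, keyframes are placed where the score jumps sharply (a scene change), and the second pass spends bits in proportion to complexity so quality stays roughly constant:

```python
# Illustrative sketch (not Nandub's real implementation): a two-pass
# scheme with variable keyframe placement and complexity-driven bits.

def analysis_pass(frames, scene_threshold=0.5):
    """First pass: detect scene changes from per-frame complexity
    scores (numbers in [0, 1]) and return keyframe indices."""
    keyframes, prev = [0], frames[0]        # frame 0 is always a keyframe
    for i, cur in enumerate(frames[1:], start=1):
        if abs(cur - prev) > scene_threshold:   # big change => scene cut
            keyframes.append(i)
        prev = cur
    return keyframes

def allocate_bits(frames, total_bits):
    """Second pass: give each frame bits proportional to its complexity,
    hitting the average bitrate while keeping quality roughly uniform."""
    total = sum(frames)
    return [total_bits * c / total for c in frames]

complexity = [0.1, 0.1, 0.9, 0.8, 0.2]   # toy per-frame scores
print(analysis_pass(complexity))          # [0, 2, 4]: keyframes at cuts
print(allocate_bits(complexity, 1000))    # complex frames get more bits
```

A fixed-interval encoder would instead place keyframes at positions 0, 300, 600, … regardless of content, which is exactly the behavior VKI improved on.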
Nandub was a modification of the open source VirtualDub video encoder performed by Nando that incorporated SBC features. | https://en.wikipedia.org/wiki/Smart_Bitrate_Control |
Smart Cells are radio access nodes that provide wireless connectivity across multiple spectrum ranges and technologies. As of January 2014, Macrocells , Small Cells , and Wi-Fi connections were the primary means of data connectivity. For these types of cells, the spectrum utilized is static and is based on the antenna installed. A Smart Cell may transmit multiple frequencies and technologies which are controlled by the software and not the hardware (antenna).
Smart Cells are currently in the research and development stage, but they build on software-defined networks , which are proliferating within the current mobile network structure. [ 1 ]
It is possible that Smart Cells will lower capital and operational costs due to reduced equipment and fewer manual manipulations needed to modify cell site coverage. The term Smart Cell is also used to identify other technologies that enhance cell sites by reducing the need to manually manipulate radio access equipment or to add additional carriers at a radio access node . [ 2 ]
This technology-related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Smart_Cell |
Smart Communications Inc. , commonly referred to as Smart , is a wholly owned wireless communications and digital services subsidiary of PLDT Inc., [ 1 ] a telecommunications and digital services provider based in the Philippines. [ 2 ] As of November 2023, it is currently the largest mobile network with 55.2 million subscribers. [ 3 ]
Smart offers commercial wireless services through its 2G , 3G , 3.5G HSPA+ , 4G LTE , and LTE-A networks, with 5G currently being deployed in multiple locations in the Philippines. [ 4 ] [ 5 ] Smart's terrestrial wireless telephony service is being complemented by its satellite communication services Smart Sat and Marino which also serve the global maritime industry.
The company has introduced wireless offerings such as Smart Money, [ 6 ] a mobile electronic wallet that also enables its SMS-based money remittance service Smart Padala (now integrated with Maya ). It has also been recognized for introducing the world's first over-the-air electronic prepaid loading service called Smart Load. [ 7 ] One of its services, [ 8 ] PasaLoad, allows its users to pass phone credits to other Smart prepaid accounts through SMS.
Anticipating the liberalization of the telecommunications industry in the Philippines, a group of Filipino investors led by Orlando B. Vea and David T. Fernando organized Smart (then named Smart Information Technology, Inc.) on January 24, 1991. The company obtained its congressional franchise in April 1992 and was granted a provisional authority to operate a mobile cellular service in May 1993. [ 9 ] In December 1993, Smart commenced commercial operations of its cellular service. By then, Smart had drawn in partners: First Pacific , a Hong Kong–based conglomerate, through its Philippine flagship Metro Pacific Investments Corporation , and Nippon Telegraph and Telephone of Japan (NTT).
In compliance with the government's telecommunications program, Smart established a local exchange service in the cities and provinces assigned to it under the "service area scheme." The company also obtained licenses to provide international gateway, paging and inter-carrier transmission services. [ 10 ]
On March 24, 2000, PLDT completed its share-swap acquisition of Smart, making Smart a 100%-owned PLDT subsidiary. In 2003, Smart was named the best employer in the Philippines in a study conducted by the firm Hewitt Associates . [ 11 ]
In February 2011, [ 12 ] Smart unveiled the Netphone , its own line of Android-compliant smartphones designed for emerging markets at the GSMA Mobile World Congress in Barcelona, Spain . The Netphone was introduced as the world's first smartphone backed by an operator-managed platform.
On August 25, 2012, [ 13 ] Smart launched the Philippines' first 4G mobile broadband commercial service running on LTE technology. On April 13, 2016, [ 14 ] Smart introduced the first commercial LTE-A Service in Boracay , Aklan .
On June 13, 2016, Smart and its parent company PLDT unveiled their new logos and identity as part of the company's continuing digital pivot. [ 15 ]
In February 2017, [ 16 ] Smart and parent company PLDT signed a memorandum of understanding with China-based Huawei Technologies "to shape the strategic and commercial development of the 5G ecosystem in the Philippines".
On April 21, 2017, Philippine President Rodrigo Duterte signed Republic Act No. 10926 which renewed Smart's license for another 25 years. The law granted Smart a franchise to establish, maintain, lease and operate integrated telecommunications, computers, electronics, and stations throughout the Philippines. [ 17 ] [ 18 ]
In October 2018, petitioners asked the Supreme Court to stop Globe and Smart from using the 700 MHz band, [ 19 ] and Smart announced that it was working to fix its slow internet service. [ 20 ] On July 30, 2020, Smart activated its 5G mobile network, initially in the Makati Central Business District , Bonifacio Global City , Araneta City , SM Megamall and the SM Mall of Asia Bay Area. [ 21 ]
In Opensignal 's April report on internet speeds, Dito Telecommunity outpaced Smart Communications and Globe Telecom in the first quarter, with a download speed of 32 Mbps. Dito was also the fastest operator for 5G , averaging 302.9 Mbps against Smart's 143.3 Mbps. In subscriber reliability experience, it scored 835 out of 1,000, beating Smart's 771 and Globe's 748. [ 22 ]
The company has more than 66 million mobile subscribers as of 2022 under the brands Smart, Sun , and TNT , in addition to more than 3.9 million wireless broadband subscribers under the brands Smart Bro and Sun Wireless Broadband. [ 23 ]
By virtue of the SIM Registration Act and the deactivation of unregistered SIM cards, Smart had 50.0 million subscribers as of July 26, 2023. [ 24 ]
As of November 2023, Smart reported that it had 55.2 million subscribers following the implementation of SIM card registration, slightly more than the 54.7 million of close competitor Globe Telecom Inc., regaining its place as the mobile network with the largest subscriber base. [ 3 ]
Smart and its parent company PLDT launched Omega esports , a professional esports team for Dota 2 , Mobile Legends: Bang Bang , and Tekken 7 that competed in the 2019 The Nationals . [ 25 ] [ 26 ] [ 27 ] It is also a major sponsor of Mobile Legends: Bang Bang esports events in the Philippines such as the previous MSC 2019. [ 28 ] | https://en.wikipedia.org/wiki/Smart_Communications |
Smart Data Compression is a compressed GIS dataset format developed by Esri . It stores all types of feature data and attribute information together as a core data structure. The SDC format was used in Esri products such as ArcGIS StreetMap, ArcIMS Route Server, RouteMAP IMS, ArcGIS Business Analyst, and the ArcMobile SDK .
The SDC file format is no longer supported by Esri software since ArcMap 10.2, although later versions of ArcMap can convert SDC data to a geodatabase. [ 1 ] ArcGIS Pro does not support the SDC format at all. [ 2 ]
Compression ratios range from 8x to 20x depending on the data source and structure. SDC data is optimized for rapid map display, accurate routing, and high-performance geocoding.
Smart Data Compression is a proprietary format. The FAQ for ESRI's RouteServer IMS notes that additional datasets for that application must be prepared by an ESRI subsidiary at additional cost. [ 3 ] The SDC technology was developed by Software Technologies, an ESRI partner in Russia. [ 4 ] Tele Atlas and NAVTEQ provided North American commercial street datasets in SDC format. This data was prepared using the Data Development Kit Pro (DDK Pro), which ESRI licenses to select vendors. [ 5 ]
The term and concept of Smart Data were coined by Dr. James A. Rodgers, a professor at Indiana University of Pennsylvania, and James A. George in "Smart Data: Enterprise Performance Optimization Strategy". [ 5 ]
This product article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Smart_Data_Compression |
Smart Materials and Structures is a monthly peer-reviewed scientific journal covering technical advances in smart materials , systems and structures; including intelligent systems, sensing and actuation, adaptive structures, and active control.
The initial editors-in-chief starting in 1992 were Vijay K. Varadan ( Pennsylvania State University ), Gareth J. Knowles ( Grumman Corporation ), and Richard O. Claus ( Virginia Tech ); in 2008 Ephrahim Garcia ( Cornell University ) took over as editor-in-chief until 2014. Christopher S. Lynch ( University of California, Los Angeles ) assumed the position of editor-in-chief in 2015 and was succeeded by Alper Erturk ( Georgia Institute of Technology ) in 2023, who serves as the current editor-in-chief.
The journal is abstracted and indexed in:
According to the Journal Citation Reports , the journal has a 2023 impact factor of 3.7. [ 1 ] | https://en.wikipedia.org/wiki/Smart_Materials_and_Structures |
Smart Mobility Architecture ( SMARC ) is a computer hardware standard for computer-on-modules (COMs).
SMARC modules are specifically designed for the development of extremely compact low-power systems, such as mobile devices. [ 1 ]
The SMARC hardware specification V1.0 was published by the Standardization Group for Embedded Technologies (SGeT), which held its first meeting in 2012, headed by Engelbert Hörmannsdorfer. [ 2 ] The specification is freely available as a download on the SGeT website. Generally, SMARC modules are based on ARM architecture processors; however, they can also be fitted with other low-power system on a chip (SoC) architectures, such as those based on x86 processors. [ 3 ] Typically, a SMARC module's power requirement is in the range of a few watts. [ 4 ]
Computer-on-modules integrate the core functions of a bootable computer, as well as additional circuitry including DRAM, boot flash, voltage distribution, Ethernet and a display transmitter. The modules are deployed together with an application-specific carrier board, whose size and form can be defined to meet customer-specific requirements. The carrier board implements the required interfaces and can integrate further functionalities if required, such as audio codecs, touch controllers and wireless communication interfaces.
The SMARC specification outlines the dimensions of the module and the positioning of the anchor points, as well as the connector to the carrier board and the implemented interfaces with their pin-out. The pin-out is optimized for ARM and low-power SoC interfaces and is distinguished from classical PC interfaces by its target-oriented focus on low-power and mobile applications.
SMARC is based on the ultra-low power (ULP-COM) form factor which was introduced by the companies Kontron and Adlink in February 2012. [ 5 ] During the specification process by the SGeT the standard was renamed to SMARC. SGET approved the 1.0 specification in December 2012. [ 1 ]
SMARC Evolution and Adoption
Over time, the SMARC (Smart Mobility ARChitecture) standard has evolved, with the latest version being 2.x. Initially, the standard was supported primarily by Kontron and Adlink, but adoption of SMARC modules has grown significantly as additional providers have entered the market. Notable companies now offering SMARC modules include Toradex , [ 6 ] alongside the original pioneers, Kontron and Adlink.
SMARC defines two module sizes: a short module measuring 82 mm × 50 mm and a full-size module measuring 82 mm × 80 mm.
SMARC Computer-on-Modules have 314 card edge contacts on the printed circuit board (PCB) of the module, which plugs into a low-profile connector on the carrier board. In most cases, the connector has a construction height of 4.3 mm. The same connector is also used for Mobile PCI Express Module 3.0 graphics cards, which naturally have completely different pin assignments.
Signal transmission is carried out via a total of 314 pins. 33 of these are reserved for power supply and grounding, so that a total of 281 signal lines are effectively available on SMARC. ARM- and SoC-typical energy-saving interfaces are included, such as parallel LCD for display connection, the Mobile Industry Processor Interface (MIPI) for cameras, the Serial Peripheral Interface (SPI) for general peripheral connection, I²S for audio, and I2C. Besides these, classical computer interfaces such as USB , SATA and PCI Express are also defined.
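The pin budget above is simple arithmetic, sketched here as a consistency check; the 20-pin Alternate Function Block count comes from version 2.0 of the specification, and the derived fixed-signal figure is an inference, not a number stated in the spec.

```python
# SMARC edge-connector pin budget (figures from the specification).
TOTAL_CONTACTS = 314        # card edge contacts on the module PCB
POWER_AND_GROUND = 33       # reserved for power supply and grounding

SIGNAL_LINES = TOTAL_CONTACTS - POWER_AND_GROUND
assert SIGNAL_LINES == 281  # effectively available signal lines

AFB_PINS = 20               # Alternate Function Block, unassigned in spec 2.0
FIXED_SIGNALS = SIGNAL_LINES - AFB_PINS  # derived: signals with fixed roles
assert FIXED_SIGNALS == 261
```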
In version 2.0 of the SMARC specification, not all of the 314 signal lines are assigned to fixed I/Os. The Alternate Function Block (AFB) keeps free pins available for different requirements. This ensures that the SMARC specification can flexibly accommodate upcoming technical developments that are not foreseeable today, while remaining fully compatible with previous designs. On the one hand, extended versions of the SMARC specification can assign new standard functions to these 20 AFB signal lines. On the other hand, the SMARC specification 1.0 lists the Media Oriented Systems Transport (MOST) bus, dual Gigabit Ethernet , SuperSpeed USB, and industrial network protocols as interfaces that might be assigned to the AFB. | https://en.wikipedia.org/wiki/Smart_Mobility_Architecture |
The Smart Personal Objects Technology ( SPOT ) is a discontinued initiative by Microsoft to create intelligent and personal home appliances , consumer electronics , and other objects through new hardware capabilities and software features.
Development of SPOT began as an incubation project initiated by the Microsoft Research division. [ 1 ] [ 2 ] [ 3 ] SPOT was first announced by Bill Gates at the COMDEX computer exposition event in 2002, [ 4 ] and additional details were revealed by Microsoft at the 2003 Consumer Electronics Show, where Gates demonstrated a set of prototype smartwatches —the first type of device that would support the technology. [ 1 ] [ 5 ] Rather than conventional forms of connectivity such as 3G or Wi-Fi , SPOT relied on FM broadcasting subcarrier transmission as its method of data distribution. [ 6 ] [ 7 ]
While several types of electronics would eventually support the technology throughout its lifecycle, SPOT was considered a commercial failure . Reasons that have been cited for its failure include its subscription-based business model , support limited to North America , the emergence of more efficient and popular forms of data distribution, and mobile feature availability that surpassed what SPOT offered. [ 7 ]
Development of SPOT began as an incubation project led by Microsoft engineer, Bill Mitchell, and initiated by the Microsoft Research division. [ 1 ] [ 2 ] [ 3 ] Mitchell would enlist the help of Larry Karr, president of SCA Data Systems, to develop the project. Karr had previously worked in the 1980s to develop technology for Atari that would distribute games in a manner distinct from the company's competitors; Karr proposed FM broadcasting subcarrier transmission as a method of distribution, technology which would also be used by Microsoft's SPOT. [ 6 ] [ 8 ] Microsoft Research and SCA Data Systems would ultimately develop the DirectBand subcarrier technology for SPOT. [ 9 ] [ 10 ] National Semiconductor would aid in the development of device chipsets , which would feature an ARM7 CPU and ROM , SRAM , and a 100 MHz RF receiver chip . [ 2 ]
SPOT was unveiled by Bill Gates at the annual COMDEX computer exposition event in fall of 2002. [ 4 ] Gates stated that "new devices and technologies will help bring about the next computing revolution" and demonstrated refrigerator magnets that displayed the current time and sports scores , and an alarm clock that could display a list of upcoming appointments, traffic updates , and weather forecasts . [ 11 ] [ 12 ] [ 13 ]
At the Consumer Electronics Show of 2003, Microsoft announced that wristwatches would be the first type of device to utilize the technology in a partnership with watch manufacturers Citizen Watch Co. , Fossil , and Suunto . [ 1 ] [ 5 ] [ 15 ] [ 16 ] Bill Gates also demonstrated a set of prototype smart watches. [ 17 ] SPOT was not Microsoft's first foray into the smartwatch business—the company previously co-developed the Timex Datalink with Timex in 1994. [ 18 ] During CES, Microsoft claimed that the first SPOT-based smartwatches would be released in the fall of that year; [ 15 ] the company would also release a promotional video that displayed an estimated delivery time of fall 2003, [ 19 ] but the first devices would be delayed until the beginning of 2004. [ 20 ] [ 21 ] [ 22 ]
At the Windows Hardware Engineering Conference of 2003, Gates unveiled a new set of hardware-based navigational controls codenamed XEEL, designed to create a consistent navigation experience across Windows -based devices, such as smart phones , tablet PCs , and those powered by SPOT. [ 23 ] [ 24 ] Microsoft intended for XEEL to create a consistent navigation experience across hardware devices that equaled the software interface navigation consistency introduced by the mouse scroll wheel . [ 25 ]
In June 2003, Microsoft unveiled its MSN Direct wireless service developed specifically for SPOT, which would be made available across North America. The company stated that the service would enable the delivery of personalized information on devices and, as an example of this functionality, would allow users to receive messages sent from MSN Messenger or calendar appointment reminders from Microsoft Outlook . [ 26 ] [ 27 ] MSN Direct would use a subscription-based business model, available through monthly or yearly service plans. [ 26 ] [ 28 ] MSN Direct relied on the DirectBand subcarrier technology developed by Microsoft in conjunction with SCA Data Systems. [ 9 ]
The first devices to make use of SPOT were released in 2004 by Fossil and Suunto. [ 10 ] [ 29 ] Tissot would later introduce the first compatible watch to feature a touchscreen , [ 30 ] [ 31 ] and Swatch would release the first compatible watch largely tailored towards younger consumers. [ 32 ] [ 33 ] As smartwatches were the first type of device to make use of the technology, they became the de facto device type that represented it.
In 2006, Oregon Scientific released the second type of SPOT device, a weather station that displayed regional weather forecasts and other various types of information. [ 34 ] A second generation of smartwatches was also released, and were designed to address the shortcomings observed in first generation models. [ 35 ] Later that year, Melitta released the third type of device to utilize the technology: a coffee maker that displayed weather forecasts on an electronic visual display. [ 36 ] Garmin released the first SPOT-compatible GPS navigation units in 2007. [ 37 ]
In early 2008, Microsoft announced that MSN Direct would be available for Windows Mobile , [ 38 ] [ 39 ] and in early 2009, the service would receive additional location-based enhancements. [ 40 ]
Production of SPOT watches ceased in 2008. [ 10 ] [ 41 ] [ 42 ] [ 43 ] [ 44 ] In 2009, Microsoft announced that it would discontinue the MSN Direct service at the beginning of 2012. [ 45 ] The company stated that this decision was due to decreased demand for the service and because of the emergence of more efficient and popular forms of data distribution, such as Wi-Fi . [ 45 ] [ 46 ] The MSN Direct service continued to support existing SPOT devices until transmissions ceased on January 1, 2012. [ 10 ] [ 46 ] [ 47 ]
SPOT extended the functionality of traditional devices to include features not originally envisaged for them; a SPOT-powered coffeemaker, for example, would be able to display information such as weather forecasts on an electronic visual display. [ 7 ] Smartwatches featured digital watch displays, referred to as Channels , that presented information in a manner that could be customized by a user—a user could also specify the default channel to be displayed; this feature was functionally analogous to a home screen commonly seen in mobile operating systems . Additional channels could be downloaded from a specialized website, [ 1 ] [ 3 ] and a Glance feature would allow a user to cycle through downloaded information. [ 1 ] [ 48 ]
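The Channel and Glance behavior described above can be sketched as a small data structure: a list of downloadable channels with a user-selected default and a cycling action. The class names, fields, and channel contents here are illustrative assumptions, not the actual SPOT firmware API.

```python
from dataclasses import dataclass, field


@dataclass
class Channel:
    name: str
    content: str


@dataclass
class SpotWatch:
    channels: list = field(default_factory=list)
    default_index: int = 0   # user-chosen channel shown by default
    _current: int = 0

    def glance(self):
        """Cycle to the next downloaded channel, wrapping around."""
        self._current = (self._current + 1) % len(self.channels)
        return self.channels[self._current]


watch = SpotWatch(channels=[Channel("Time", "10:42"),
                            Channel("Weather", "Sunny, 21°C"),
                            Channel("News", "Headlines...")])

assert watch.glance().name == "Weather"
assert watch.glance().name == "News"
assert watch.glance().name == "Time"   # wraps back to the default channel
```

Downloading an additional channel would simply append to `channels`, which is consistent with the home-screen analogy: each channel is an independently updatable view the user cycles through.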
Manufacturers could also add their own features to SPOT-based devices; [ 7 ] as an example, a manufacturer could create its own smartwatch channel in order to distinguish its product from a competitor's product. [ 1 ] Each SPOT-based device included a unique identification number used to enable secure authentication and encryption of DirectBand signals. Microsoft also reportedly considered an alarm function for SPOT-based smartwatches that would activate in the event of theft. [ 1 ]
SPOT relied on the .NET Micro Framework for the creation and management of embedded device firmware . [ 49 ] This technology would later be used for the Windows SideShow feature introduced in Windows Vista , which shares design similarities with SPOT. [ 49 ] [ 50 ] [ 51 ] In 2007, five years after SPOT was announced, Microsoft released the first software development kit for the .NET Micro Framework. [ 52 ] [ 53 ] | https://en.wikipedia.org/wiki/Smart_Personal_Objects_Technology |