Synthetic microbial consortia or synthetic microbial communities (commonly called SynComs) are multi-population systems that can contain a diverse range of microbial species and are adjustable to serve a variety of industrial and ecological interests. [ 1 ] For synthetic biology , consortia extend the ability to engineer novel cell behaviors to the population level.
Consortia are the norm rather than the exception in nature, and generally prove more robust than monocultures. [ 2 ] Just over 7,000 species of bacteria have been cultured and identified to date. Many of the estimated 1.2 million bacterial species that remain have yet to be cultured and identified, in part because they cannot be cultured axenically . [ 3 ] Evidence of symbiosis between microbes strongly suggests it was a necessary precursor to the evolution of land plants and to their transition from algal communities in the sea to land. [ 4 ] When designing synthetic consortia, or editing naturally occurring consortia, synthetic biologists track pH, temperature, initial metabolic profiles, incubation times, growth rate, and other pertinent variables. [ 2 ]
One of the more salient applications of engineering behaviors and interactions between microbes in a community is the ability to combine or even switch metabolisms. The combination of autotrophic and heterotrophic microbes allows the unique possibility of a self-sufficient community that may produce desired biofuels to be collected. [ 2 ] Co-culture dyads of autotrophic Synechococcus elongatus and heterotrophic Escherichia coli were found to be able to grow synchronously when the strain of S. elongatus was transformed to include a gene for sucrose export. [ 5 ] The commensal combination of the sucrose-producing cyanobacteria with the modified E. coli metabolism may allow for a diverse array of metabolic products such as various butanol biofuels, terpenoids, and fatty-acid derived fuels. [ 6 ]
Including a heterotroph also offers a solution to the problem of contamination when producing carbohydrates, as competition may limit the viability of contaminant species. [ 2 ] In isolated systems, contamination can restrict the feasibility of large-scale biofuel operations such as algae ponds, where it can significantly reduce the desired output. [ 7 ]
Through interactions between Geobacter spp. and methanogens from the soil in a rice paddy field, it was discovered that the use of interspecies electron transfer stimulated the production of methane. [ 8 ] Considering the abundance of conductive metals in soils and the use of methane (natural gas) as a fuel, this may lead to a bioenergy-producing process. [ 8 ]
The extensive range of microbial metabolism offers opportunities to those interested in bioremediation . Using consortia, synthetic biologists have enhanced the efficiency of bacteria that excrete bio-surfactants and degrade hydrocarbons, with the aim of cleaning oil contamination in Assam, India. [ 9 ] The experiment tested combinations of five native, naturally occurring hydrocarbon-degrading bacteria and analyzed the different cocktails to see which degraded poly-aromatic hydrocarbons best. [ 9 ] The combination of Bacillus pumilis KS2 and Bacillus cereus R2 was the most effective, degrading 84.15% of the total petroleum hydrocarbons (TPH) after 5 weeks. [ 9 ]
Further remediation efforts have turned to the issue of agricultural pesticide run-off. Pesticides vary in class and function, and in high concentrations often pose highly toxic environmental risks. [ 10 ] Of the more than 500 types of pesticides in current use, two serious issues are their general lack of biodegradability and their unpredictable behavior in the environment. [ 11 ] In Kyrgyzstan, researchers assessed soil around a pesticide dump and discovered not only that the soil had poor microflora diversity, but that some of the species present used metabolic pathways to digest the pesticides. [ 10 ] The two most efficient species found were Pseudomonas fluorescens and Bacillus polymyxa , with B. polymyxa degrading 48.2% of the pesticide Aldrin after 12 days. [ 10 ] However, when these strains were combined with each other and with some other less efficient yet native bacteria, pesticide degradation increased to 54.0% under the same conditions. [ 10 ] Doolatkeldieva et al. discussed their findings, saying
"It is consequently possible that the degrading capacity of the bacteria could be increased only through co-cultivation, which shows that these bacteria naturally coexist and are dependent on each other for the utilization of environmental substances. In the oxidation and hydrolysis pathways of pesticide degradation, each bacterium can produce metabolites that will be utilized by the enzyme system of the next bacterium". [ 10 ]
In answer to the increasing use of non-biodegradable, oil-based plastics and their subsequent accumulation as waste, scientists have developed biodegradable and compostable alternatives, often called bioplastics . [ 12 ] However, not all biologically created plastics are necessarily biodegradable, which can be a source of confusion. [ 13 ] It is therefore important to distinguish between the two types: biodegradable bioplastics, which can be degraded by some microflora, and merely bio-based plastics, which come from a renewable source but require more effort to dispose of. [ 13 ]
One of the bioplastics of interest is Polyhydroxybutyrate , abbreviated PHB. PHB is a biodegradable bioplastic with applications in food packaging because it is non-toxic. [ 14 ] Repurposed E. coli , as well as Halomonas boliviensis , have been shown to produce PHB. [ 15 ] [ 16 ] A co-culture of S. elongatus and H. boliviensis producing PHB from carbon dioxide has proven stable and continually productive for 5 months without the aid of antibiotics. [ 15 ] | https://en.wikipedia.org/wiki/Synthetic_microbial_consortia
Synthetic molecular motors are molecular machines capable of continuous directional rotation under an energy input. [ 2 ] Although the term "molecular motor" has traditionally referred to a naturally occurring protein that induces motion (via protein dynamics ), some groups also use the term when referring to non-biological, non-peptide synthetic motors. Many chemists are pursuing the synthesis of such molecular motors.
The basic requirements for a synthetic motor are repetitive 360° motion, the consumption of energy and unidirectional rotation. [ citation needed ] The first two efforts in this direction, the chemically driven motor by Dr. T. Ross Kelly of Boston College with co-workers and the light-driven motor by Ben Feringa and co-workers, were published in 1999 in the same issue of Nature .
As of 2020, the smallest atomically precise molecular machine has a rotor that consists of four atoms. [ 3 ]
An example of a prototype for a synthetic chemically driven rotary molecular motor was reported by Kelly and co-workers in 1999. [ 5 ] Their system is made up of a three-bladed triptycene rotor and a helicene , and is capable of performing a unidirectional 120° rotation.
This rotation takes place in five steps. The amine group present on the triptycene moiety is converted to an isocyanate group by condensation with phosgene ( a ). Thermal or spontaneous rotation around the central bond then brings the isocyanate group into proximity with the hydroxyl group located on the helicene moiety ( b ), thereby allowing these two groups to react with each other ( c ). This reaction irreversibly traps the system as a strained cyclic urethane that is higher in energy and thus energetically closer to the rotational energy barrier than the original state. Further rotation of the triptycene moiety therefore requires only a relatively small amount of thermal activation to overcome this barrier, thereby releasing the strain ( d ). Finally, cleavage of the urethane group restores the amine and alcohol functionalities of the molecule ( e ).
The result of this sequence of events is a unidirectional 120° rotation of the triptycene moiety with respect to the helicene moiety. Additional forward or backward rotation of the triptycene rotor is inhibited by the helicene moiety, which serves a function similar to that of the pawl of a ratchet . The unidirectionality of the system results from both the asymmetric skew of the helicene moiety and the strain of the cyclic urethane formed in c . This strain can only be lowered by the clockwise rotation of the triptycene rotor in d , as both counterclockwise rotation and the inverse process of d are energetically unfavorable. In this respect the preference for the rotation direction is determined by both the positions of the functional groups and the shape of the helicene, and is thus built into the design of the molecule rather than dictated by external factors.
The motor by Kelly and co-workers is an elegant example of how chemical energy can be used to induce controlled, unidirectional rotational motion, a process that resembles the consumption of ATP in organisms to fuel numerous processes. However, it suffers from a serious drawback: the sequence of events that leads to 120° rotation is not repeatable. Kelly and co-workers therefore searched for ways to extend the system so that this sequence can be carried out repeatedly. Their attempts to accomplish this objective were not successful, and the project has since been abandoned. [ 6 ] In 2016 David Leigh 's group invented the first autonomous chemically-fuelled synthetic molecular motor. [ 7 ]
Some other examples of synthetic chemically driven rotary molecular motors that all operate by sequential addition of reagents have been reported, including the use of the stereoselective ring opening of a racemic biaryl lactone by the use of chiral reagents, which results in a directed 90° rotation of one aryl with respect to the other aryl. Branchaud and co-workers have reported that this approach, followed by an additional ring closing step, can be used to accomplish a non-repeatable 180° rotation. [ 8 ]
Feringa and co-workers used this approach in their design of a molecule that can repeatably perform 360° rotation. [ 9 ] The full rotation of this molecular motor takes place in four stages. In stages A and C rotation of the aryl moiety is restricted, although helix inversion is possible. In stages B and D the aryl can rotate with respect to the naphthalene with steric interactions preventing the aryl from passing the naphthalene. The rotary cycle consists of four chemically induced steps which realize the conversion of one stage into the next. Steps 1 and 3 are asymmetric ring opening reactions which make use of a chiral reagent in order to control the direction of the rotation of the aryl. Steps 2 and 4 consist of the deprotection of the phenol , followed by regioselective ring formation.
In 1999 the laboratory of Prof. Dr. Ben L. Feringa at the University of Groningen , The Netherlands , reported the creation of a unidirectional molecular rotor. [ 10 ] Their 360° molecular motor system consists of a bis- helicene connected by an alkene double bond displaying axial chirality and having two stereocenters .
One cycle of unidirectional rotation takes 4 reaction steps. The first step is a low temperature endothermic photoisomerization of the trans ( P , P ) isomer 1 to the cis ( M , M ) 2 where P stands for the right-handed helix and M for the left-handed helix. In this process, the two axial methyl groups are converted into two less sterically favorable equatorial methyl groups.
By increasing the temperature to 20 °C these methyl groups convert back exothermically to the ( P , P ) cis axial groups ( 3 ) in a helix inversion . Because the axial isomer is more stable than the equatorial isomer, reverse rotation is blocked. A second photoisomerization converts ( P , P ) cis 3 into ( M , M ) trans 4 , again with accompanying formation of sterically unfavorable equatorial methyl groups. A thermal isomerization process at 60 °C closes the 360° cycle back to the axial positions.
A major hurdle to overcome is the long reaction time for complete rotation in these systems, which compares poorly with the rotation speeds displayed by motor proteins in biological systems. In the fastest system to date, with a fluorene lower half, the half-life of the thermal helix inversion is 0.005 seconds. [ 11 ] This compound is synthesized using the Barton-Kellogg reaction . In this molecule the slowest step in its rotation, the thermally induced helix inversion, is believed to proceed much more quickly because the larger tert -butyl group makes the unstable isomer even less stable than when the methyl group is used. This is because the unstable isomer is more destabilized than the transition state that leads to helix inversion. The different behaviour of the two molecules is illustrated by the fact that the half-life of the compound with a methyl group instead of a tert -butyl group is 3.2 minutes. [ 12 ]
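The speed difference is easy to quantify. The following is a minimal sketch, assuming the thermal helix inversion is the rate-limiting step and treating it as a first-order process whose rate constant k = ln(2)/t½ bounds the achievable rotation rate:

```python
# Back-of-the-envelope comparison of the two motors described above.
# Assumes the thermal helix inversion is rate-limiting and first-order,
# so k = ln(2) / t_half bounds the rotation rate.
import math

def rate_constant(t_half_seconds: float) -> float:
    """First-order rate constant from a half-life."""
    return math.log(2) / t_half_seconds

for label, t_half in [("fluorene lower half (tert-butyl)", 0.005),
                      ("methyl analogue", 3.2 * 60)]:
    k = rate_constant(t_half)
    print(f"{label}: t_half = {t_half:g} s -> k ~ {k:.3g} per second")

# The tert-butyl motor's inversion is ~38,000 times faster, matching
# the ratio of the two half-lives (192 s / 0.005 s).
```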
The Feringa principle has been incorporated into a prototype nanocar . [ 13 ] The car synthesized has a helicene-derived engine with an oligo (phenylene ethynylene) chassis and four carborane wheels and is expected to be able to move on a solid surface with scanning tunneling microscopy monitoring, although so far this has not been observed. The motor does not perform with fullerene wheels because they quench the photochemistry of the motor moiety . Feringa motors have also been shown to remain operable when chemically attached to solid surfaces. [ 14 ] [ 15 ] The ability of certain Feringa systems to act as an asymmetric catalyst has also been demonstrated. [ 16 ] [ 17 ]
In 2016, Feringa was awarded the Nobel Prize in Chemistry for his work on molecular motors.
A single-molecule electrically operated motor made from a single molecule of n -butyl methyl sulfide (C 5 H 12 S) has been reported. The molecule is adsorbed onto a copper (111) single-crystal piece by chemisorption . [ 18 ] | https://en.wikipedia.org/wiki/Synthetic_molecular_motor |
Synthetic morphology is a sub-discipline of the broader field of synthetic biology .
In standard synthetic biology , artificial gene networks are introduced into cells , inputs (e.g. chemicals, light) are applied to those networks, and the networks perform logical operations on them and output the result of the operation as the activity of an enzyme or as the amount of green fluorescent protein . Using this approach, synthetic biologists have demonstrated the ability of their gene networks to perform Boolean computation , to hold a memory, and to generate pulses and oscillation .
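To make the idea of a gene network performing a logical operation concrete, consider a circuit wired as an AND gate: the reporter is expressed only when two chemical inducers are both present. The sketch below is a toy illustration only; the function name and threshold are invented for the example and do not correspond to any specific published circuit.

```python
# Toy model of a synthetic gene circuit acting as a Boolean AND gate:
# GFP is produced only when both chemical inducers exceed an
# activation threshold. Names and threshold values are illustrative.

def and_gate_gfp(inducer_a: float, inducer_b: float,
                 threshold: float = 1.0) -> float:
    """Normalized GFP output: high only if both inputs exceed the
    activation threshold (Boolean AND behaviour)."""
    a_on = inducer_a >= threshold
    b_on = inducer_b >= threshold
    return 1.0 if (a_on and b_on) else 0.0

for a, b in [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (2.0, 2.0)]:
    print(f"inputs=({a}, {b}) -> GFP={and_gate_gfp(a, b)}")
```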
Synthetic morphology extends this idea by adding output modules that alter the shape or social behaviour of cells in response to the state of the artificial gene network. For example, instead of just making a fluorescent protein, a gene network may switch on an adhesion molecule so that cells stick to each other, or activate a motility system so that cells move. It has been argued that the formation of properly-shaped tissues by mammalian cells involves mainly a set of about ten basic cellular events (cell proliferation, cell death, cell adhesion, differential adhesion, cell de-adhesion, cell fusion, cell locomotion, chemotaxis, haptotaxis, cell wedging). [ 1 ] Broadly similar lists exist for tissues of plants, fungi etc. In principle, therefore, a fairly small set of output modules might allow biotechnologists to 'program' cells to produce artificially-designed arrangements, shapes and eventually 'tissues'.
The term synthetic morphology was introduced to the peer reviewed scientific literature in 2008 [ 1 ] and is now becoming more widely used both in peer-reviewed literature [ 2 ] and texts. [ 3 ] | https://en.wikipedia.org/wiki/Synthetic_morphology |
Synthetic musks are a class of synthetic aroma compounds designed to emulate the scent of deer musk and other animal musks ( castoreum and civet ). Synthetic musks have a clean, smooth and sweet scent lacking the fecal notes of animal musks. They are used as flavorings and fixatives in cosmetics , detergents , perfumes and foods , supplying the base note of many perfume formulas. Most musk fragrance used in perfumery today is synthetic. [ 1 ]
Synthetic musks in a narrower sense are chemicals modeled after the main odorants in animal musk: muscone in deer musk, and civetone in civet. Muscone and civetone are macrocyclic ketones. Other structurally distinct compounds with similar odors are also known as musks.
An artificial musk was obtained by Albert Baur in 1888 by condensing toluene with isobutyl bromide in the presence of aluminium chloride , and nitrating the product. It was discovered accidentally as a result of Baur's attempts at producing a more effective form of trinitrotoluene (TNT). It appears that the odour depends upon the symmetry of the three nitro groups.
The creation of this class of musks was largely prompted by the need to eliminate the nitro functional group from nitro-musks, owing to their photochemical reactivity and their instability in alkaline media. This was shown to be possible through the discovery of ambroxide , a non-nitro aromatic musk, which promoted research into the development of nitro-free musks. This led to the eventual discovery of phantolide, so named because Givaudan commercialized it without initial knowledge of its chemical structure (elucidated 4 years later). While weaker in smell strength, the performance and stability of this compound class in harsh detergents led to its common use, which spurred further development of other polycyclic musks including Galaxolide . [ 5 ]
Macrocyclic musks are a class of artificial musk consisting of a single ring composed of more than 6 carbons (often 10–15). Of all artificial musks, these most resemble the primary odoriferous compound of Tonkin musk in their "large ringed" structure. While the macrocyclic musks extracted from plants consist of large ringed lactones , all animal-derived macrocyclic musks are ketones . [ 5 ]
Although muscone, the primary macrocyclic compound of musk, was long known, it was only in 1926 that Leopold Ruzicka was able to synthesize this compound, in very small quantities. Despite this and the discovery of other pathways for the synthesis of macrocyclic musks, compounds of this class were not commercially produced and commonly used until the late 1990s, owing to difficulties in their synthesis and the consequently higher price. [ 6 ]
Alicyclic musks, otherwise known as cycloalkyl ester or linear musks, are a relatively novel class of musk compounds. The first compound of this class was introduced in 1975 with Cyclomusk, though similar structures had been noted earlier in citronellyl oxalate and Rosamusk. [ 7 ] Alicyclic musks differ dramatically in structure from previous musks (aromatic, polycyclic, macrocyclic) in that they are modified alkyl esters. [ 8 ] Although they were discovered before 1980, it was only in 1990, with the discovery and introduction of Helvetolide at Firmenich, that a compound of this class was produced at a commercial scale. [ 7 ] Romandolide, a more ambrette-like and less fruity alicyclic musk than Helvetolide, was introduced ten years later. [ 8 ]
Synthetic musks are lipophilic compounds and tend to deposit and persist in fat tissues. [ 9 ] Nitromusks and polycyclic musks – having been used for 100 years – have low biodegradability and accumulate in the environment. [ citation needed ] | https://en.wikipedia.org/wiki/Synthetic_musk |
Synthetic oil is a lubricant consisting of chemical compounds that are artificially modified or synthesised. Synthetic oil is used as a substitute for petroleum-refined oils when operating at extreme temperatures, in metal stamping to provide environmental and other benefits, and to lubricate pendulum clocks . There are various types of synthetic oils. Advantages of using synthetic motor oils include better low- and high-temperature viscosity performance, a better (higher) viscosity index (VI), and chemical and shear stability, while the disadvantages are that synthetics are substantially more expensive (per volume) than mineral oils and have potential decomposition problems.
Synthetic oil lubricant comprises chemical compounds that are artificially modified or synthesised. Synthetic lubricants can be manufactured using chemically modified petroleum components rather than whole crude oil , but can also be synthesized from other raw materials. The base material, however, is still overwhelmingly crude oil that is distilled and then modified physically and chemically. The actual synthesis process and composition of additives is generally a commercial trade secret and will vary among producers. [ 1 ]
Some synthetic oils are made from Group III base stock , some from Group IV, while other synthetic oils may be a blend of the two. Mobil sued Castrol, and Castrol prevailed in showing that its Group III base stock oil was changed enough that it qualified as full synthetic. Since then the American Petroleum Institute (API) has removed all references to synthetic in its documentation regarding standards. "Full synthetic" is a marketing term and is not a measurable quality.
Poly-alpha-olefin (poly-α-olefin, PAO) is a non-polar polymer made by polymerizing an alpha-olefin. PAOs are designated as API Group IV and are 100% synthetic chemical compounds. PAO is a specific type of olefin (organic compound) used as a base stock in the production of most synthetic lubricants. [ 5 ] An alpha-olefin (or α-olefin) is an alkene where the carbon-carbon double bond starts at the α-carbon atom, i.e. the double bond is between the #1 and #2 carbons in the molecule. [ 6 ]
Group V base oils are defined by the API as any type of base oil other than the mineral oils of Groups I–III or the PAO lubricants of Group IV.
Esters are the most famous synthetics in Group V, which are 100% synthetic chemical compounds consisting of a carbonyl adjacent to an ether linkage. They are derived by reacting an oxoacid with a hydroxyl compound such as an alcohol or phenol . Esters are usually derived from an inorganic acid or organic acid in which at least one -OH (hydroxyl) group is replaced by an -O-alkyl ( alkoxy ) group, most commonly from carboxylic acids and alcohols. That is to say, esters are formed by condensing an acid with an alcohol.
Many chemically different "esters"—due to their polarity and usually excellent lubricity—are used for various reasons as either "additives" or "base stocks" for lubricants. [ 6 ]
Polyalkylene glycols (PAGs) offer properties that include high lubricity, polarity, low traction, a high viscosity index, controlled quenching speeds, good temperature stability and low wear. They are available in both water-soluble and insoluble forms. [ 9 ]
PAGs are commonly used in quenching fluids, metalworking fluids, gear oils, chain oils, food-grade lubricants and as lubricants in HFC type hydraulics and gas compressor equipment. [ 9 ] PAG lubricants are used by the two largest U.S. air compressor OEMs in rotary screw air compressors. [ 7 ] PAG oils of different viscosity grades (usually either ISO VG 46 or ISO VG 100) are often used as compressor lubricants for automotive air conditioning systems employing low global warming potential refrigerants.
Semi-synthetic oils (also called "synthetic blends") are a mixture of mineral oil and synthetic oil, which are engineered to have many of the benefits of full synthetic oil without the cost. Motul introduced the first semi-synthetic motor oil in 1966. [ 20 ]
Lubricants whose synthetic base stock content is even lower than 30%, but which carry high-performance additives consisting of esters, can also be considered synthetic lubricants. In general, the ratio of synthetic base stock is used to define commodity codes in customs declarations for tax purposes.
API Group II- and Group III-type base stocks help to formulate more economical semi-synthetic lubricants. API Group I-, II-, II+-, and III-type mineral base oil stocks are widely used in combination with additive packages, performance packages, and ester and/or API Group IV poly-alpha-olefins in order to formulate semi-synthetic-based lubricants. API Group III base oils are sometimes considered fully synthetic, but they are still classified as top-tier mineral base stocks. A synthetic or synthesized material is one that is produced by combining or building individual units into a unified entity. Synthetic base stocks as described above are man-made and tailored to have a controlled molecular structure with predictable properties, unlike mineral base oils, which are complex mixtures of naturally occurring hydrocarbons and paraffins. [ 21 ] [ 22 ]
The advantages of using synthetic motor oils include better low- and high-temperature viscosity performance at service-temperature extremes, [ 23 ] a better (higher) viscosity index (VI), [ 24 ] and chemical and shear stability. [ 25 ] This also helps to decrease evaporative loss. [ 24 ] [ 26 ] [ 27 ] [ 28 ] Synthetic oil is resistant to oxidation, thermal breakdown and oil sludge problems, [ 29 ] and provides extended drain intervals, with the environmental benefit of less used-oil waste. It provides better lubrication in extreme cold conditions. [ 24 ] The use of synthetic oils promises a longer engine life, [ 24 ] with superior protection against "ash" and other deposit formation in engine hot spots (in particular in turbochargers and superchargers), less oil burn-off, and reduced chances of damaging oil-passageway clogging. [ 23 ] Automobile performance also improves through a net increase in horsepower and torque due to less internal drag on the engine. [ 29 ] Moreover, synthetic oil can improve fuel efficiency by 1.8% to 5%, as documented in fleet tests. [ 24 ]
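The viscosity index cited above is itself a computed quantity. For oils with VI below 100, the classic ASTM D2270 formula is VI = 100(L − U)/(L − H). The sketch below shows only the arithmetic; in practice L and H come from the standard's reference-oil tables, and the numbers used here are illustrative placeholders, not real table values.

```python
# Minimal sketch of the viscosity index calculation for oils with
# VI below 100, per the classic ASTM D2270 formula:
#   VI = 100 * (L - U) / (L - H)
# U is the test oil's kinematic viscosity at 40 C; L and H are the
# 40 C viscosities of the VI=0 and VI=100 reference oils that share
# the test oil's 100 C viscosity (looked up in ASTM tables).
# The values below are illustrative placeholders only.

def viscosity_index(u_cst: float, l_cst: float, h_cst: float) -> float:
    """VI from the test oil's 40 C viscosity U and reference values L, H."""
    return 100.0 * (l_cst - u_cst) / (l_cst - h_cst)

print(viscosity_index(u_cst=110.0, l_cst=180.0, h_cst=100.0))  # -> 87.5
```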
However, synthetic motor oils are substantially more expensive (per volume) than mineral oils [ 30 ] and have potential decomposition problems in certain chemical environments (predominantly in industrial use). [ 31 ] | https://en.wikipedia.org/wiki/Synthetic_oil |
Synthetic paper is a material made out of synthetic resin which is made to have properties similar to regular paper .
Synthetic paper is usually made of either biaxially oriented polypropylene (BOPP) or high-density polyethylene (HDPE). Applications include paper for labels (i.e., paper that can bond with ink) and non-label paper . The products can be highly water resistant , flexible, durable and tear resistant. [ 1 ]
Calcium carbonate can be added to the polypropylene resin. [ 1 ]
The market for synthetic paper includes packaged food and beverage consumption and cosmetics industries. [ 1 ] | https://en.wikipedia.org/wiki/Synthetic_paper |
A synthetic radioisotope is a radionuclide that is not found in nature : no natural process or mechanism exists which produces it, or it is so unstable that it decays away in a very short period of time. [ 1 ] Frédéric Joliot-Curie and Irène Joliot-Curie were the first to produce a synthetic radioisotope in the 20th century. [ 2 ] Examples include technetium-98 and promethium-146 . Many of these are found in, and harvested from, spent nuclear fuel assemblies. Some must be manufactured in particle accelerators . [ 3 ]
Some synthetic radioisotopes are extracted from spent nuclear reactor fuel rods, which contain various fission products . For example, it is estimated that up to 1994, about 49,000 terabecquerels (78 metric tons ) of technetium were produced in nuclear reactors; as such, anthropogenic technetium is far more abundant than technetium from natural radioactivity. [ 4 ]
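A quick way to see how those two figures relate is to convert the activity into a mass via the decay law A = λN. A back-of-the-envelope sketch, assuming the activity is carried by technetium-99 (half-life about 2.11 × 10⁵ years):

```python
# Sanity check: converting 49,000 TBq of technetium into a mass,
# assuming essentially all the activity is long-lived Tc-99
# (half-life ~2.11e5 years, molar mass ~99 g/mol).
import math

ACTIVITY_BQ = 49_000e12              # 49,000 TBq
T_HALF_S = 2.11e5 * 3.156e7          # half-life in seconds
AVOGADRO = 6.022e23
MOLAR_MASS = 99.0                    # g/mol

decay_const = math.log(2) / T_HALF_S           # lambda, per second
atoms = ACTIVITY_BQ / decay_const              # N = A / lambda
mass_tonnes = atoms * MOLAR_MASS / AVOGADRO / 1e6

# ~77 tonnes, consistent with the ~78 t quoted above
# (the difference is rounding in the inputs).
print(f"{mass_tonnes:.0f} tonnes")
```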
Some synthetic isotopes are produced in significant quantities by fission but are not yet being reclaimed. Other isotopes are manufactured by neutron irradiation of parent isotopes in a nuclear reactor (for example, technetium-97 can be made by neutron irradiation of ruthenium-96 ) or by bombarding parent isotopes with high energy particles from a particle accelerator. [ 5 ] [ 6 ]
Many isotopes, including radiopharmaceuticals , are produced in cyclotrons . For example, synthetic fluorine-18 and oxygen-15 are widely used in positron emission tomography . [ 7 ]
Most synthetic radioisotopes have a short half-life . Though a health hazard, radioactive materials have many medical and industrial uses.
The field of nuclear medicine covers use of radioisotopes for diagnosis or treatment.
Radioactive tracer compounds, radiopharmaceuticals , are used to observe the function of various organs and body systems. These compounds use a chemical tracer which is attracted to or concentrated by the activity which is being studied. That chemical tracer incorporates a short lived radioactive isotope, usually one which emits a gamma ray which is energetic enough to travel through the body and be captured outside by a gamma camera to map the concentrations. Gamma cameras and other similar detectors are highly efficient, and the tracer compounds are generally very effective at concentrating at the areas of interest, so the total amounts of radioactive material needed are very small.
The metastable nuclear isomer technetium-99m is a gamma-ray emitter widely used for medical diagnostics because it has a short half-life of 6 hours, but can be easily made in the hospital using a technetium-99m generator . Weekly global demand for the parent isotope molybdenum-99 was 440 TBq (12,000 Ci ) in 2010, overwhelmingly provided by fission of uranium-235 . [ 8 ]
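The generator works because the short-lived daughter is continuously replenished by the longer-lived parent. A minimal sketch of the ingrowth of Tc-99m from Mo-99 using the two-member Bateman equation (half-lives of about 6.01 h and 66 h; for simplicity this ignores the fraction of Mo-99 decays that bypass the metastable state):

```python
# Why a Tc-99m generator can be "milked" daily: Tc-99m grows in from
# the decay of Mo-99, with activities following the two-member
# Bateman equation. Branching to the ground state is ignored here.
import math

T_MO, T_TC = 66.0, 6.01                  # half-lives in hours
lam_mo = math.log(2) / T_MO
lam_tc = math.log(2) / T_TC

def tc99m_activity(a_mo0: float, t: float) -> float:
    """Tc-99m activity at time t (hours) from initial Mo-99 activity
    a_mo0, assuming no Tc-99m is present at t=0."""
    return a_mo0 * lam_tc / (lam_tc - lam_mo) * (
        math.exp(-lam_mo * t) - math.exp(-lam_tc * t))

for t in (6, 12, 24, 48):
    print(f"t={t:>2} h: Tc-99m activity = {tc99m_activity(100.0, t):.1f} "
          "(per 100 units of initial Mo-99)")
```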
Several radioisotopes and compounds are used for medical treatment , usually by bringing the radioactive isotope to a high concentration in the body near a particular organ. For example, iodine-131 is used for treating some disorders and tumors of the thyroid gland.
Alpha particle , beta particle , and gamma ray radioactive emissions are industrially useful. Most sources of these are synthetic radioisotopes. Areas of use include the petroleum industry , industrial radiography , homeland security , process control , food irradiation and underground detection. [ 9 ] [ 10 ] [ 11 ] | https://en.wikipedia.org/wiki/Synthetic_radioisotope |
Synthetic rescue (or synthetic recovery or synthetic viability when a lethal phenotype is rescued) refers to a genetic interaction in which a cell that is nonviable, sensitive to a specific drug, or otherwise impaired due to the presence of a genetic mutation becomes viable when the original mutation is combined with a second mutation in a different gene. The second mutation can either be a loss-of-function mutation (equivalent to a knockout) or a gain-of-function mutation .
The term synthetic rescue is derived from synthetic lethality , where the combination of two mutations leads to cell death (whereas neither alone is lethal). [ 1 ] Synthetic rescue is the inverse process: instead of causing lethality, the second genetic change rescues the organism from the harmful effects of the first. [ 2 ]
This phenomenon occurs in the yeast Saccharomyces cerevisiae , wherein a deletion of the DNA helicase gene SRS2 compensates for the lethality and DNA repair defects caused by the loss of the RAD54 gene. [ 3 ]
Synthetic rescue provides insight into the function of the genes involved in such genetic interactions. [ 4 ] Synthetic rescue could also potentially be exploited for gene therapy .
Overexpression of a gene compensates for a loss-of-function mutation; for example, extra HIS4 copies rescuing his4 auxotrophy in yeast.
A mutation in one gene compensates for another; for example, SRS2 deletion rescuing rad54 Δ lethality in yeast.
A suppressor mutation activates an alternative pathway to bypass a defect; for example, EXO1 deletion rescuing cdc13-1 by halting telomere degradation in mutant yeast. [ 5 ]
Synthetic rescue principles underpin PARP inhibitor treatments in BRCA -deficient cancers. While PARP inhibition is synthetic lethal with BRCA loss, synthetic rescue interactions such as 53BP1 deletion restoring viability reveal resistance mechanisms and alternative targets. [ 6 ]
The Systems Metabolic Engineering Group at KAIST engineered synthetic rescue in E. coli by deleting sdhA and compensating with mutations in icd , rescuing an otherwise lethal metabolic defect, with the goal of expanding the scope of genome-scale engineering and developing platform technologies for sustainable biochemical production. [ 9 ] | https://en.wikipedia.org/wiki/Synthetic_rescue
Synthetic resins are industrially produced resins , typically viscous substances that convert into rigid polymers by the process of curing . In order to undergo curing, resins typically contain reactive end groups, [ 2 ] such as acrylates or epoxides . Some synthetic resins have properties similar to natural plant resins , but many do not. [ 3 ]
Synthetic resins are of several classes. Some are manufactured by esterification of organic compounds . Some are thermosetting plastics in which the term "resin" is loosely applied to the reactant(s), the product, or both. "Resin" may be applied to one of two monomers in a copolymer , the other being called a "hardener", as in epoxy resins . For thermosetting plastics that require only one monomer, the monomer compound is the "resin". For example, liquid methyl methacrylate is often called the "resin" or "casting resin" while in the liquid state, before it polymerizes and "sets". After setting, the resulting poly(methyl methacrylate) (PMMA) is often renamed "acrylic glass" or "acrylic". (This is the same material called Plexiglas and Lucite).
The classic variety is epoxy resin , manufactured through polymerization-polyaddition or polycondensation reactions and used as a thermoset polymer for adhesives and composites . [ 4 ] Epoxy resin is two times stronger than concrete, seamless, and waterproof. [ citation needed ] Accordingly, it has been used mainly for industrial flooring since the 1960s. Since 2000, however, epoxy and polyurethane resins have also been used in interiors, mainly in Western Europe.
Synthetic casting "resin" for embedding display objects in Plexiglas/Lucite ( PMMA ) is simply methyl methacrylate liquid, into which a polymerization catalyst is added and mixed, causing it to "set" (polymerize). The polymerization creates a block of PMMA plastic ("acrylic glass") which holds the display object inside a transparent block.
Another synthetic polymer, sometimes called by the same general category, is acetal resin . By contrast with the other synthetics, however, it has a simple chain structure with the repeat unit of form −[CH 2 O]−.
Ion-exchange resins are used in water purification and catalysis of organic reactions . (See also AT-10 resin , melamine resin .) Certain ion-exchange resins are also used pharmaceutically as bile acid sequestrants , mainly as hypolipidemic agents , although they may be used for purposes other than lowering cholesterol .
Solvent impregnated resins (SIRs) are porous resin particles that contain an additional liquid extractant inside the porous matrix. The contained extractant is intended to enhance the capacity of the resin particles.
A large category of resins, which constitutes 75% of resins used, [ citation needed ] is that of the unsaturated polyester resins .
The production of PVC entails the production of "vinyl chloride resins", which differ in the degree of polymerization. [ 5 ]
Silicone resins are silicone -based polymers that exhibit various useful properties like weatherability (durability), dielectricity , water repellency, thermal stability, and chemical inertness . [ 6 ]
Health hazards potentially associated with synthetic resins are typically less of a concern than the hazards associated with the cured products, which are more commonly in contact with consumers. Issues of interest include the effects of unconsumed monomers, oligomers, and solvent carriers.
Dental restorative materials based on bis-GMA -containing resins [ 7 ] can break down into or be contaminated with the related compound bisphenol A , a potential endocrine disruptor . However, no negative health effects of bis-GMA use in dental resins have been found. [ 8 ] [ 9 ] | https://en.wikipedia.org/wiki/Synthetic_resin |
Synthetic ribosomes are artificial small molecules that can synthesize peptides in a sequence-specific manner. [ 1 ]
David Alan Leigh 's lab built a synthetic ribosome using a chemical structure based on a rotaxane . [ 2 ]
The Cédric Orelle research group created ribosomes with tethered and inseparable subunits (or Ribo-T ). [ 3 ] | https://en.wikipedia.org/wiki/Synthetic_ribosome |
Synthetic schlieren is a process that is used to visualize the flow of a fluid of variable refractive index . Named after the schlieren method of visualization, it consists of a digital camera or video camera pointing at the flow in question, with an illuminated target pattern behind. The method was first proposed in 1999. [ 1 ]
Variations in refractive index cause the light from the target to refract as it passes through the fluid, which causes a distortion of the pattern in the image seen by the camera. Pattern matching algorithms can measure this distortion and calculate a qualitative density field of the flow.
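To make the pattern-matching step concrete, here is a minimal sketch of estimating the apparent shift of one patch of the target pattern between a reference image and a distorted image. Production codes use sub-pixel correlation (or optical-flow methods) over a grid of patches; this toy version only finds the best integer-pixel shift by normalized cross-correlation.

```python
# Toy version of the pattern-matching step in synthetic schlieren:
# find the integer-pixel shift of a patch between a reference image
# and a distorted image by maximizing normalized cross-correlation.
import numpy as np

def patch_shift(ref: np.ndarray, img: np.ndarray, y: int, x: int,
                half: int = 8, search: int = 4) -> tuple:
    """Best (dy, dx) shift of the patch centred at (y, x)."""
    patch = ref[y-half:y+half, x-half:x+half].astype(float)
    patch -= patch.mean()
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = img[y+dy-half:y+dy+half, x+dx-half:x+dx+half].astype(float)
            cand -= cand.mean()
            score = (patch * cand).sum() / (
                np.linalg.norm(patch) * np.linalg.norm(cand) + 1e-12)
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift

# Synthetic test: a random dot pattern shifted by (2, -1) pixels.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
img = np.roll(ref, shift=(2, -1), axis=(0, 1))
print(patch_shift(ref, img, y=32, x=32))  # -> (2, -1)
```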
The method of synthetic schlieren can be used to observe any flow which has variations in refractive index. Commonly these are caused by variations in concentration of a solute in an aqueous solution , or variations in the density of a compressible flow , caused by temperature or pressure variations. As with the optical schlieren method, the clearest results are obtained from flows which are largely two-dimensional. | https://en.wikipedia.org/wiki/Synthetic_schlieren |
Synthetic setae emulate the setae found on the toes of a gecko and scientific research in this area is driven towards the development of dry adhesives . Geckos have no difficulty mastering vertical walls and are apparently capable of adhering themselves to just about any surface. The five-toed feet of a gecko are covered with elastic hairs called setae and the ends of these hairs are split into nanoscale structures called spatulae (because of their resemblance to actual spatulas ). The sheer abundance and proximity to the surface of these spatulae make it sufficient for van der Waals forces alone to provide the required adhesive strength. [ 2 ] Following the discovery of the gecko's adhesion mechanism in 2002, which is based on van der Waals forces, biomimetic adhesives have become the topic of a major research effort. These developments are poised to yield families of novel adhesive materials with superior properties which are likely to find uses in industries ranging from defense and nanotechnology to healthcare and sport.
Geckos are renowned for their exceptional ability to stick to and run on any vertical and inverted surface (excluding Teflon [ 3 ] ). However, gecko toes are not sticky in the usual way that chemical adhesives are. Instead, they can detach from the surface quickly and remain quite clean around everyday contaminants, even without grooming.
The two front feet of a tokay gecko can withstand 20.1 N of force parallel to the surface with 227 mm 2 of pad area, [ 4 ] a force as much as 40 times the gecko's weight. Scientists have been investigating the secret of this extraordinary adhesion since the 19th century, and at least seven possible mechanisms for gecko adhesion have been discussed over the past 175 years. There have been hypotheses of glue, friction, suction, electrostatics , micro-interlocking and intermolecular forces . Sticky secretions were ruled out early in the study of gecko adhesion, since geckos lack glandular tissue on their toes. The friction hypothesis was also dismissed quickly because the friction force acts only in shear, which cannot explain the adhesive capabilities of geckos on inverted surfaces. The hypothesis that the toe pads act as suction cups was dispelled in 1934 by experiments carried out in a vacuum, in which the gecko's toes remained stuck. Similarly, the electrostatic hypothesis was refuted by an experiment showing that geckos could still adhere even when the build-up of electrostatic charge was impossible (such as on a metal surface in air ionized by a stream of x-rays). The mechanism of micro-interlocking, which suggested that the curved tips of setae could act as microscale hooks, was also challenged by the fact that geckos generate large adhesive forces even on molecularly smooth surfaces.
The possibilities finally narrowed down to intermolecular forces, and the development of electron microscopy in the 1950s, which revealed the micro-structure of the setae on the gecko's foot, provided further support for this hypothesis. The problem was finally solved in 2000 by a research team led by biologists Kellar Autumn of Lewis & Clark College in Portland, Oregon, and Robert Full at the University of California at Berkeley. [ 6 ] They showed that the underside of a gecko toe typically bears a series of ridges, which are covered with uniform ranks of setae, and each seta further divides into hundreds of split ends and flat tips called spatulas (see figure on the right). A single seta of the tokay gecko is roughly 110 micrometers long and 4.2 micrometers wide. Each of a seta's branches ends in a thin, triangular spatula connected at its apex. The end is about 0.2 micrometers long and 0.2 micrometers wide. [ 5 ] The adhesion between a gecko's foot and a surface is the result of the van der Waals force between each seta and the surface molecules. A single seta can generate up to 200 μN of force. [ 7 ] There are about 14,400 setae per square millimeter on the foot of a tokay gecko, which leads to a total of about 3,268,800 setae on a tokay gecko's two front feet. From the equation for intermolecular potential:
U = − π 2 C ρ 1 ρ 2 R / 6 D {\displaystyle U=-{\frac {\pi ^{2}C\rho _{1}\rho _{2}R}{6D}}}
where ρ 1 {\displaystyle \rho _{1}} and ρ 2 {\displaystyle \rho _{2}} are the number of contacts of the two surfaces, C is the coefficient of the underlying pair potential, R is the radius of each contact and D is the distance between the two surfaces.
We find that the intermolecular force, in this case the van der Waals force between two surfaces, is dominated by the number of contacts. This is exactly why a gecko's feet can generate extraordinary adhesive force on many different kinds of surfaces. The combined effect of millions of spatulae provides an adhesive force many times greater than the gecko needs to hang from a ceiling by one foot.
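The article's own numbers make the point. A quick sketch multiplying the setal density, pad area, and per-seta force reported above (an idealized ceiling, since not all setae engage at full strength at once):

```python
# Rough numbers from the text above: theoretical total setal force
# for a tokay gecko's two front feet versus the measured shear force.
SETAE_PER_MM2 = 14_400
PAD_AREA_MM2 = 227
FORCE_PER_SETA_N = 200e-6            # up to 200 micronewtons per seta

total_setae = SETAE_PER_MM2 * PAD_AREA_MM2
theoretical_force = total_setae * FORCE_PER_SETA_N

print(f"setae on two front feet: {total_setae:,}")              # 3,268,800
print(f"theoretical maximum force: {theoretical_force:.0f} N")  # ~654 N
print(f"measured: 20.1 N, i.e. {20.1/theoretical_force:.1%} "
      "of the theoretical maximum")
```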
The surprisingly large forces generated by the gecko's toes [ 8 ] raised the question of how geckos manage to lift their feet so quickly – in just 15 milliseconds – with no measurable detachment forces. Kellar Autumn and his research group identified the 'lift-off' mechanism of the gecko's feet. Their discovery revealed that the gecko adhesive works in a 'programmable' way: by increasing the angle between the setal shaft and the substrate to 30 degrees, geckos 'turn off' the stickiness no matter how large the perpendicular adhesive force is, since the increased stress at the trailing edge of the seta causes the bonds between the seta and the substrate to break. The seta then returns to an unloaded default state. Conversely, by applying a preload and dragging along the surface, geckos turn on and modulate stickiness. This lift-off mechanism is shown in the figure on the right.
Unlike conventional adhesives, the gecko adhesive becomes cleaner with repeated use, and thus stays quite clean around everyday contaminants such as sand, dust, leaf litter and pollen. In addition, unlike some plants and insects that self-clean using water droplets, geckos are not known to groom their feet in order to retain their adhesive properties – all they need is a few steps to recover their ability to cling to vertical surfaces.
Kellar Autumn and his research group have conducted experiments to test and demonstrate this ability of the gecko. [ 9 ] They also used a contact-mechanics model to suggest that self-cleaning occurs through an energetic disequilibrium between the adhesive forces attracting a dirt particle to the substrate and those attracting the same particle to one or more spatulae. In other words, the van der Waals interaction energy of the particle-wall system requires a sufficiently large number of particle-spatula contacts to counterbalance it; however, relatively few spatulae can actually attach to a single particle, so contaminant particles tend to attach to the substrate surface rather than to the gecko's toe. The figure on the right shows the model of interaction between N spatulae, a dirt particle and a planar wall.
Importantly, this self-cleaning property appears intrinsic to the setal nano-structure and should therefore be replicable in synthetic adhesive materials. In fact, Kellar Autumn's group observed that self-cleaning still occurred in arrays of setae isolated from the geckos used.
The discoveries about the gecko's feet led to the idea that these structures and mechanisms might be exploited in a new family of adhesives, and research groups from around the world are now investigating this concept. Thanks to the development of nanoscience and nanotechnology, researchers are now able to create biomimetic adhesives inspired by gecko setae using nanostructures . Indeed, interest and new discoveries in gecko-type adhesives are booming, as illustrated by the growing number of papers published on this topic. [ 10 ] However, synthetic setae are still at a very early stage of development.
Effective design of gecko-like adhesives will require a deep understanding of the principles underlying the properties observed in the natural system. These properties, principles, and related parameters of the gecko adhesive system are shown in the following table. [ 11 ] The table also gives an insight into how scientists translate the desirable properties of gecko setae (shown in the first column) into the parameters they can actually control and design (shown in the third column).
*JKR refers to the Johnson, Kendall, Roberts model of adhesion [ 12 ]
In summary, the key parameters in the design of synthetic gecko adhesive include:
There is a growing list of benchmark properties that can be used to evaluate the effectiveness of synthetic setae; one of the most important is the adhesion coefficient, defined as:
μ ′ = F adhesion / F preload {\displaystyle \mu ^{'}=F_{\text{adhesion}}/F_{\text{preload}}}
where F preload {\displaystyle F_{\text{preload}}} is the applied preload force, and F adhesion {\displaystyle F_{\text{adhesion}}} is the generated adhesion force.
The adhesion coefficient of real gecko setae is typically 8–16.
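A trivial helper makes the benchmark concrete. The input values below are illustrative; the 0.06 figure mentioned in the comment is the one reported later in the text for the 2003 polyimide "gecko tape".

```python
# The benchmark defined above: mu' = F_adhesion / F_preload.
# Real gecko setae reach mu' of roughly 8-16; early synthetic tapes
# scored far lower (the polyimide tape described below measured ~0.06).

def adhesion_coefficient(f_adhesion: float, f_preload: float) -> float:
    """mu' from the pull-off force and the applied preload (newtons)."""
    return f_adhesion / f_preload

# Illustrative values only: a pad preloaded with 1 N that then
# sustains 12 N of pull-off scores mu' = 12, within the gecko range.
print(adhesion_coefficient(f_adhesion=12.0, f_preload=1.0))
```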
In the first developments of synthetic setae, polymers like polyimide , polypropylene and polydimethylsiloxane (PDMS) were frequently used, since they are flexible and easily fabricated. Later, as nanotechnology developed rapidly, carbon nanotubes (CNTs) became preferred by most research groups and have been used in most recent projects. CNTs have a much larger possible length-to-diameter ratio than polymers, and they exhibit extraordinary strength and flexibility as well as good electrical properties. It is these novel properties that make synthetic setae more effective.
A number of MEMS / NEMS fabrication techniques are applied to the fabrication of synthetic setae, which include photolithography / electron beam lithography , plasma etching , deep reactive ion etching (DRIE), chemical vapor deposition (CVD), and micro-molding, etc.
In this section, several typical examples will be given to show the design and fabrication process of synthetic setae. We can also gain an insight into the development of this biomimetic technology over the past few years from these examples.
This example is one of the first developments of synthetic setae, which arose from a collaboration between the Manchester Centre for Mesoscience and Nanotechnology , and the Institute for Microelectronics Technology in Russia. Work started in 2001 and 2 years later results were published in Nature Materials. [ 13 ]
The group prepared flexible fibers of polyimide as the synthetic setae structures on the surface of a 5 μm thick film of the same material using electron beam lithography and dry etching in an oxygen plasma . The fibres were 2 μm long, with a diameter of around 500 nm and a periodicity of 1.6 μm, and covered an area of roughly 1 cm 2 (see figure on the left). Initially, the team used a silicon wafer as a substrate, but found that the tape's adhesive power increased by almost 1,000 times when they used a soft bonding substrate such as Scotch tape; this is because the flexible substrate yields a much higher ratio of setae in contact with the surface to the total number of setae.
The resulting "gecko tape" was tested by attaching a sample to the hand of a 15 cm high plastic Spider-Man figure weighing 40 g, which enabled it to stick to a glass ceiling, as shown in the figure. The tape, which had a contact area of around 0.5 cm 2 with the glass, was able to carry a load of more than 100 g. However, the adhesion coefficient was only 0.06, which is low compared with real geckos (8–16).
As nanoscience and nanotechnology have developed, more projects involve the application of nanotechnology, notably the use of carbon nanotubes (CNTs). In 2005, researchers from the University of Akron and Rensselaer Polytechnic Institute , both in the US, created synthetic setae structures by depositing multiwalled CNTs by chemical vapour deposition onto quartz and silicon substrates. [ 15 ]
The nanotubes were typically 10–20 nm in diameter and around 65 μm long. The group then encapsulated the vertically aligned nanotubes in PMMA polymer before exposing the top 25 μm of the tubes by etching away some of the polymer. The nanotubes tended to form entangled bundles about 50 nm in diameter because of the solvent drying process used after etching (as shown in the figure on the right).
The results were tested with a scanning probe microscope , which showed the minimum force per unit area to be 1.6±0.5×10 −2 nN/nm 2 , far larger than the team's estimate of the typical adhesive force of a gecko's setae, 10 −4 nN/nm 2 . Later experiments [ 16 ] with the same structures on Scotch tape revealed that this material could support a shear stress of 36 N/cm 2 , nearly four times higher than a gecko foot. This was the first time synthetic setae exhibited better properties than those of the natural gecko foot. Moreover, this new material can adhere to a wider variety of materials, including glass and Teflon.
This new material has some problems, though. When pulled parallel to a surface, the tape releases not because the CNTs lose adhesion from the surface but because they break, and the tape cannot be reused in this case. Moreover, unlike gecko setae, this material only works over a small area (approx. 1 cm 2 ). The researchers are currently working on a number of ways to strengthen the nanotubes and are also aiming to make the tape reusable thousands of times, rather than the dozens of times it can now be used.
While most developments concern dry adhesion, a group of researchers studied how derivatives of naturally occurring adhesive compounds from mollusks could be combined with gecko-type structures to yield adhesives that operate in both dry and wet conditions . [ 17 ]
The resulting adhesive, named "geckel", was described as an array of gecko-mimetic, 400 nm wide silicone pillars, fabricated by electron-beam lithography and coated with a mussel-mimetic polymer, a synthetic form of the adhesive amino acid (DOPA) that occurs naturally in mussel adhesive proteins.
Like true gecko adhesive, the material depends on van der Waals forces for its dry adhesion; in addition, it relies on the chemical interaction of the surface with the hydroxyl groups in the mussel-mimetic polymer. The coating improves wet adhesion 15-fold compared with uncoated pillar arrays. The so-called "geckel" tape adheres through 1,000 contact and release cycles, sticking strongly in both wet and dry environments.
So far, the material has been tested on silicon nitride , titanium oxide and gold, all of which are used in the electronics industry. However, for it to be used in bandages and medical tape, a key potential application, it must be able to adhere to human skin. The researchers tested other mussel-inspired synthetic proteins that have similar chemical groups and found that they adhere to living tissue. [ 17 ]
Geckel is an adhesive that can attach to both wet and dry surfaces. Its strength "comes from coating fibrous silicone, similar in structure to a gecko's foot, with a polymer that mimics the 'glue' used by mussels." [ 18 ]
The team drew inspiration from geckos , which can support hundreds of times their own body weight. Geckos rely on billions of hair-like structures, known as setae, to adhere. Researchers combined this ability with the sticking power of mussels. Tests showed that "the material could be stuck and unstuck more than 1,000 times, even when used under water", retaining 85 percent of its adhesive strength. [ 19 ] [ 20 ] [ 21 ]
Phillip Messersmith, lead researcher on the team that developed the product, believes that the adhesive could have many medical applications, for example tapes that could replace sutures to close a wound and a water resistant adhesive for bandages and drug-delivery patches. [ 18 ]
Automated, high-volume fabrication techniques will be necessary for these adhesives to be produced commercially and were being investigated by several research groups. A group led by Metin Sitti from Carnegie Mellon University studied [ when? ] a range of different techniques which include deep reactive ion etching (DRIE), which has been used successfully to fabricate mushroom-shaped polymer fibre arrays, micro-moulding processes, direct self-assembly and photolithography. [ citation needed ]
In 2006, researchers at BAE Systems Advanced Technology Centre at Bristol, UK, announced that they had produced samples of "synthetic gecko" – arrays of mushroom-shaped hairs of polyimide – by photolithography, with diameters up to 100 μm. These were shown to stick to almost any surface, including those covered in dirt, and a pull-off strength of 3,000 kg/m 2 was measured. [ citation needed ] More recently, the company has used the same technique to create patterned silicon moulds to produce the material and has replaced the polyimide with polydimethylsiloxane (PDMS). This latest material exhibited a strength of 220 kPa. Photolithography has the benefit of being widely used, well understood and scalable to very large areas cheaply and easily, which is not the case with some of the other methods used to fabricate prototype materials. [ citation needed ]
In 2019, researchers from Akron Ascent Innovations, LLC, a company spun out from University of Akron technology, announced the commercial availability of " ShearGrip " brand dry adhesives. [ 22 ] Rather than relying on photolithography or other micro-fabrication strategies, the researchers employed electrospinning to produce small-diameter fibers based on the principle of contact splitting exploited by geckos. The product has a reported shear strength greater than 80 pounds per square inch, with clean removal and reusability on many surfaces, and the ability to laminate the material to various face stocks in one- or two-sided constructions. [ 23 ] The approach is claimed to be more scalable than other strategies for producing synthetic setae and has been used to produce products for consumer markets under the brand name Pinless.
There have been a wide range of applications of synthetic setae, also known as "gecko tape," ranging from nanotechnology and military uses to health care and sport.
" Nano tape " (also called "gecko tape") is often sold commercially as double-sided adhesive tape . It can be used to hang lightweight items such as pictures and decorative items on smooth walls without punching holes in the wall. The carbon nanotube arrays leave no residue after removal and can stay sticky in extreme temperatures. [ 24 ]
No machine yet exists that can maneuver in the "scansorial" regime – that is, perform nimbly in general vertical terrain environments without loss of competence in level ground operation. Two major research challenges face the development of scansorial robotics: First, they seek to understand, characterize and implement the dynamics of climbing (wall reaction forces, limb trajectories, surface interactions, etc.); and second, they must design, fabricate and deploy adhesive patch technologies that yield appropriate adhesion and friction properties to facilitate necessary surface interactions.
As progress continues in legged robotics , research has begun to focus on developing robust climbers. Various robots have been developed that climb flat vertical surfaces using suction, magnets, and arrays of small spines, to attach their feet to the surface.
The RiSE platform was developed in the Biomimetics and Dexterous Manipulation Laboratory at Stanford University. It has twelve degrees of freedom (DOF), with six identical two-DOF mechanisms spaced equally in pairs along the length of the body. Two actuators on each hip drive a four-bar mechanism, converting actuator motion to foot motion along a prescribed trajectory, and position the plane of the four-bar mechanism angularly with respect to the platform. For the RiSE robot to succeed in climbing in both natural and man-made environments it has proven necessary to use multiple adhesion mechanisms. The RiSE robot does not yet use dry adhesion, but is intended to use it in combination with spines. [ 25 ]
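As a rough illustration of how a four-bar mechanism converts actuator rotation into foot motion along a prescribed trajectory, the sketch below solves the position of a planar four-bar linkage over a sweep of crank angles; the link lengths and the coupler ("foot") point are arbitrary illustrative values, not RiSE's actual geometry.

```python
import numpy as np

def four_bar_foot(theta2, r1=4.0, r2=1.0, r3=3.5, r4=3.0, ext=1.5):
    """Foot position of a planar four-bar linkage.

    Ground pivots at O2=(0,0) and O4=(r1,0); crank of length r2 at
    angle theta2, coupler r3, rocker r4. The 'foot' is taken on the
    coupler, extended `ext` beyond joint B along the A->B direction.
    Returns None if the linkage cannot close at this crank angle.
    """
    A = np.array([r2 * np.cos(theta2), r2 * np.sin(theta2)])  # crank tip
    O4 = np.array([r1, 0.0])
    v = O4 - A
    d = np.linalg.norm(v)                      # joint A to ground pivot O4
    if d > r3 + r4 or d < abs(r3 - r4):        # circles do not intersect
        return None
    # Intersect circle (A, r3) with circle (O4, r4): two-circle solve.
    a = (r3**2 - r4**2 + d**2) / (2 * d)
    h = np.sqrt(max(r3**2 - a**2, 0.0))
    mid = A + a * v / d
    perp = np.array([-v[1], v[0]]) / d
    B = mid + h * perp                         # one of two assembly modes
    return B + ext * (B - A) / r3              # foot point past joint B

# Sweep the crank: the foot traces a closed curve (the "prescribed trajectory").
for theta in np.linspace(0, 2 * np.pi, 12, endpoint=False):
    p = four_bar_foot(theta)
    if p is not None:
        print(f"crank {np.degrees(theta):6.1f} deg -> foot ({p[0]:+.2f}, {p[1]:+.2f})")
```

Sweeping the crank through a full revolution traces a closed foot trajectory; in a robot of this kind, the second hip actuator would then tilt the plane of that trajectory relative to the body.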
More recently, robots have been developed that utilize synthetic adhesive materials for climbing smooth surfaces such as glass.
These crawler and climbing robots can be used in the military context to examine the surfaces of aircraft for defects and are starting to replace manual inspection methods. Today's crawlers use vacuum pumps and heavy-duty suction pads which could be replaced by this material.
Researchers at Stanford University have also created a robot called Stickybot which uses synthetic setae in order to scale even extremely smooth vertical surfaces just as a gecko would. [ 26 ] [ 27 ]
Stickybot embodies hypotheses about the requirements for mobility on vertical surfaces using dry adhesion, the central one being that the adhesion must be controllable.
Another similar example is "Geckobot", developed at Carnegie Mellon University, [ 28 ] which has climbed at angles of up to 60°.
Adhesives based on synthetic setae have been proposed as a means of picking up, moving and aligning delicate parts such as ultra-miniature circuits, nano-fibres and nanoparticles, microsensors and micro-motors. In the macro-scale environment, they could be applied directly to the surface of a product and replace joints based on screws, rivets, conventional glues and interlocking tabs in manufactured goods. In this way, both assembly and disassembly processes would be simplified. It would also be beneficial to replace conventional adhesives with synthetic gecko adhesive in vacuum environments (e.g. in space), since the liquid ingredient in a conventional adhesive would easily evaporate, causing the connection to fail. [ citation needed ] | https://en.wikipedia.org/wiki/Synthetic_setae
A synthetic substance or synthetic compound is a substance that is man-made by synthesis, rather than being produced by nature. The term also refers to a substance or compound formed under human control by any chemical reaction , either by chemical synthesis ( chemosynthesis ) or by biosynthesis .
| https://en.wikipedia.org/wiki/Synthetic_substance
A synthetic vaccine is a vaccine consisting mainly of synthetic peptides , carbohydrates , or antigens . They are usually considered to be safer than vaccines from bacterial cultures. Creating vaccines synthetically can also increase the speed of production, which is especially important in the event of a pandemic.
The world's first synthetic vaccine was created in 1976 from diphtheria toxin by Louis Chedid from the Pasteur Institute and Michael Sela from the Weizmann Institute . [1]
In 1986, Manuel Elkin Patarroyo created the SPf66 , the first version of a synthetic vaccine for Malaria . [ 1 ]
During the H1N1 outbreak in 2009, vaccines only became available in large quantities after the peak of human infections. This was a learning experience for vaccine manufacturers. Novartis Vaccines and Diagnostics, among other companies, developed a synthetic approach that very rapidly generates vaccine viruses from sequence data, in order to be able to administer vaccinations early in a pandemic outbreak. Philip Dormitzer, the leader of viral vaccine research at Novartis, says they have "developed a way of chemically synthesizing virus genomes and growing them in tissue culture cells". [ 2 ]
Phase I data of UB-311, a synthetic peptide vaccine targeting amyloid beta, showed that the drug was able to generate antibodies to specific amyloid beta oligomers and fibrils with no decrease in antibody levels in patients of advanced age. Results from the Phase II trial are expected in the second half of 2018. [ 3 ] [ 4 ]
| https://en.wikipedia.org/wiki/Synthetic_vaccine
Synthetic virology is a branch of virology engaged in the study and engineering of synthetic, man-made viruses . It is a multidisciplinary research field at the intersection of virology, synthetic biology , computational biology , and DNA nanotechnology , from which it borrows and integrates concepts and methodologies. There is a wide range of applications for synthetic viral technology, such as medical treatments, investigative tools, and reviving organisms. [ 1 ]
Advances in genome sequencing technology [ 2 ] and oligonucleotide synthesis paved the way for the construction of synthetic genomes based on previously sequenced genomes. Both RNA and DNA viruses can be made using existing methods. RNA viruses have historically been used because of their typically small genome size and the availability of reverse transcription machinery. [ 3 ] The first man-made infectious viruses generated without any natural template were poliovirus and the φX174 bacteriophage. [ 4 ] For synthetic live viruses, it is not the whole virus that is synthesized first but rather the genome, in the case of both DNA and RNA viruses. For many viruses, viral RNA is infectious when introduced into a cell (during infection or after reverse transcription). Such viruses are able to sustain an infectious life cycle upon introduction in vivo .
This technology is now being used to investigate novel vaccine strategies. [ 5 ] The ability to synthesize viruses has far-reaching consequences, since viruses can no longer be regarded as extinct, as long as the information of their genome sequence is known and permissive cells are available. As of March 2020, the full-length genome sequences of 9,240 different viruses, including the smallpox virus, are publicly available in an online database maintained by the National Institutes of Health . Synthetic viruses have also been researched as potential gene therapy tools. [ 6 ]
| https://en.wikipedia.org/wiki/Synthetic_virology
The synthome comprises the set of all reactions that are available to a chemist for the synthesis of small molecules. [ 1 ] The word was coined by Stephen F. Martin .
| https://en.wikipedia.org/wiki/Synthome
The south bank of the Humber Estuary in England is a relatively unpopulated area containing large scale industrial development built from the 1950s onward, including national scale petroleum and chemical plants as well as gigawatt scale gas fired power stations.
Historically the south bank was undeveloped and mostly unpopulated, excluding the medieval port of Grimsby and lesser havens at Barton upon Humber and Barrow upon Humber . Industrial activity increased from the 19th century onwards, primarily brick and tile works utilising the clay extracted from the banks of the Humber; this, plus the addition of chalk extraction at the edge of the Lincolnshire Wolds , formed the basis of cement industries. Grimsby expanded during the industrial 19th century, Immingham Dock was established in 1911, and a large-scale cement works was established near South Ferriby in 1938. Most of the brick and tile works ceased operation around the 1950s.
From the 1950s onwards a number of chemical plants were built between Immingham and Grimsby, and two major oil refineries were built south of Immingham Dock in the 1960s. Growth and development of the oil and chemical industries continued through the 20th century, with some contraction of chemical works in the late 20th century.
At the end of the 20th century and beginning of the 21st century a number of combined cycle gas turbine power stations were built (see also dash for gas ), some of which utilised 'waste' steam to provide nearby petroleum and chemical plants with heat energy. During the same time frame a large area of former clay workings from earlier brick and tile activity was converted into water parks in the Barton area.
The port of Grimsby , [ map 1 ] was a significant local town and market in the medieval period, with fish being the predominant traded good. From around the 14th century the port's importance in international trade diminished, in part due to competition from Hull, Boston, and the Hanseatic League , whilst coastal trade and inland waterway trade became more important. In addition to fish, a trade in foodstuffs also took place, as well as in coals from Newcastle and the export of peat dug in Yorkshire. [ 1 ] Grimsby's population declined from around 1,400 in 1377 to around 750 by 1600 and to around 400 by the early 1700s. In the late 1700s a new dock was built at Grimsby under the engineer John Rennie , opening in 1800. In the 1840s the Sheffield, Ashton-under-Lyne and Manchester Railway constructed a rail line to the town, and a new dock was constructed in the same period; the town redeveloped as a port, and its growth re-initiated. Several new docks were constructed between 1850 and 1900, with a third fish dock added in 1934. Rail connections linked the port to South Yorkshire, Lancashire and the Midlands; the net tonnage handled by the port increased from 163,000 in the 1850s to 3,777,000 by 1911. The port was also a major fishing centre, landing around 20% of the total UK catch (1934). [ 2 ] The town's population rose consistently from 1,500 in 1801, to 75,000 in 1901, and to 92,000 in 1931. [ 3 ] Neighbouring Cleethorpes also developed as a residential area for Grimsby as well as a seaside resort during the 19th century. [ 4 ] [ 5 ] In the 20th century, port-based industries formed the main economic activities, with fishing being particularly important and influencing other industries in the town, specifically food processing, in particular frozen foods. In the late 1960s around 3,500 people were employed directly in the fishing industry; 10,000 were employed in food industries, of which 6,000 were in fish processing; 2,500 in shipbuilding and repair; lesser employment activities included engineering and timber-related businesses. Most of Grimsby's industries were concentrated on the dock estate, and later at Pyewipe, west of the main centre. [ 6 ]
In 1911 Immingham Dock was opened, [ map 2 ] constructed for the Great Central Railway primarily for the export of coal; the new dock was located at a point where the deep water channel of the Humber Estuary swings close to the south bank, with estuary-side jetties that could handle ships of up to 30,000 deadweight tonnage . [ 7 ] In the interwar period industry was developed on the north bank of the Humber in out-of-town locations: petroleum refining at Salt End ( BP Saltend ); smelting and cement manufacture at Melton ( Capper Pass and the Humber Cement Works ); and aircraft at Brough ( Blackburn Aeroplane & Motor Company , later British Aerospace ). [ 8 ]
During the 1970s and early 1980s the fishing industry of Grimsby declined (to less than 15% of 1970 levels by tonnage by 1983 [ 9 ] ) due to fuel costs ( 1973 fuel crisis ), decline in fish stocks, Icelandic exclusion zones (see cod war ), and new EEC fishing limits. Though the port's market share remained roughly constant at around 20%, [ 10 ] imported landings from Icelandic ships (as well as from ships of Norway, the Faroes, Denmark, Belgium and Holland) became important to the continuation of Grimsby's role as a 'fish port'. [ 11 ] [ 12 ]
After the end of the Second World War the Humber bank area between Grimsby and Immingham began to be developed by capital-intensive industries, mainly focused on petroleum and chemicals. In addition to access to a modern port (Immingham), the regional advantages were the availability of large areas of undeveloped flat land at low cost, with the Humber allowing discharge of effluent. The area was earmarked as being suitable for 'special' industries, such as those dealing with hazardous products or processes. Several large conglomerates or their subsidiaries acquired land banks in the area, and began developing industrial sites. [ 13 ]
Grimsby Corporation acquired 694 acres (281 ha) of land between 1946 and 1953, improved the road and rail links, [ note 1 ] and sought industrial developers. [ 14 ] An additional 189 acres (76 ha) was acquired by the corporation in Great Coates in 1960 and developed into a light industrial estate. [ map 3 ] [ note 2 ] [ 15 ] Developers included British Titan Products (1949, titanium dioxide pigment), Fisons (1950, phosphate fertilizer), CIBA Laboratories (1951, pharmaceuticals), Laporte Industries (1953, titanium dioxide pigment), and Courtaulds (1957, viscose and acrylic fibres). [ 13 ] [ 16 ] By 1961 developments occupied around 1,100 acres (450 ha) and employed over 4,000 persons. [ 14 ] A limit to development was the fresh water supply available to industry. By the beginning of the 1960s the Fisons, Laporte, CIBA, Titan and Courtaulds plants were consuming 10,000,000 imperial gallons (45,000,000 L; 12,000,000 US gal) per day, all of which was acquired from the chalk aquifer , [ 17 ] some from the companies' own boreholes; this, combined with Grimsby's water demand, gave a total requirement of around 30,000,000 imperial gallons (140,000,000 L; 36,000,000 US gal) per day, which was considered close to what the aquifer could sustainably supply. As a consequence additional sources of supply were sought by the water board. [ 18 ]
In the late 1960s two oil refineries ( Total -Fina and Continental Oil ) were established near Immingham , supplied from an estuary pier beyond Cleethorpes at Tetney . [ 19 ]
Initially rail transport links were good, but road transport infrastructure was very poor, consisting essentially of rural lanes. [ 20 ] In the late 1960s the government identified the Humber region generally as suitable for large-scale industrial development; subsequently development of the road networks on both banks was authorised (see M180 motorway , also M62 ), as well as the construction of the Humber Bridge . [ 19 ]
A number of proposed or potential large-scale developments in the latter part of the 20th century were not taken forward. The CEGB acquired a 360 acres (146 ha) site near Killingholme in 1960, and obtained consent for a 4 GW oil-fired power station in 1972; the project was abandoned after the 1973 oil crisis . In 1985 the Killingholme site was listed as a possible NIREX disposal site for low-level nuclear waste , and in 1986 the CEGB listed Killingholme as a potential site for a coal-fired power station. [ 21 ] [ note 3 ] A plan to reclaim land from the Humber at Pyewipe west of Grimsby using colliery waste was supported by Great Grimsby borough council as a potential source of new development land; interest in a reclamation scheme dated from at least the mid 1970s, and a report in the 1980s found the scheme feasible but expensive, but the scheme was not supported by Humberside County Council, which had sufficient development land elsewhere. [ 23 ] Dow Chemicals also acquired 490 acres (200 ha) of land in the 1970s. [ 24 ]
By 1987, 9,000 people were employed in the South Humber bank area (excluding Grimsby–Cleethorpes and rural north Lincolnshire). [ 25 ]
During the 1990s dash for gas several gas turbine powered power stations with heat recovery steam generators were built in the area, including several gigawatt class output units: National Power and Powergen built adjacent 665 and 900 MW combined cycle gas turbine (CCGT) power stations near North Killingholme in the early 1990s; [ 26 ] [ 27 ] Humber Power Ltd. built CCGT plant in two phases (1994–1999), with final power output 1.2 GW; [ 28 ] and ConocoPhillips built a combined heat and power plant using Gas turbine/HRSG/auxiliary boilers in two phases (opened 2004, about 730 MW and 2009, about 480 MW), used to supply heat (steam) to both the Lindsey and Humber refineries. [ 29 ] [ 30 ]
The Humber Sea Terminal at North Killingholme Haven, [ map 4 ] is a modern RO-RO port terminal based on an estuary pier with 25 feet (7.5 m) minimum water depth; as of 2014 the terminal is operated by Simon Group Ltd a subsidiary of C.RO Ports SA . [ 31 ] [ 32 ] [ 33 ] Six Ro-Ro terminals were developed in 2000 (1&2), 2003 (3&4) and 2007 (5&6). [ 34 ] [ 35 ]
Despite these developments the general character of the north Lincolnshire area in 1990 was agricultural, much of it large scale arable farming on high grade land, [ 36 ] a pattern that is unchanged at the beginning of the 21st century. [ 37 ]
A 13+9 MW combined heat and power gas and steam turbine plant (established 2003 [ 40 ] ) owned by Npower remains on site. [ 41 ]
In 1982 Fisons sold its fertilizer business to Norsk Hydro . [ 47 ] In the late 1980s Norsk Hydro built an ammonium nitrate fertilizer plant at Immingham. [ 42 ] In 2000 the company announced it was to close the ammonium nitrate and nitric acid plant at Immingham, resulting in 150 redundancies and ending fertilizer manufacture at the site. [ 48 ]
In 2004 Norsk Hydro's fertilizer business was demerged as Yara International . [ 49 ] As of 2014 Yara operates a dry ice plant at Immingham, [ 50 ] as well as operating a distribution centre for liquid fertilizer products. [ 51 ]
In 1992 Ciba completed a £230 million expansion to the Grimsby plant, including two production units, an 8 MW gas fired CHP power plant, and an effluent treatment plant. [ 54 ] [ 55 ]
In the mid 1990s Allied Colloids (Bradford) established a production facility between the Ciba and Courtaulds plants near Grimsby. Allied Colloids was acquired by Ciba Specialty Chemicals (a Ciba-Geigy group spin-off, 1996) in 1998. [ 56 ]
The Allied Colloids site at Grimsby was included in BASF's 2008 acquisitions. In 2010 BASF Performance Products plc was formed, incorporating former Ciba plants; the subsidiary was merged into BASF plc in 2013. [ 57 ]
In the 1950s Laporte was seeking a site for expansion from its titanium dioxide plant in Kingsway, Luton. The company had acquired 40 acres (16 ha) of land near Grimsby in 1947 for £4,000, but after the adjoining land was acquired by BTP that site was sold and another sought. A 100 acres (40 ha) site near Stallingborough containing a former coastal gun battery was acquired; as a result the plant became known as the 'Battery works'. Construction (contracted to Taylor Woodrow ) began in 1950, with 2,500 piles driven to stabilise the ground. In addition to the rail connection an estuary pier was also constructed (reconstructed 1955). Simon Carves was contracted to build the 100 t per day pyrites-fuelled sulphuric acid plant. Both the acid and pigment plants became operational in 1953, with a workforce of about 280. Initial planned capacity was 8,000 t per year in two streams; production capacity was increased eightfold over the next 15 years, including extensions to acid production, with a sulphur-burning plant (Simon Carves) operational by 1958, and a third acid plant built in 1961. [ 58 ]
A research laboratory was opened in 1960. Other production at the site included phthalic anhydride (1966), through a joint venture "Laporte-Synres" with Chemische Industrie Synres (Netherlands), and the synthetic clay laponite (1968). A plant producing titanium dioxide pigment by the chloride process was commissioned in 1970, and expansion began in 1976. By 1977 employment was nearly 1,600. [ 58 ]
In 1980/1, in part due to increased energy costs, Laporte announced it was to shut down its 40,000 t pa sulphate-process titanium dioxide production with the loss of 1,000 jobs; this was later reduced to a halving of production. In 1983/4 Laporte sold its titanium dioxide business to SCM Corporation (USA), and the Laponite production facilities were subsequently transferred to Laporte in Widnes . Further expansion of the chloride process for titanium dioxide by SCM led to a production capacity of 78,000 t pa by 1986, whilst production capacity via the sulphate process was 31,000 t pa. [ 58 ] [ 59 ]
In 1990 SCM announced it was to reduce production by 24,000 t from 110,000 t pa to comply with EEC environmental regulations. [ 60 ] SCM was acquired by Hanson plc (1986), which demerged Millennium Chemicals (1996), [ 61 ] itself acquired by Lyondell Chemical Company in 2004. [ 62 ] An expansion of titanium dioxide production from 1995 to 1999 increased capacity to 150,000 t pa. [ 63 ]
In 2007 Millennium Inorganic Chemicals was acquired by Saudi Arabian firm Cristal ( National Titanium Dioxide Company Limited ). [ 64 ] [ 65 ]
In 2009 the plant employed 400 workers; production was halted temporarily after European demand dropped 35% due to a recession. [ 66 ]
In 2019 Cristal was acquired by Tronox . Cristal's North American TiO2 business was sold to British chemicals firm Ineos as a condition of the acquisition required by the US Federal Trade Commission. [ 67 ] [ 68 ]
There have been serious process safety incidents involving titanium tetrachloride at the plant:
In 2010 a vessel containing titanium tetrachloride and hydrochloric acid ruptured injuring three operators with inhalation and chemical burns from the toxic/corrosive substances. One of the operators subsequently died from his injuries. [ 69 ] [ 70 ] In 2012 the Health and Safety Executive stopped production for 3 months after the release of titanium tetrachloride in 2011. [ 71 ]
A twin 6 MW gas turbine plus 3 MW steam turbine plant is operated by NPower Cogen (since 2004, formerly TXU Energy ) at the site. [ 72 ]
Akzo Nobel acquired the plant in 1998, [ 77 ] merging it with its own fibre business to form the company Acordis , which was divested to CVC Capital Partners in 1999.
In 2004 production facilities for Tencel were sold to Lenzing . [ 77 ] As of 2013 the plant had a capacity of 40,000 t per year of Lyocell /Tencel. [ 78 ]
The other production plant (as part of Acordis) entered administration in 2005, at which point employment had been reduced to 475; it was restarted as Fibres Worldwide with a workforce of 275, but entered administration again in 2006. The plant was acquired by Bluestar Group (China) in late 2006, with the product ( polyacrylonitrile ) used as a carbon fibre precursor. Production ended in 2013 due to loss of demand. [ 77 ]
A 48 MW gas powered CH&P power station at the site was spun off as Humber Energy Ltd. in 2005 whilst the parent was in administration; the firm was acquired by GDF Suez subsidiary Cofely in 2013. [ 79 ] [ 80 ]
In mid 2015 a 1,200,000 square feet (110,000 m 2 ) building space industrial estate was approved for the site. [ 81 ] [ 82 ]
Revertex began production of Lithene (liquid polybutadienes ) in 1974 near Stallingborough. [ 86 ]
In 1963 the Harlow Chemical Company (Harco) was established as a joint venture between Revertex and Hoescht for chemical production. [ 87 ] In 1976 Harco began the construction of a 30,000 t pa resin emulsion plant at a greenfield site near Stallingborough, [ 88 ] [ 89 ] the plant began operations in 1978. [ 90 ] Additional dispersion production transferred from Harlow to Stallingborough in 1991. [ 91 ]
Revertex was acquired by Yule Catto in 1981. [ 92 ] Doverstrand Ltd. (then a Reichhold Chemicals /Yule Catto jv) was renamed Synthomer Ltd. in 1995. [ 93 ] In 2001 Yule Catto took over Harco, acquiring the 50% shareholding of partner Clariant , [ 94 ] and merged the business into its Synthomer subsidiary in 2002, resulting in the merger of the adjacent Synthomer and Harco activities at Stallingborough. [ 95 ] [ map 11 ]
Latex production ended late 2011, and further adhesive chemical production facilities were established at the site c. 2012 . [ 91 ] [ 96 ]
In 2007 construction of a hydrodesulfurization unit and steam methane reformer began. [ 99 ] In 2009 workers at the plant went on strike over the preferential employment of foreign workers, leading to a series of sympathy walkouts at other UK chemical, energy and petroleum plants (see 2009 Lindsey Oil Refinery strikes ). The strike delayed the installation of the desulphurisation unit by 6 months. [ 100 ] A fire and explosion occurred at the plant in 2010, [ 101 ] killing one worker. [ 102 ] The fire further delayed the desulphurisation unit, [ 103 ] which was officially inaugurated in 2011. [ 104 ]
In 2010 Total announced it planned to sell the refinery, citing overcapacity; [ 105 ] by late 2011 the company had failed to sell the plant, and halted the sales process. [ 106 ]
The Tetney monobuoy (operational 1971 [ 107 ] ), an SBM in the Humber Estuary, is used to discharge oil tankers, with the oil stored at the Tetney Oil Terminal , [ map 14 ] and transferred via pipeline. [ 108 ]
A fire and explosion occurred at the plant in 2001. [ 109 ]
The owner Humber Power Limited was a venture of Midland Power , ABB Energy Ventures, Tomen Group , British Energy and TotalFinaElf . Ownership was consolidated in TotalFinaElf, who sold 60% to GB Gas Holdings Ltd., a subsidiary of Centrica (2001). [ 111 ] In 2005 Centrica took 100% ownership of the plant. [ 110 ]
In early 2014 Centrica began to seek buyers for a number of its gas power plants, including its South Humber and Killingholme plants, [ 112 ] in early 2015 it decided to retain the plant, but sought to reduce the output from 1,285 to 540 MW from April 2015. [ 113 ] In July 2015 Centrica announced it was to overhaul the gas turbines at a cost of £63 million, increasing total capacity by 14 MW. [ 114 ]
In 2009 the plant was expanded, raising generating capacity from 730 to 1,180 MW, with one 285 MW GE 9FB gas turbine and a 200 MW Toshiba steam turbine driven via an HRSG. [ 30 ] Energy production at the plant is primarily determined by heat supply requirements. [ 30 ]
In 2013 Vitol acquired the plant through acquisition of Phillips 66 subsidiary Phillips 66 Power Operations Ltd. ; the plant was renamed Immingham CHP . [ 115 ]
In 2000 NRG Energy acquired the plant for £410 million, [ 116 ] and in 2004 Centrica acquired the plant for £142 million after a fall in electricity prices. [ 117 ] [ 118 ]
In early 2014 Centrica began to seek buyers for a number of its gas power plants, including its South Humber and Killingholme plants, [ 119 ] and in early 2015 began discussion on the closure of the plant, having received no acceptable bids for the plant. [ 120 ]
Sometimes referred to as Killingholme A . [ 27 ] [ 121 ]
In 1996 a water cooling system was fitted to the plant, designed to reduce plume formation. [ 122 ] In 2002 the plant was mothballed due to low electricity prices; the plant was restarted in 2005. [ 123 ]
In June 2015 E.On announced it was to close the powerstation. [ 124 ]
Sometimes referred to as Killingholme B . [ 27 ] [ 121 ]
Barton upon Humber dates to the pre- Norman Conquest period, and was the location of a ferry crossing of the Humber from at least that period. [ 125 ] The town was once an important port, but declined after the establishment of Kingston upon Hull ( c. 1300 ). [ 126 ] The town remained an important port for north Lincolnshire, and in 1801 had a population of around 1,700, more than Grimsby. [ 127 ] Due to the presence of suitable soil, brick and tile making was carried out in the Barton area; in the 1840s one tilery had been established for over a hundred years; chalk was also quarried in the area, from at least 1790. Other industries in 1840 included whiting manufacture, rope making, tanning, plus trade in agricultural produce. [ 128 ]
At Barton upon Humber clay had been extracted for tile making since at least the 18th century. [ 128 ] Several brick and tile manufacturers operated during the 19th century, with growth stimulated in part by the end of the Brick tax in 1850. By 1892 works included Ness End , West Field , Humber Brick and Tile , Barton , Morris's , Dinsdale-Ellis-Wilson , Garside's , Blyth's Ing , Burton's , Mackrill's (Briggs) , Pioneer , Hoe Hill and Spencer's . The works extended along most of the Humber bank from Barton Cliff, around 1 mile west of Barton Haven, to Barrow Haven . The works reduced in number during the first half of the 20th century. [ 129 ] [ 130 ] By the 1970s much of the foreshore had been extracted, and the majority of works were no longer active. [ 131 ] Several of the works had industrial railways, generally connecting the workings to the works; in some cases clay was exported directly, such as that supplied to G.T. Earle 's cement works in Wilmington, Kingston upon Hull from the Humber Brick & Tile works ( c. 1893–1900 ). [ 132 ] Many of the Barton brick and tile works closed in the 1950s. [ 133 ] [ 134 ] As of 2009 the Blyth's tile works at Hoe Hill is still operational, producing tiles using non-modern methods at a small scale. [ 129 ]
The clay extraction and brick and tile industry extended further east along the Humber bank. There were further works at Barrow Haven and New Holland , including the Old Ferry Brick Works and Barrow Tileries (Barrow); the Atlas and New Holland Stock brick works east of Barrow Haven; the Quebec Brick & Tile works approximately 1.5 miles east of New Holland; and scattered works eastward on the bank as far as South Killingholme Haven , as well as brick works at South Ferriby and along the New River Ancholme near Ferriby Sluice . [ 135 ] A site at East Halton was used to supply G.T. Earle's cement works in Stoneferry , [ 136 ] whilst the same firm's Wilmington works was supplied with clay from pits near North Killingholme Haven (1909–13), and later from pits between Barton and Barrow on Humber (1913–69). [ 137 ]
In the 1890s George Henry Skelsey used funds from a public listing of his company to build a cement plant, Port Adamant Works , at Barton, west of the Haven, replacing a site at Morley Street, Stoneferry , Hull, which he had acquired in 1885. Clay and chalk for the process were sourced on site, with chalk brought from the New Cliff chalk quarry [ map 19 ] by a short narrow gauge rail line. Initially the plant had a capacity of around 330 t per week, using chamber kilns , supplemented around 1901 by shaft kilns that increased weekly capacity by around 320 t. In 1911 the company became part of British Portland Cement Manufacturers , and a rotary kiln was installed in 1912 replacing the earlier kilns. The plant closed in 1927 after the parent company's acquisition and establishment of the large Humber Cement Works and Hope Cement Works . [ 138 ] [ 139 ]
Adjacent west of the New Cliff quarry was Barton Cliff Quarry , [ map 20 ] (chalk) also connected by a short rail line to the Humber foreshore; the quarry closed 1915. [ 139 ] To the south west was Leggott's Quarry , [ map 21 ] (also known as "Ferriby Quarry"), also connected by a short rail line to the foreshore. [ 140 ] The two quarries supplied chalk, including to G.T. Earle's Stoneferry and Wilmington plants respectively. [ 136 ] [ 137 ]
In 1938 Eastwoods Ltd established a cement works near South Ferriby, west of Ferriby Sluice. [ map 22 ] The initial plant consisted of a single 200.1 by 8.2 feet (61 by 2.5 m) rotary kiln with an output of around 200 t per day by the wet process . Chalk was supplied from a quarry, Middlegate quarry , [ map 23 ] south east of South Ferriby, where it was crushed and then transported to the cement plant by aerial ropeway , whilst clay was supplied from land adjacent west of the plant. In 1962 the plant became part of Rugby Portland Cement Co. Ltd . In 1967 a semi- dry process rotary kiln was installed and the first kiln ceased operating. In 1974 the excavations at the chalk quarry were extended below the chalk, through a relatively thin layer of red chalk and carrstone, to the underlying clay, which was also extracted for use in the process; a conveyor belt system was installed to transport the materials to the plant, and the clay extraction west of the plant then ceased. A second rotary kiln was added in 1978. Ownership passed to Rugby Group (1979), RMC (2000), and then to CEMEX in 2005. [ 141 ]
A modern tile manufacturer, Goxhill Tileries (as of 2014 part of the Wienerberger group via Sandtoft ), is located east of New Holland and north of Goxhill (near the former Quebec brickyard). [ map 24 ] The company Sandtoft was established in 1904 as a brick maker, and started tile production at Goxhill in 1934. Concrete tile manufacturing capacity was expanded during the 20th century. [ 142 ] [ 143 ]
A fertilizer works was established at Barton, near the river bank east of the Haven, in 1874 by "The Farmers Company". [ 144 ] In 1968 the owner A.C.C. ( Associated Chemical Companies ) established new chemically based fertilizer production at the site, [ map 25 ] including a 180 t per day nitric acid plant, a 317 t per day ammonium nitrate plant, and a 475 t per day fertilizer plant. [ 145 ] In 1965 A.C.C. had become a full subsidiary of Albright and Wilson , including the Barton plant. [ 146 ]
The fertilizer business of Albright and Wilson was acquired by ICI in 1983. [ 147 ] Loss of UK market share caused ICI to close the plant in the late 1980s, along with other fertilizer production facilities. [ 42 ]
Subsequently, the site was sold to Glanford borough council, and later redeveloped, together with former brick yards, as a park, Water's Edge .
In 1992 Kimberly-Clark established a large nappy mill outside Barton upon Humber, [ map 26 ] built at a cost of about £100,000, for the manufacture of Huggies nappies. [ 148 ] The plant was closed in 2013, due to the company ceasing most of its production of nappies in the European market. [ 149 ] In August 2013 Wren Kitchens took over the 180 acres (73 ha) site and began conversion of the 750,000 square feet (70,000 m 2 ) factory space into head offices, plus manufacturing and warehousing. [ 150 ] In April 2020, Wren began an extension project to its facility at a cost of £130 million.
There are also private wharfs at Barton-upon-Humber (Waterside), [ map 27 ] Barrow Haven, [ map 28 ] and New Holland. [ map 29 ] [ 151 ]
At the Barton foreshore directly west of Barton Haven the brick works had been closed and demolished by 1955, and an extension of a fertilizer works, BritAg, was built on the site. After closure the site was acquired by Glanford Borough Council in c. 1990 from ICI for £335,000, [ note 4 ] indemnifying the company from any responsibility for cleaning up the site. Initially the council planned to reclaim and clean up the land and establish an industrial estate on the site. The local authority failed to gain funding for the redevelopment and cleanup, and in 1996 Glanford's successor, North Lincolnshire Council, inherited the site and terminated the redevelopment plans due to their cost, instead undertaking to clean up the site and create a 'water park'. After remediation of the harmful chemical residues from the fertilizer operation the site was converted into an 86 acres (35 ha) country park, Water's Edge , [ map 30 ] incorporating the worked-out clay pits as reed beds. [ 134 ] [ 152 ] [ 153 ]
Tile and brickyards east of Barton Haven which were abandoned in the 1950s now form part of the 100 acres (40 ha) Far Ings National Nature Reserve , [ map 31 ] established in 1983 by the Lincolnshire Wildlife Trust. [ 133 ] [ 154 ]
From 2013/4 Leggott's (or Ferriby) quarry has been reused as an airsoft recreation site. [ 155 ] [ 156 ] [ 157 ]
| https://en.wikipedia.org/wiki/Synthomer,_Stallingborough
In retrosynthetic analysis , a synthon is a hypothetical unit within a target molecule that represents a potential starting reagent in the retroactive synthesis of that target molecule. The term was coined in 1967 by E. J. Corey . [ 1 ] He noted in 1988 that the "word synthon has now come to be used to mean synthetic building block rather than retrosynthetic fragmentation structures". [ 2 ] It was noted in 1998 [ 3 ] that the phrase did not feature very prominently in Corey's 1981 book The Logic of Chemical Synthesis , [ 4 ] as it was not included in the index. Because synthons are charged, an uncharged, commercially available synthetic equivalent is used in practice rather than forming the potentially very unstable charged synthon itself.
In planning the synthesis of phenylacetic acid , two synthons are identified: a nucleophilic "COOH − " group and an electrophilic "PhCH 2 + " group. Neither synthon exists by itself; instead, synthetic equivalents corresponding to the synthons are reacted to produce the desired product. In this case, the cyanide anion is the synthetic equivalent for the COOH − synthon, while benzyl bromide is the synthetic equivalent for the benzyl synthon.
The synthesis of phenylacetic acid determined by retrosynthetic analysis is thus:
PhCH 2 Br + NaCN → PhCH 2 CN + NaBr
PhCH 2 CN + 2 H 2 O → PhCH 2 COOH + NH 3
where Ph stands for phenyl .
This term is also used in the field of gene synthesis—for example "40-base synthetic oligonucleotides are built into 500- to 800-bp synthons". [ 5 ]
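As a toy illustration of how such a synthon is tiled from short oligonucleotides (a simplified sketch, not a real assembly protocol; the target sequence, oligo length and overlap below are arbitrary choices):

```python
def tile_oligos(seq, oligo_len=40, overlap=20):
    """Split a target sequence into overlapping oligos for assembly.

    Consecutive oligos share `overlap` bases, so each new oligo
    advances by (oligo_len - overlap) bases along the target.
    """
    step = oligo_len - overlap
    oligos = []
    for start in range(0, len(seq) - overlap, step):
        oligos.append(seq[start:start + oligo_len])
    return oligos

# Example: a 120-base placeholder target (a real synthon would be 500-800 bp).
target = ("ATGGCTAGCAAGGAGGTACCT" * 6)[:120]
for i, oligo in enumerate(tile_oligos(target)):
    print(f"oligo {i:2d} ({len(oligo)} nt): {oligo}")
```

The printed 40-mers overlap by 20 bases each, which is what lets them anneal and be extended or ligated into the longer synthon.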
Many retrosynthetic disconnections important for organic synthesis planning use carbocationic synthons. Carbon-carbon bonds , for example, exist ubiquitously in organic molecules , and are usually disconnected during a retrosynthetic analysis to yield carbocationic and carbanionic synthons. Carbon- heteroatom bonds, such as those found in alkyl halides , alcohols , and amides , can also be traced backwards retrosynthetically to polar C-X bond disconnections yielding a carbocation on carbon. Oxonium and acylium ions are carbocationic synthons for carbonyl compounds such as ketones , aldehydes and carboxylic acid derivatives. An oxonium-type synthon was used in a disconnection en route [ clarification needed ] to the hops ether [ clarification needed ] , [ 6 ] a key component of beer. In the forward direction, the researchers used an intramolecular aldol reaction catalyzed by titanium tetrachloride to form the tetrahydrofuran ring of hops ether.
Another common disconnection that features carbocationic synthons is the Pictet-Spengler reaction . The mechanism of the reaction involves C-C pi-bond attack onto an iminium ion, usually formed in situ from the condensation of an amine and an aldehyde. The Pictet-Spengler reaction has been used extensively for the synthesis of numerous indole and isoquinoline alkaloids. [ 7 ]
Carbanion alkylation is a common strategy used to create carbon-carbon bonds. The alkylating agent is usually an alkyl halide or an equivalent compound with a good leaving group on carbon. Allyl halides are particularly attractive for S N 2 -type reactions due to the increased reactivity added by the allyl system. Celestolide (4-acetyl-6-t-butyl-1,1-dimethylindane, a component of musk perfume) can be synthesized using a benzyl anion alkylation with 3-chloro-2-methylprop-1-ene as an intermediate step. [ 8 ] The synthesis is fairly straightforward, and has been adapted for teaching purposes in an undergraduate laboratory. | https://en.wikipedia.org/wiki/Synthon |
Synthon is a Dutch multinational that produces generic human drugs .
The company was founded in 1991 by Jacques Lemmens and Marijn Oosterbaan, two organic chemists of the Radboud University Nijmegen . Synthon is active in the Netherlands , the Czech Republic , Spain , the United States , Argentina , Chile , Russia , Mexico and South Korea with about 1,500 employees. The company is headquartered in Nijmegen .
Medications made by Synthon include:
The products are marketed by partners of the company. The name Synthon is not mentioned on the packaging.
In 2007 the company started developing biopharmaceuticals . [ 3 ]
In May 2012 Synthon announced that it bought the Biolex LEX System for manufacturing biopharmaceuticals in Lemna . The sale also included two preclinical biologics made with the LEX System, BLX-301, a humanized and glyco-optimized anti- CD20 antibody for non-Hodgkin's B-cell lymphoma and other B-cell malignancies and BLX-155, a direct-acting thrombolytic . The financial terms of the sale were not disclosed. [ 4 ] | https://en.wikipedia.org/wiki/Synthon_(company) |
A syntractrix is a curve of the form x + √(b² − y²) = a ln( (b + √(b² − y²)) / y ). [ 1 ]
It is the locus of a point on the tangent of a tractrix at a constant distance from the point of tangency, as the point of tangency is moved along the curve. [ 2 ]
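A minimal numerical sketch of this construction (assuming a standard tractrix parametrization; the parameter values are illustrative, and b here measures the traced point's distance from the tangent's foot on the asymptote rather than from the point of tangency):

```python
import numpy as np

# Generate points on a syntractrix and check them against the implicit
# equation  x + sqrt(b^2 - y^2) = a * ln((b + sqrt(b^2 - y^2)) / y),
# where a is the tractrix's constant tangent length. A point carried
# along each tangent at fixed distance (a - b) from the point of
# tangency lies at distance b from the tangent's foot.
a, b = 2.0, 1.5

def syntractrix_point(t):
    """Parametric point: x = a*t - b*tanh(t), y = b*sech(t)."""
    return a * t - b * np.tanh(t), b / np.cosh(t)

for t in np.linspace(0.2, 3.0, 5):
    x, y = syntractrix_point(t)
    s = np.sqrt(b**2 - y**2)          # equals b*tanh(t) for t > 0
    lhs = x + s
    rhs = a * np.log((b + s) / y)
    print(f"t={t:.2f}  point=({x:+.4f}, {y:.4f})  residual={lhs - rhs:+.2e}")
```

The printed residuals sit at machine precision, confirming that the parametric points satisfy the implicit equation above.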
| https://en.wikipedia.org/wiki/Syntractrix
In biology , syntrophy , [ 1 ] [ 2 ] [ 3 ] [ 4 ] syntrophism , [ 1 ] [ 5 ] [ 6 ] or cross-feeding [ 1 ] (from Greek syn ' together ' and trophe ' nourishment ' ) is the cooperative interaction between at least two microbial species to degrade a single substrate . [ 2 ] [ 3 ] [ 4 ] [ 7 ] This type of biological interaction typically involves the transfer of one or more metabolic intermediates between two or more metabolically diverse microbial species living in close proximity to each other. [ 3 ] [ 5 ] Thus, syntrophy can be considered an obligatory interdependency and a mutualistic metabolism between different microbial species, wherein the growth of one partner depends on the nutrients , growth factors , or substrates provided by the other(s). [ 8 ] [ 9 ]
Syntrophy is often used synonymously with mutualistic symbiosis , especially between at least two different bacterial species. Syntrophy differs from symbiosis in that a syntrophic relationship is primarily based on closely linked metabolic interactions that maintain a thermodynamically favorable lifestyle in a given environment. [ 10 ] [ 11 ] [ 12 ] Syntrophy plays an important role in a large number of microbial processes, especially in oxygen-limited environments, methanogenic environments and anaerobic systems. [ 13 ] [ 14 ] In anoxic or methanogenic environments such as wetlands, swamps, paddy fields, landfills, the digestive tracts of ruminants , and anaerobic digesters, syntrophy is employed to overcome the energy constraints, as the reactions in these environments proceed close to thermodynamic equilibrium . [ 9 ] [ 14 ] [ 15 ]
The main mechanism of syntrophy is removing the metabolic end products of one species so as to create an energetically favorable environment for another species. [ 15 ] This obligate metabolic cooperation is required to facilitate the degradation of complex organic substrates under anaerobic conditions. Organic compounds such as ethanol, propionate , butyrate , and lactate cannot be directly used as substrates for methanogenesis by methanogens. [ 9 ] On the other hand, fermentation of these organic compounds cannot occur in fermenting microorganisms unless the hydrogen concentration is reduced to a low level by the methanogens. The key mechanism that ensures the success of syntrophy is interspecies electron transfer. [ 16 ] Interspecies electron transfer can be carried out in three ways: interspecies hydrogen transfer , interspecies formate transfer and interspecies direct electron transfer. [ 16 ] [ 17 ] Reverse electron transport is prominent in syntrophic metabolism. [ 13 ]
The metabolic reactions involved in syntrophic degradation with H 2 consumption, and the energy changes associated with them, are illustrated by the classic example below. [ 18 ]
A classical syntrophic relationship can be illustrated by the activity of Methanobacillus omelianskii . It was isolated several times from anaerobic sediments and sewage sludge and was regarded as a pure culture of an anaerobe converting ethanol to acetate and methane. In fact, however, the culture turned out to consist of a methanogenic archaeon, "organism M.o.H.", and a Gram-negative bacterium, "organism S", which together oxidize ethanol to acetate and methane via interspecies hydrogen transfer . Individuals of organism S are obligate anaerobic bacteria that use ethanol as an electron donor , whereas M.o.H. is a methanogen that oxidizes hydrogen gas to produce methane. [ 18 ] [ 19 ] [ 9 ]
Organism S: 2 Ethanol + 2 H 2 O → 2 Acetate − + 2 H + + 4 H 2 (ΔG°' = +9.6 kJ per reaction)
Strain M.o.H.: 4 H 2 + CO 2 → Methane + 2 H 2 O (ΔG°' = -131 kJ per reaction)
Co-culture: 2 Ethanol + CO 2 → 2 Acetate − + 2 H + + Methane (ΔG°' = -113 kJ per reaction)
The oxidation of ethanol by organism S is made possible by the methanogen M.o.H., which consumes the hydrogen produced by organism S, turning the positive Gibbs free energy of ethanol oxidation into a negative overall Gibbs free energy. This situation favors growth of organism S and also provides energy for the methanogen by supplying hydrogen. Acetate accumulation is likewise prevented further down the line by a similar syntrophic relationship. [ 18 ] Syntrophic degradation of substrates like butyrate and benzoate can also happen without hydrogen consumption. [ 15 ]
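The quantitative basis of the hydrogen effect is the standard concentration dependence of the Gibbs free energy (a textbook relation, not taken from the cited studies). For the ethanol oxidation above,

```latex
\Delta G = \Delta G^{\circ\prime}
  + RT \ln \frac{[\mathrm{Acetate}^-]^2\,[\mathrm{H}^+]^2\,p_{\mathrm{H_2}}^{4}}
                {[\mathrm{Ethanol}]^2}
```

(with water activity taken as 1). The fourth-power dependence on the hydrogen partial pressure means that methanogenic consumption of H 2 pulls ΔG far below zero even though ΔG°' is positive, which is why the co-culture can grow where neither partner could alone.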
An example of propionate and butyrate degradation with interspecies formate transfer is provided by the mutual system of Syntrophomonas wolfei and Methanobacterium formicicum . [ 16 ]
Direct interspecies electron transfer (DIET), which involves electron transfer without any electron carrier such as H 2 or formate, was reported in the co-culture system of Geobacter metallireducens and Methanosaeta or Methanosarcina . [ 16 ] [ 20 ]
The defining feature of ruminants , such as cows and goats, is a stomach called a rumen . [ 21 ] The rumen contains billions of microbes, many of which are syntrophic. [ 14 ] [ 22 ] Some anaerobic fermenting microbes in the rumen (and other gastrointestinal tracts) are capable of degrading organic matter to short chain fatty acids and hydrogen. [ 14 ] [ 9 ] The accumulating hydrogen inhibits the microbes' ability to continue degrading organic matter, but the presence of syntrophic hydrogen-consuming microbes allows continued growth by metabolizing the waste products. [ 22 ] In addition, fermentative bacteria gain maximum energy yield when protons are used as the electron acceptor, with concurrent H 2 production. Hydrogen-consuming organisms include methanogens , sulfate-reducers, acetogens , and others. [ 23 ]
Some fermentation products, such as fatty acids longer than two carbon atoms, alcohols longer than one carbon atom, and branched-chain and aromatic fatty acids, cannot directly be used in methanogenesis . [ 24 ] In acetogenesis processes, these products are oxidized to acetate and H 2 by obligate proton-reducing bacteria in syntrophic relationship with methanogenic archaea , as a low H 2 partial pressure is essential for acetogenic reactions to be thermodynamically favorable (ΔG < 0). [ 25 ]
Syntrophic microbial food webs play an integral role in bioremediation , especially in environments contaminated with crude oil and petrol. Environmental contamination with oil is of high ecological importance and can be effectively remediated through syntrophic degradation involving complete mineralization of alkane , aliphatic and hydrocarbon chains. [ 26 ] [ 27 ] The hydrocarbons of the oil are broken down after activation by fumarate , a chemical compound that is regenerated by other microorganisms. [ 26 ] Without regeneration, the microbes degrading the oil would eventually run out of fumarate and the process would cease. This breakdown is crucial in the processes of bioremediation and global carbon cycling. [ 26 ]
Syntrophic microbial communities are key players in the breakdown of aromatic compounds , which are common pollutants. [ 27 ] The degradation of aromatic benzoate to methane produces intermediate compounds such as formate , acetate , CO 2 and H 2 . [ 27 ] The buildup of these products makes benzoate degradation thermodynamically unfavorable. These intermediates can be metabolized syntrophically by methanogens, which makes the degradation process thermodynamically favorable. [ 27 ]
Studies have shown that bacterial degradation of amino acids can be significantly enhanced through the process of syntrophy. [ 28 ] Microbes growing poorly on the amino acid substrates alanine , aspartate , serine , leucine , valine , and glycine can have their rate of growth dramatically increased by syntrophic H 2 scavengers. These scavengers, like Methanospirillum and Acetobacterium , metabolize the H 2 waste produced during amino acid breakdown, preventing a toxic build-up. [ 28 ] Another way to improve amino acid breakdown is through interspecies electron transfer mediated by formate; species like Desulfovibrio employ this method. [ 28 ] Amino acid fermenting anaerobes such as Clostridium species, Peptostreptococcus asaccharolyticus and Acidaminococcus fermentans are known to break down amino acids like glutamate with the help of hydrogen-scavenging methanogenic partners, without going through the usual Stickland fermentation pathway. [ 14 ] [ 28 ]
Effective syntrophic cooperation between propionate-oxidizing bacteria, acetate-oxidizing bacteria and H 2 /acetate-consuming methanogens is necessary to successfully carry out anaerobic digestion to produce biomethane. [ 4 ] [ 18 ]
Many symbiogenetic models of eukaryogenesis propose that the first eukaryotic cells were derived from endosymbiosis facilitated by microbial syntrophy between prokaryotic cells. Most of these models involve an archaeon and an alphaproteobacterium , where the dependence of the archaeon on the alphaproteobacterium leads the former to engulf the latter, the alphaproteobacterium then eventually becoming the mitochondria . While these models share the concept of syntrophic interaction as a key driver of endosymbiosis , they often differ on the exact nature of the metabolic interactions involved and the mechanisms by which eukaryogenesis occurred.
In 1998, William F. Martin and Miklós Müller introduced the hydrogen hypothesis, proposing that eukaryotes arose from syntrophic associations based on the transfer of H 2 . [ 29 ] In this model, a syntrophic association arose in which an anaerobic autotrophic methanogenic archaeon was dependent on the H 2 made as a byproduct of anaerobic respiration by a facultatively anaerobic alphaproteobacterium . [ 29 ] This syntrophy led the alphaproteobacterium to become an endosymbiont of the archaeon , serving as the precursor to the mitochondria .
Dennis Searcy proposed that the precursors to mitochondria were parasitic bacteria that developed a syntrophy with their hosts based upon the transfer of organic acids, H 2 transfer, and the reciprocal exchange of sulfur compounds. [ 30 ]
The reverse flow model was created based on the metabolic analysis of Asgard archaea , which is thought to be the kingdom from which eukaryotes emerged. [ 31 ] [ 32 ] [ 33 ] This model proposes that a syntrophic association arose where anaerobic ancestral Asgard archaea generated and provided reducing equivalents that facultative anaerobic alphaproteobacteria used in the form of H 2 , small reduced compounds, or by direct electron transfer. [ 31 ]
The Entangle-Engulf-Endogenize (E3) model was created in 2020 based on the isolation of syntrophic archaea from deep sea marine sediment. [ 34 ] Unlike most other symbiogenetic models, the E3 model involves three separate types of microbes: a fermentative archaeon , a facultatively aerobic organotroph (which acts as the precursor of the mitochondria), and sulfur-reducing bacteria (SRB). [ 34 ] This model proposes that, originally, the fermentative archaeon degraded amino acids via syntrophic association with the SRB and the facultatively aerobic organotroph . [ 34 ] As oxygen levels began to rise, however, the interaction with the facultatively aerobic organotroph (which is thought to have made the archaeon more aerotolerant) became stronger until the organotroph was engulfed (a process facilitated by syntrophic interaction with the SRB ). [ 34 ] Additionally, the E3 model suggests that, instead of phagocytizing the facultatively aerobic organotroph , the archaeon used extracellular structures to enhance interactions and engulf it. [ 34 ]
The syntrophy hypothesis was proposed in 2001 by researchers Purificación López-García and David Moreira before being refined in 2020 by the same researchers. [ 35 ] [ 36 ] Similarly to the E3 model, the syntrophy hypothesis suggests that eukaryogenesis involved three different types of microbes: a complex sulfate-reducing deltaproteobacterium (the precursor to the cytoplasm ), an H 2 -producing Asgard archaeon (the precursor to the nucleus ), and a facultatively aerobic sulfide-oxidizing alphaproteobacterium (the precursor to mitochondria ). [ 36 ] In this model, the deltaproteobacteria forms syntrophic associations with both the Asgard archaeon (based on the transfer of H 2 ) and the alphaproteobacterium (based on the redox of sulfur), leading both to become endosymbionts of the deltaproteobacteria . [ 36 ] In this now obligatory symbiosis , organic compounds were degraded in the periplasmic space of the deltaproteobacteria before being moved to the archaeon for further degradation. [ 36 ] This interaction drove the periplasm to develop and expand in close proximity with the archaeon to facilitate molecular exchange, resulting in an endomembrane system , transport channels, and the loss of the archaeal membrane. [ 36 ] Ultimately, the archaeon became the nucleus while the periplasmic endomembrane system became the endoplasmic reticulum. [ 36 ] Meanwhile, the consortium lost the metabolic capability for bacterial sulfate reduction and archaeal energy metabolism as it became more reliant on aerobic respiration performed by the alphaproteobacterium which, ultimately, became the mitochondrion . [ 36 ] | https://en.wikipedia.org/wiki/Syntrophy |
The syphon or siphon recorder is an obsolete electromechanical device used as a receiver for submarine telegraph cables invented by William Thomson, 1st Baron Kelvin in 1867. [ 1 ] It automatically records an incoming telegraph message as a wiggling ink line on a roll of paper tape. [ 2 ] Later a trained telegrapher would read the tape, translating the pulses representing the "dots" and "dashes" of the Morse code to characters of the text message.
The syphon recorder replaced Thomson's previous invention, the mirror galvanometer , as the standard receiving instrument for submarine telegraph cables, allowing long cables to be worked using just a few volts at the sending end. The disadvantage of the mirror galvanometer was that it required two operators: one with a steady eye to read and call off the signal, the other to write down the characters received. [ 3 ] Its use spread to ordinary telegraph lines and to radiotelegraphy receivers . A major advantage of the syphon recorder was that no operator had to monitor the line constantly waiting for messages to come in, and the paper tape preserved a record of the actual message before translation to text, so errors in translation could be checked.
The siphon recorder works on the principle of a d'Arsonval galvanometer . A light coil of wire is suspended between the poles of a permanent magnet so it can turn freely. [ 4 ] The coil is attached via two wire linkages to the metal plate siphon support, which pivots on a horizontal suspension thread. From this plate a narrow glass siphon tube hangs down vertically with its end almost touching a paper tape. The paper tape is pulled by motorized rollers at a constant speed under the siphon pen. Ink is drawn up from a reservoir into the tube by siphon action and comes out a tiny orifice in the end of the siphon tube, drawing a line down the moving paper tape. In order not to affect the motion of the coil, the siphon tube itself never touches the paper, only the ink. [ 5 ]
The current from the telegraph line is applied to the coil. The pulses of current representing the Morse code "dots" and "dashes" flowing through the coil create a magnetic field which interacts with the magnetic field of the magnet, creating a torque which causes the coil to rotate slightly about its vertical suspension axis. The wire linkages cause the siphon support plate to rotate about its horizontal axis, swinging the siphon tube across the paper tape. This draws a displacement in the ink line on the tape as long as the current is present in the coil. Thus the ink line on the tape forms a graph of the current in the telegraph line, with displacements representing the "dots" and "dashes" of the Morse code. An operator knowing Morse code later translates the line on the tape to characters of the text message, and types them onto a telegram form.
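In d'Arsonval terms (a textbook relation, not from the cited sources), the suspension's restoring torque balances the magnetic torque on the coil, making the pen deflection proportional to the line current:

```latex
N B A\, i = \kappa\, \theta
  \quad\Longrightarrow\quad
  \theta = \frac{N B A}{\kappa}\, i
```

where N is the number of turns, B the magnet's flux density, A the coil area, κ the torsional stiffness of the suspension, and θ the angular swing that carries the siphon across the tape. This linearity is what makes the ink trace a faithful graph of the telegraph current.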
The siphon and an ink reservoir are together supported by an ebonite bracket, separate from the rest of the instrument, and insulated from it. This separation permits the ink to be electrified to a high potential while the body of the instrument, including the paper and metal writing tablet, are grounded, and at low potential. The tendency of a charged body is to move from a place of higher potential to a place of lower potential, and consequently the ink tends to flow downwards to the writing tablet. The only avenue of escape for it is by the fine glass siphon, and through this it rushes accordingly and discharges itself upon the paper. The natural repulsion between its like-electrified particles causes the shower to issue in spray. As the paper moves over the pulleys a delicate hair line is marked, straight when the syphon is stationary, but curved when the siphon is pulled from side to side by the oscillations of the signal coil.
Power to pull the roll of paper tape through the syphon recorder was usually supplied by one of Froment's mouse mill motors . The motor also drove an electrostatic machine to generate the electricity used to electrify the syphon's ink.
A simpler mechanism, operating quite differently, was developed by Alexander Muirhead . It used a vibrating pen to avoid the problem of the ink sticking to the paper. [ 5 ] The recording pen was suspended on a thin wire and vibrated by an electromagnet mechanism similar to that of an electric bell , so as to break contact with the paper. | https://en.wikipedia.org/wiki/Syphon_recorder |
A Syracuse dish or Syracuse watch glass is a shallow, circular, flat-bottomed dish of thick glass . Usually, it is 67 mm in outer diameter and 52 mm in inner diameter. [ 1 ]
Nathan Cobb , one of the pioneers of nematology in the United States, was the first to suggest using the Syracuse dish for counting nematodes, in 1918. [ 2 ]
It is used as laboratory equipment in biology for either storage or culturing. [ 3 ]
| https://en.wikipedia.org/wiki/Syracuse_dish |
The Syria–Turkey barrier is a border wall and fence under construction along the Syria–Turkey border , built to prevent illegal crossings and smuggling from Syria into Turkey . [ 1 ] [ 2 ]
The barrier on the Syrian border is the third longest wall in the world after the Great Wall of China and the Mexico-United States border wall . [ 3 ]
According to Turkish officials, the border wall was built to increase border security, combat smuggling and reduce illegal border crossings resulting from the Syrian civil war . [ 4 ]
Ankara had launched the construction project in 2015 to increase border security. [ 4 ]
The 828 km (515-mile) [ 5 ] [ 4 ] wall is being built by TOKI , Turkey's state-owned construction enterprise, [ 6 ] and will cover Turkey's entire border with Syria. It will be made of seven-tonne concrete blocks topped with razor wire, standing three metres (9.8 feet) high and two metres (6.6 feet) wide; [ 7 ] it will include 120 border towers in critical locations and a security road [ 1 ] with regular military patrols. [ 7 ] With construction having begun in 2014, [ 8 ] 781 km of the border wall had been completed as of December 2017. [ 9 ] In June 2018, the wall was proclaimed to be finished, with a length of 764 kilometres (475 miles) out of the 911 km Syrian–Turkish border. [ 4 ] In 2017, the Syrian government accused Turkey of building a separation wall , referring to this barrier. [ 10 ]
The barrier's design combines several layers: the physical wall itself, border security measures, electronic surveillance devices, and an advanced technology layer.
The barrier was expected to include 120 border towers in critical locations. [ 1 ]
The construction of Turkey's armored Cobra II military vehicles, which are now being used to patrol the border with Syria, has been funded by the European Union . [ 11 ]
As of December 2017, 781 km of the border wall had been completed; the whole 911 km was expected to be completed by spring 2018. [ 9 ]
In 2017, the Syrian government accused Turkey of building a separation wall , referring to the barrier. [ 10 ] Syrian Foreign Ministry officials claimed Turkish forces and border control guards brought heavy machines and trucks into Syrian territory, particularly in the northern countryside of Hasakah province, making a dirt road and digging a trench while installing cement pillars to build a separation wall. Turkish forces were also claimed to have entered Syrian territory to a depth of 250 meters in the northern countryside of Aleppo province, and to have repeated the move in the northwestern province of Idlib, capturing 2.4 hectares of land with the same aim of building the wall. Syrian government officials have stated that any unilateral international actions taken without the consent of the Syrian government will be treated as violations of Syria's sovereignty. [ 10 ] | https://en.wikipedia.org/wiki/Syria–Turkey_barrier |
In cooking , syrup (less commonly sirup ; from Arabic : شراب ; sharāb , beverage, wine and Latin : sirupus ) [ 1 ] is a condiment that is a thick, viscous liquid consisting primarily of a solution of sugar in water, containing a large amount of dissolved sugars but showing little tendency to deposit crystals . In its concentrated form, its consistency is similar to that of molasses . The viscosity arises from the multiple hydrogen bonds between water and the dissolved sugar, which has many hydroxyl (OH) groups.
A range of syrups are used in food production.
A variety of beverages call for sweetening to offset the tartness of some juices used in the drink recipes. Granulated sugar does not dissolve easily in cold drinks or ethyl alcohol. Since syrups are liquids, they are easily mixed with other liquids in mixed drinks , making them superior alternatives to granulated sugar.
Simple syrup (also known as sugar syrup, or bar syrup) is a basic sugar-and-water syrup. It is used by bartenders as a sweetener to make cocktails, and as a yeast feeding agent in ethanol fermentation .
The ratio of sugar to water is 1:1 by volume for normal simple syrup, but can get up to 2:1 for rich simple syrup. [ 6 ] For pure sucrose the saturation limit is about 5:1 (500 grams (18 oz) sucrose to 100 millilitres (3.5 imp fl oz; 3.4 US fl oz) water).
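As a worked example of these ratios, the following Python sketch converts a target sugar-to-water volume ratio into grams of sugar for a given amount of water. The bulk density used for granulated sucrose is an approximate assumption; packing density varies by brand and grind.

```python
# Minimal sketch: grams of sugar needed for a given syrup ratio and water
# volume, using the 1:1 (simple) and 2:1 (rich) volume ratios described above.

SUCROSE_BULK_DENSITY = 0.85  # g per mL, approximate assumption

def sugar_for_syrup(water_ml, ratio=1.0):
    """Return grams of sugar for `ratio` parts sugar to 1 part water by volume."""
    return water_ml * ratio * SUCROSE_BULK_DENSITY

for name, ratio in [("simple (1:1)", 1.0), ("rich (2:1)", 2.0)]:
    print(f"{name}: {sugar_for_syrup(250, ratio):.0f} g sugar per 250 mL water")
```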
Combining demerara sugar , a type of natural brown sugar, with water in this process produces demerara syrup. Sugar substitutes such as honey or agave nectar can also be used to make syrups. Spices can be added to the ingredients during the process, resulting in a spiced simple syrup.
Gomme syrup (or gum syrup ; gomme is French for "gum") is a boiled mixture of sugar and water, made with the highest possible ratio of sugar to water. [ 7 ] In old recipes, gum arabic is added [ 8 ] in the belief that it prevents the sugar from crystallizing and adds a smooth texture. [ 7 ] Some recipes omit the gum arabic, [ 9 ] and are thus just simple syrup, either because the gum is considered undesirable [ 7 ] or to reduce cost. [ 10 ]
Gomme syrup is an ingredient commonly used in mixed drinks . [ 7 ]
In Japan, liquid sweeteners for iced coffee are called gum syrup , although they are actually simple syrup which contains no gum arabic. [ 11 ] Ingredients vary by brand; some are glucose–fructose syrup , [ 12 ] some are sugar, or blends of both. [ 13 ]
Flavored syrups are made by infusing simple syrups with flavoring agents during the cooking process. A wide variety of flavoring agents can be used, often in combination with each other, such as herbs, spices, or aromatics. For instance, an aromatic syrup is prepared by adding certain quantities of orange flavorings and cinnamon water to simple syrup. This type of syrup is commonly used at coffee bars , especially in the United States , to make flavored drinks. Infused simple syrups can be used to create desserts, or to add sweetness and depth of flavor to cocktails.
Glucose syrups rating over 90 DE ( dextrose equivalent ) are used in industrial fermentation . [ 14 ]
Syrups can be made by dissolving sugar in water or by reducing naturally sweet juices such as cane juice, sorghum juice, or maple sap. Corn syrup is made from corn starch using an enzymatic process that converts it to sugars.
A must weight -type refractometer is used to determine the sugar content in the solution. | https://en.wikipedia.org/wiki/Syrup |
SysML Partners is a consortium of software tool vendors and industry leaders organized in 2003 to create the Systems Modeling Language (SysML), a dialect of UML customized for systems engineering. [ 1 ] The consortium was founded and organized by Cris Kobryn , who previously chaired the UML 1.1 and UML 2.0 specification teams, and Sandy Friedenthal, chair of the OMG Systems Engineering Special Interest Group. The SysML Partners defined SysML as an open source specification, and their specifications include an open source license for distribution and use.
The SysML Partners completed their SysML v. 1.0a specification draft and submitted it to the Object Management Group in November 2005. In recognition of their contributions to modeling, the SysML Partners were named a winner in the "Modeling Category" of the SD Times 100 for 2007. [ 2 ]
| https://en.wikipedia.org/wiki/SysML_Partners |
Windows Sysinternals is a website that offers technical resources and utilities to manage, diagnose, troubleshoot, and monitor a Microsoft Windows environment. [ 1 ] Originally, the Sysinternals website (formerly known as ntinternals [ 2 ] ) was created in 1996 and was operated by the company Winternals Software LP , [ 1 ] which was located in Austin, Texas . It was started by software developers Bryce Cogswell and Mark Russinovich . [ 1 ] Microsoft acquired Winternals and its assets on July 18, 2006. [ 3 ]
The website featured several freeware tools to administer and monitor computers running Microsoft Windows; the software can now be found at Microsoft. The company also sold data recovery utilities and professional editions of its freeware tools.
Winternals Software LP was founded by Cogswell and Russinovich, who sparked the 2005 Sony BMG CD copy protection scandal in an October 2005 posting to the Sysinternals blog. [ 4 ]
On July 18, 2006, Microsoft Corporation acquired the company and its assets. Russinovich explained that Sysinternals would remain active until Microsoft agreed on a method of distributing the tools provided there. [ 5 ] However, NT Locksmith, a Windows password recovery utility, was immediately removed, as was most of the source code that Sysinternals had provided. The Sysinternals website has since been moved to the Windows Sysinternals website and is part of Microsoft Docs . [ 1 ]
In late 2010, Cogswell retired from Sysinternals. [ 6 ]
Windows Sysinternals supplies users with numerous free utilities, most of which are being actively developed by Mark Russinovich and Bryce Cogswell. [ 7 ] These include Process Explorer , an advanced version of Windows Task Manager ; [ 8 ] Autoruns, which Windows Sysinternals claims is the most advanced manager of startup applications; [ 9 ] RootkitRevealer , a rootkit detection utility; [ 10 ] Contig ; PageDefrag ; and a total of 65 other utilities. [ 11 ] NTFSDOS , which allowed NTFS volumes to be read by Microsoft's MS-DOS operating system, is discontinued and no longer available for download. [ 11 ] Many of these utilities are nowadays bundled, for simpler downloading of all or most current versions, in the so-called Sysinternals Suite.
Previously available for download was the Winternals Administrator Pak which contained ERD Commander 2005, Remote Recover 3.0, NTFSDOS Professional 5.0, Crash Analyzer Wizard, FileRestore 1.0, Filemon Enterprise Edition 2.0, Regmon Enterprise Edition 2.0, AD Explorer Insight for Active Directory 2.0, and TCP Tools.
On May 18, 2010, Sysinternals released its first new utility since its acquisition by Microsoft. Named RAMMap, it is a diagnostic utility similar to the memory tab of Windows Resource monitor, but more advanced. RAMMap runs only on Windows Vista and later. [ 12 ] A system event monitoring tool, Sysmon, was released in 2014, which can collect and publish system events that are helpful for security analysis into the Windows Event Log. [ 13 ] [ 14 ]
In November 2018, Microsoft confirmed it is porting Sysinternals tools, including ProcDump and ProcMon , to Linux . [ 15 ]
In April 2006, Geek Squad , a tech support company working in cooperation with Best Buy , was accused of using unlicensed versions of the ERD Commander software. Winternals supplied Best Buy with copies of its software so that Best Buy could evaluate the software while conducting contract negotiations for using it on a permanent basis. When contract talks broke down Best Buy did not notify its Geek Squad Agents to stop using the software and discard all copies. A judge granted a restraining order on April 14, requiring that use of all unlicensed software be stopped, and forcing Best Buy to turn over all copies of Winternals software within 20 days. [ 16 ] After settlement, a version of the Winternals software was released to be used by Geek Squad. [ 17 ] | https://en.wikipedia.org/wiki/Sysinternals |
Sysload Software was a computer software company specializing in systems measurement , performance and capacity management solutions for servers and data centers , based in Créteil , France . It was acquired in September 2009 by ORSYP , a computer software company specializing in workload scheduling and IT operations management, based in La Défense , France .
Sysload was created in 1999 as a result of the split of Groupe Loan System into two distinct entities: Loan Solutions, a developer of financial software, and Sysload Software, a developer of performance management and monitoring software.
As of March 31, 2022, all Sysload products are in end of life. [ 1 ]
Sysload developed a range of performance management and monitoring products.
Sysload products are based on a three-tiered architecture (user interfaces, management modules, and collection and analysis modules) for metric collection that provides detailed information on large and complex environments. Sysload software products are available for various virtualized and physical platforms including VMware , Windows , AIX , HP-UX , Solaris , Linux , IBM i , PowerVM , etc. | https://en.wikipedia.org/wiki/Sysload_Software |
The Sysmex XE-2100 is a haematology automated analyser , used to quickly perform full blood counts and reticulocyte counts. It is made by the Sysmex Corporation .
It can be run on its own, or connected to a blood film making and staining unit. Racks of blood go in on a tray on the right, and come out the left side. The racks hold ten 4.5 mL tubes, and have a notch so they can only go in one way.
As the tubes go through the machine, two are picked up and inverted five times to mix, and the first one is sampled. They are put down again, the rack moves along one space, and two more are picked up and mixed five times; this ensures that each tube is inverted ten times before being sampled.
The caps are left on the tubes as they go through the machine. A piercer takes a sample through the rubber centre while the tube is upside down. EDTA (lavender) tubes are usually used, although citrate (blue top) tubes will also work (though the result must be corrected for dilution).
Paediatric and oversized tubes can be put through manually via a sampler on the left-hand side of the machine.
Data from the XE-2100 can be viewed with a computer program.
This machine can be purchased for around US$107,000.
Blood is sampled and diluted, and moves through a tube thin enough that cells pass by one at a time. Characteristics about the cell are measured using lasers (fluorescence flow cytometry ) or electrical impedance .
Because not everything about the cells can be measured at the same time, blood is separated into a number of different channels. In the XE-2100 there are five different channels: WBC/BASO, DIFF, IMI, RET and NRBC. | https://en.wikipedia.org/wiki/Sysmex_XE-2100 |
The envsys framework is a kernel -level hardware monitoring sensors framework in NetBSD . As of 4 March 2019, the framework is used by close to 85 device drivers to export various environmental monitoring sensors, as evidenced by references to the sysmon_envsys_register [ 1 ] symbol within the sys path of NetBSD, with temperature sensors, ENVSYS_STEMP , [ 2 ] being the most likely type to be exported by any given driver. [ 3 ] : 32 Sensors are registered with the kernel through the sysmon_envsys(9) API. [ 4 ] Consumption and monitoring of sensors from userland is performed with the envstat utility, which uses proplib(3) over ioctl(2) against the /dev/sysmon pseudo-device file, [ 5 ] with the powerd power management daemon, which responds to kernel events by running scripts from /etc/powerd/scripts/ , [ 6 ] [ 7 ] as well as with third-party tools like symon and GKrellM from pkgsrc .
The framework allows the user to amend the monitoring limits specified by the driver, and for the driver to perform monitoring of the sensors in kernel space, or even to programme a hardware chip to do the monitoring for the system automatically. [ 3 ] : §7.1 Two levels of limits are defined: critical and warning , both of which additionally extend to an over and an under categorisation. [ 3 ] : §7.1 If limit thresholds are crossed, a kernel event may be generated, which can be caught in the userland by powerd to execute a pre-defined user script. [ 6 ] [ 7 ] By comparison, in OpenBSD's hw.sensors , the monitoring of user-defined values is performed in userspace by sensorsd .
As of 2019, the framework itself does not facilitate computer fan control , although drivers can still implement interfacing with the fan-controlling capabilities of their chips through other means, for example through a driver-specific sysctl interface, which is the approach taken by the dbcool(4) driver. [ 8 ] However, the drivers for the most popular Super I/O chips like lm(4) and itesio(4) do not implement any fan control at all; in fact, historically, in all of OpenBSD, NetBSD and DragonFly, these drivers do not even report the duty cycle of the fans, only the actual RPM values. [ 9 ] [ 10 ]
The framework has undergone two major revisions: the first version of envsys.h was committed on 15 December 1999, with the envsys.4 man page following on 27 February 2000. Between 2000 and 2007, the manual page for envsys(4) in NetBSD stated that the "API is experimental", and that the "entire API should be replaced by a sysctl(8)", "should one be developed"; [ 11 ] [ 12 ] it can be noted that in 2003 this was the exact approach taken by OpenBSD with sysctl hw.sensors when some of the envsys(4) drivers were ported to OpenBSD. [ 3 ] : §6.1
The second revision came about on 1 July 2007. The serialisation with userland was reimplemented using property lists with the help of NetBSD's new proplib(3) library (the underlying transport layer between the kernel and userland still being done through ioctl ). [ 13 ] [ 3 ]
The envsys framework was the precursor to OpenBSD's sysctl hw.sensors framework in 2003, and many drivers, as well as some sensor types, have been ported back and forth between NetBSD and OpenBSD. Support for sensors of the drive type, similar to the drive type in OpenBSD , was added to NetBSD on 1 May 2007, at the same time as bio(4) and bioctl were ported from OpenBSD to NetBSD. [ 3 ] : §7.1 | https://en.wikipedia.org/wiki/Sysmon_envsys |
sysstat (system statistics) is a collection of performance monitoring tools for Linux . It is available on Unix and Unix-like operating systems . [ 2 ]
Software included in the sysstat package includes sar (and its data collector sadc ), iostat , mpstat and pidstat .
| https://en.wikipedia.org/wiki/Sysstat |
systat is a BSD UNIX console application for displaying system statistics in fullscreen mode using ncurses / curses . It is available on, and by default ships in the base systems of, FreeBSD , NetBSD , OpenBSD and DragonFly BSD . [ 1 ] [ 2 ] [ 3 ] [ 4 ] It was first released as part of 4.3BSD in 1986. [ 5 ]
Both internally and in its user interface, the utility consists of several distinct modules and tabs , referred to as "displays" in FreeBSD, NetBSD and DragonFly, and as "views" in OpenBSD, which are automatically refreshed every specified number of seconds. [ 5 ] These modules cover all system components, including statistics resembling vmstat , iostat and netstat in all of the BSDs, as well as pf and sensors views in some of the BSDs. [ 6 ] [ 7 ] The systat utility is notably absent from OS X , where the GUI-based Activity Monitor performs similar functions.
| https://en.wikipedia.org/wiki/Systat_(BSD) |
System-level simulation (SLS) is a collection of practical methods used in the field of systems engineering , in order to simulate, with a computer , the global behavior of large cyber-physical systems.
Cyber-physical systems (CPS) are systems composed of physical entities regulated by computational elements (e.g. electronic controllers).
System-level simulation is mainly characterized by the simulation of the system as a whole and by the large size and complexity of the systems considered. These two characteristics have several implications in terms of modeling choices (see further ).
System-level simulation has some other characteristics that it shares with CPS simulation in general.
SLS is mainly about computing the evolution over time of the physical quantities that characterize the system of interest, but other aspects can be added like failure modeling or requirement verification .
The main motivation for SLS is the application of the holistic principle to computer simulation, which would state that simulating the system as a whole tells more than simulating parts of the system separately.
Indeed, simulating the different parts of a complex system separately means neglecting all the possible effects of their mutual interactions.
In many applications, these interactions cannot be ignored because of strong dependencies between the parts. For instance, many CPSs contain feedbacks that cannot be broken without modifying the system behavior. Feedbacks can be found in most modern industrial systems, which generally include one or more control systems . Another example of benefits from system level simulations is reflected in the high degree of accuracy (e.g. less than 1% cumulative validation error over 6 months of operation) of such simulations in the case of a solar thermal system. [ 2 ]
On the other hand, simply connecting existing simulation tools, each built specifically to simulate one of the system parts, is not possible for large systems since it would lead to unacceptable computation times.
SLS aims at developing new tools and choosing relevant simplifications in order to be able to simulate the whole cyber-physical system.
SLS has many benefits compared to detailed co-simulation of the system sub-parts.
The results of a simulation at the system level are not as accurate as those of simulations at a finer level of detail but, with adapted simplifications, it is possible to simulate at an early stage, even when the system is not fully specified yet. Early bugs or design flaws can then be detected more easily.
SLS is also useful as a common tool for cross-discipline experts, engineers and managers and can consequently enhance the cooperative efforts and communication.
Improving the quality of exchanges reduces the risk of miscommunication or misconception between engineers and managers, which are known to be major sources of design errors in complex system engineering. [ 3 ]
More generally, SLS should be considered for all applications where only the simulation of the whole system is meaningful and the computation times are constrained.
For instance, simulators for plant operators training must imitate the behavior of the whole plant while the simulated time must run faster than real time.
Cyber-physical systems are hybrid systems , i.e. they exhibit a mix of discrete and continuous dynamics.
The discrete dynamics mostly originates from digital sensing or computational sub-systems (e.g. controllers, computers, signal converters ).
The adopted models must consequently be capable of modeling such a hybrid behavior.
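As an illustration of such hybrid behavior, the following Python sketch simulates a toy thermostat-controlled room: the temperature follows a continuous differential equation while the controller contributes discrete on/off switching. All parameter values are illustrative assumptions, not taken from any real system.

```python
# Minimal sketch of a hybrid (mixed discrete/continuous) model: a room whose
# temperature follows a continuous ODE, driven by a discrete on/off thermostat.

def simulate(t_end=3600.0, dt=1.0):
    temp, heater_on = 15.0, False          # state: continuous + discrete
    ambient, k, heat_rate = 10.0, 0.001, 0.02   # illustrative parameters
    history, t = [], 0.0
    while t < t_end:
        # Discrete dynamics: the controller switches with hysteresis.
        if temp < 19.0:
            heater_on = True
        elif temp > 21.0:
            heater_on = False
        # Continuous dynamics: forward-Euler step of dT/dt.
        dTdt = -k * (temp - ambient) + (heat_rate if heater_on else 0.0)
        temp += dTdt * dt
        history.append((t, temp, heater_on))
        t += dt
    return history

print(simulate()[-1])  # final (time, temperature, heater state)
```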
It is common in SLS to use 0D —sometimes 1D— equations to model physical phenomena with space variables, instead of 2D or 3D equations. The reason for such a choice is the size of the simulated systems, which is generally too large (i.e. too many elements and/or too large space extension) for the simulation to be computationally tractable. Another reason is that 3D models require the detailed geometry of each part to be modeled. This detailed knowledge might not be known to the modeler, especially if the modelling is done at an early step in the development process.
The complexity of large CPSs make them difficult to describe and visualize. A representation that can be arranged so that its structure looks like the structure of the original system is a great help in terms of legibility and ease of comprehension. Therefore, acausal modeling is generally preferred to causal block-diagram modeling. [ 4 ] Acausal modeling is also preferred because component models can be reused, contrary to models developed as block diagrams. [ 4 ]
System-level simulation is used in a variety of industrial domains.
In an early stage of the development cycle, SLS can be used for dimensioning or to test different designs.
For instance, in automotive applications, "engineers use simulation to refine the specification before building a physical test vehicle" . [ 16 ] Engineers run simulations with this system-level model to verify performance against requirements and to optimize tunable parameters.
System-level simulation is used to test controllers connected to the simulated system instead of the real one.
If the controller is a hardware controller like an ECU , the method is called hardware-in-the-loop . If the controller is run as a computer program on an ordinary PC, the method is called software-in-the-loop. Software-in-the-loop is faster to deploy and relaxes the real-time constraint imposed by the use of a hardware controller. [ 17 ]
SLS is used to build plant models that can be simulated fast enough to be integrated in an operator training simulator or in an MPC controller. [ 18 ] Systems with a faster dynamics can also be simulated, like a vehicle in a driving simulator. [ 19 ]
Another example of SLS use is to couple the system-level simulation to a CFD simulation. The system-level model provides the boundary conditions of the fluid domain in the CFD model. [ 20 ]
Specific languages, like SysML or FORM-L, are used for specification and requirement modeling. [ 21 ] They are not meant to model the system physics, but tools exist that can combine specification models with multi-physics models written in hybrid system modeling languages like Modelica. [ 22 ]
If a model is too complex or too large to be simulated in a reasonable time, mathematical techniques can be utilized to simplify the model. For instance, model order reduction gives an approximate model, which has a lower accuracy but can be computed in a shorter time.
Reduced order models can be obtained from finite element models, [ 23 ] and have been successfully used for system-level simulation of MEMS . [ 24 ]
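The following Python sketch illustrates one common projection-based reduction approach, proper orthogonal decomposition: snapshots of a full-order trajectory are factored with the SVD and the dynamics are projected onto the leading modes. The random stable system here is a stand-in for a real full-order (e.g. finite element) model.

```python
import numpy as np

# Minimal sketch of projection-based model order reduction (POD).
rng = np.random.default_rng(0)
n, r, steps = 200, 5, 100                  # full order, reduced order, snapshots
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n)) / np.sqrt(n)  # stable toy system

x = rng.standard_normal(n)
snapshots = []
for _ in range(steps):                     # collect snapshots of x' = A x (Euler)
    x = x + 0.01 * (A @ x)
    snapshots.append(x)
X = np.column_stack(snapshots)

U, s, _ = np.linalg.svd(X, full_matrices=False)
V = U[:, :r]                               # reduced basis: leading r modes
A_r = V.T @ A @ V                          # r x r reduced operator

print(A.shape, "->", A_r.shape)            # (200, 200) -> (5, 5)
```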
SLS can benefit from parallel computing architectures.
For instance, existing algorithms to generate code from high-level modeling languages can be adapted to multi-core processors like GPUs . [ 25 ] Parallel co-simulation is another approach to enable numerical integration speed-ups. [ 26 ] In this approach, the global system is partitioned into sub-systems. The subsystems are integrated independently of each other and are synchronized at discrete synchronization points. Data exchange between subsystems occurs only at the synchronization points. This results in a loose coupling between the sub-systems.
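A minimal Python sketch of this loose-coupling scheme, with toy dynamics: each subsystem integrates independently over a macro step, and coupling values are exchanged only at the synchronization points.

```python
# Minimal sketch of loosely coupled parallel co-simulation: two subsystems are
# integrated independently between synchronization points, using coupling
# inputs frozen at the last synchronization point.

def step_subsystem(state, coupling_input, dt, n_substeps=10):
    # Each solver may use its own (smaller) internal step size.
    h = dt / n_substeps
    for _ in range(n_substeps):
        state += h * (-state + coupling_input)
    return state

x1, x2 = 1.0, 0.0
sync_dt = 0.1
for _ in range(50):
    u1, u2 = x2, x1            # data exchanged at the synchronization point
    # These two calls are independent and could run on separate cores.
    x1 = step_subsystem(x1, u1, sync_dt)
    x2 = step_subsystem(x2, u2, sync_dt)

print(x1, x2)
```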
Optimization can be used to identify unknown system parameters, i.e. to calibrate CPS model , matching the performance to actual system operation. [ 27 ] In cases when exact physical equations governing the processes are unknown, approximate empirical equations can be derived, e.g. using multiple linear regression. [ 28 ]
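For the regression case, a minimal Python sketch using ordinary least squares on synthetic operation data; in practice, the measurements would come from the actual plant.

```python
import numpy as np

# Minimal sketch: calibrate unknown model parameters from measured operation
# data with multiple linear regression (ordinary least squares).
rng = np.random.default_rng(1)
n_obs = 500
inputs = rng.uniform(0, 1, size=(n_obs, 3))           # e.g. flow, temperature, load
true_params = np.array([2.0, -1.0, 0.5])              # unknown in a real setting
output = inputs @ true_params + 0.01 * rng.standard_normal(n_obs)

estimated, *_ = np.linalg.lstsq(inputs, output, rcond=None)
print(estimated)   # close to [2.0, -1.0, 0.5]
```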
If the simulation can be deployed on a supercomputing architecture, many of the modeling choices that are commonly adopted today (see above ) might become obsolete.
For instance, the future supercomputers might be able to "move beyond the loosely coupled, forward-simulation paradigm" . [ 29 ] In particular, "exascale computing will enable a more holistic treatment of complex problems" . [ 29 ] To exploit exascale computers, it will however be necessary to rethink the design of today's simulation algorithms.
For embedded system applications, safety considerations will probably lead the evolution of SLS. For instance, unlike synchronous languages , the modeling languages currently used for SLS (see above ) are not predictable and may exhibit unexpected behaviors. It is then not possible to use them in a safety-critical context.
The languages should be rigorously formalized first. [ 30 ] Some recent languages combine the syntax of synchronous languages for programming discrete components with the syntax of equation-based languages for writing ODEs . [ 31 ] | https://en.wikipedia.org/wiki/System-level_simulation |
System-specific Impulse, I ssp , is a measure that describes the performance of jet propulsion systems. A reference number is introduced, defined as the total impulse, I tot , delivered by the system, divided by the system mass, m PS : I ssp = I tot / m PS .
Because of the resulting dimension, delivered impulse per kilogram of system mass m PS , this number is called ‘System-specific Impulse’. In SI units, impulse is measured in newton-seconds (N·s) and I ssp in N·s/kg.
The I ssp allows a more accurate determination of the propulsive performance of jet propulsion systems than the commonly used Specific Impulse, I sp , which only takes into account the propellant and the thrust engine performance characteristics. Therefore, the I ssp permits an objective and comparative performance evaluation of systems of different designs and with different propellants .
The I ssp can be derived directly from actual jet propulsion systems by determining the total impulse delivered by the mass of contained propellant, divided by the known total (wet) mass of the propulsion system. This allows a quantitative comparison of for example, built systems.
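A minimal Python sketch of this calculation, using the relation I tot = I sp · g0 · m propellant; the two example systems are hypothetical.

```python
# Minimal sketch: System-specific Impulse from measured quantities, assuming
# I_tot = Isp * g0 * m_propellant. Example numbers are hypothetical.

G0 = 9.80665  # standard gravity, m/s^2

def system_specific_impulse(isp_s, m_propellant_kg, m_system_kg):
    """I_ssp in N*s per kg of total (wet) propulsion system mass."""
    total_impulse = isp_s * G0 * m_propellant_kg      # N*s
    return total_impulse / m_system_kg

# Hypothetical chemical vs. electric systems of equal total (wet) mass:
print(system_specific_impulse(isp_s=220, m_propellant_kg=80, m_system_kg=100))
print(system_specific_impulse(isp_s=1600, m_propellant_kg=10, m_system_kg=100))
```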
In addition, the I ssp can be derived analytically, for example for spacecraft propulsion systems, in order to facilitate a preliminary selection of systems (chemical, electrical) for spacecraft missions of given impulse and velocity-increment requirements. A more detailed presentation of derived mathematical formulas for I ssp and their applications for spacecraft propulsion are given in the cited references. [ 1 ] [ 2 ] [ 3 ] In 2019 Koppel and others used I SSP as a criterion in selection of electric thrusters . [ 4 ] | https://en.wikipedia.org/wiki/System-specific_impulse |
The HP LX System Manager is the application manager and GUI for HP LX-series Palmtop computers .
The App Manager page is made up of 2 rows of 8 icons, with an additional shorter row on the next page down by default. (More applications can be added as the user wishes.) The menu bar options that are available can be opened (on an HP 200LX ) by using the Menu key or the Alt key. These include task management, booting out of the GUI into DOS, and opening help for the palmtop.
One of the major flaws of the System Manager is the limited icon space in Application Manager: only 32 icons can be placed there. Some default icons can be deleted to free space, but others are undeletable. [ 1 ] Another item of interest that some people have referred to as a flaw is that the built-in HEXCALC application is missing from the System Manager by default; to add the program to the list, an entry for it must be added manually.
| https://en.wikipedia.org/wiki/System_Manager_(HP_LX) |
The System Power Management Interface ( SPMI ) [ 1 ] is a high-speed, low- latency , bi-directional , two-wire serial bus suitable for real-time control of voltage and frequency scaled multi-core application processors and its power management of auxiliary components. SPMI obsoletes a number of legacy, custom point-to-point interfaces and provides a low pin count, high-speed control bus for up to 4 master and 16 slave devices . [ 2 ] SPMI is specified by the MIPI Alliance (Mobile Industry Processor Interface Alliance).
| https://en.wikipedia.org/wiki/System_Power_Management_Interface |
The System Security Services Daemon ( SSSD ) is software originally developed for the Linux operating system (OS) that provides a set of daemons to manage access to remote directory services and authentication mechanisms. [ 1 ] The beginnings of SSSD lie in the open-source software project FreeIPA (Identity, Policy and Audit). [ 2 ] The purpose of SSSD is to simplify system administration of authenticated and authorised user access involving multiple distinct hosts. [ 3 ] [ 4 ] It is intended to provide single sign-on capabilities to networks based on Unix-like OSs that are similar in effect to the capabilities provided by Microsoft Active Directory Domain Services to Microsoft Windows networks. [ 5 ]
| https://en.wikipedia.org/wiki/System_Security_Services_Daemon |
System Wide Information Management ( SWIM ) is a global Air Traffic Management (ATM) industry initiative to harmonize the exchange of Aeronautical, Weather and Flight information for all Airspace Users and Stakeholders. SWIM is an integral part of the International Civil Aviation Organization (ICAO) Global Air Navigation Plan (GANP) . The GANP defines 4 Performance Improvement Areas (PIA); SWIM resides in PIA 2: Globally interoperable systems and data, where its implementation is further defined in Aviation System Block Upgrades (ASBU) B1-SWIM and B2-SWIM. ASBU B1-SWIM defines SWIM as “a net-centric operation where the air traffic management (ATM) network is considered as a series of nodes, including the aircraft, providing or using information.” It goes on to say: “The sharing of information of the required quality and timeliness in a secure environment is an essential enabler to the ATM target concept.” [ 1 ]
ICAO Annex 3 defines what IWXXM capability is required at different time frames. These capabilities can also be considered in context of the ICAO SWIM-concept (Doc 10039, Manual on System Wide Information Management (SWIM) Concept). [ 2 ]
Eurocontrol initially presented the SWIM system concept to the Federal Aviation Administration (FAA) in 1997, and in 2005 the ICAO Global ATM Operational Concept adopted the SWIM concept to promote information-based ATM integration. SWIM is now part of development projects in the United States ( Next Generation Air Transportation System , or NextGen ), the Middle East (GCAA SWIM Gateway) and the European Union ( Single European Sky ATM Research ).
Within the FAA, the FAA SWIM program is an advanced technology program designed to facilitate greater sharing of ATM system information, such as airport operational status, weather information, flight data, status of special use airspace, and National Airspace System (NAS) restrictions. SWIM will support current and future NAS programs by providing a flexible and secure information management architecture for sharing NAS information. SWIM will use commercial off-the-shelf hardware and software to support a Service Oriented Architecture (SOA) that will facilitate the addition of new systems and data exchanges and increase common situational awareness. SWIM is part of FAA's NextGen , an umbrella term for the ongoing evolution of the United States' NAS from a ground-based system of air traffic control (ATC) to a satellite-based system of air traffic management. The transformation to NextGen requires programs and technologies that provide more efficient operations, including streamlined communications capabilities. The SWIM program is an integral part of that transformation that will connect FAA systems. The SWIM program will also enable interaction with other members of the decision-making community including other government agencies, air navigation service providers, and airspace users. The SWIM program will lead to a variety of benefits. SWIM will help improve aviation safety through increased common situational awareness by allowing more decision makers to access the same information. This will provide consistent information to different users (pilots, controllers, dispatchers) that supports proactive decision-making.
SWIM as a concept is essential to providing the most efficient use of airspace, managing air traffic around weather, and increasing common situational awareness on the ground. SWIM core services will enable systems to request and receive information when they need it, subscribe for automatic receipt, and publish information and services as appropriate. This will provide for sharing of information across different systems. This will allow airspace users and controllers to access the most current information that may be affecting their area of responsibility in a more efficient manner. SWIM will improve decision-making and streamline information sharing for improved planning and execution.
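The request/subscribe/publish interaction described above is the classic publish/subscribe pattern. The following Python sketch illustrates the pattern only; it is not the FAA's implementation, and the topic names and message contents are hypothetical.

```python
# Minimal sketch of the publish/subscribe pattern underlying SWIM core
# services: producers publish updates on named topics, and every consumer
# subscribed to a topic receives them automatically.

from collections import defaultdict

class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers[topic]:
            callback(message)

broker = Broker()
broker.subscribe("airport.status", lambda m: print("dispatcher saw:", m))
broker.subscribe("airport.status", lambda m: print("controller saw:", m))
broker.publish("airport.status", {"airport": "KATL", "status": "open"})
```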
SWIM will also help reduce infrastructure costs by decreasing the number of unique interfaces between systems. Initially, SWIM will provide a common interface framework, reducing the operation and maintenance costs of current interfaces. New systems will interface with each other via SWIM-compliant interfaces, thereby reducing future data interface development costs. Ultimately, redundant data sources will no longer be needed, and associated systems will be decommissioned.
SWIM is one of the key components of the SESAR ( Single European Sky ATM Research ) programme managed by the SESAR Joint Undertaking. | https://en.wikipedia.org/wiki/System_Wide_Information_Management |
A system accident (or normal accident ) is an "unanticipated interaction of multiple failures" in a complex system . [ 1 ] This complexity can either be of technology or of human organizations and is frequently both. A system accident can be easy to see in hindsight, but extremely difficult in foresight because there are simply too many action pathways to seriously consider all of them. Charles Perrow first developed these ideas in the mid-1980s. [ 2 ] Safety systems themselves are sometimes the added complexity which leads to this type of accident. [ 3 ]
Pilot and author William Langewiesche used Perrow's concept in his analysis of the factors at play in a 1996 aviation disaster. He wrote in The Atlantic in 1998: "the control and operation of some of the riskiest technologies require organizations so complex that serious failures are virtually guaranteed to occur." [ 4 ] [ a ]
In 2012 Charles Perrow wrote, "A normal accident [system accident] is where everyone tries very hard to play safe, but unexpected interaction of two or more failures (because of interactive complexity), causes a cascade of failures (because of tight coupling)." Perrow uses the term normal accident to emphasize that, given the current level of technology, such accidents are highly likely over a number of years or decades. [ 5 ] James Reason extended this approach with human reliability [ 6 ] and the Swiss cheese model , now widely accepted in aviation safety and healthcare.
These accidents often resemble Rube Goldberg devices in the way that small errors of judgment, flaws in technology, and insignificant damages combine to form an emergent disaster. Langewiesche writes about "an entire pretend reality that includes unworkable chains of command, unlearnable training programs, unreadable manuals, and the fiction of regulations, checks, and controls." [ 4 ] More formality and effort to get it exactly right can, at times, actually make failure more likely. [ 4 ] [ b ] For example, employees are more likely to delay reporting changes, problems, and unexpected conditions where the organizational procedures involved in adjusting to changing conditions are complex, difficult, or laborious.
A contrasting idea is that of the high reliability organization . [ 7 ] In his assessment of the vulnerabilities of complex systems, Scott Sagan , for example, discusses in multiple publications their robust reliability, especially regarding nuclear weapons. The Limits of Safety (1993) provided an extensive review of close calls during the Cold War that could have resulted in a nuclear war by accident. [ 8 ]
The Apollo 13 Review Board stated in the introduction to chapter five of their report: [emphasis added] [ 9 ]
... It was found that the accident was not the result of a chance malfunction in a statistical sense, but rather resulted from an unusual combination of mistakes, coupled with a somewhat deficient and unforgiving design ...
Perrow considered the Three Mile Island accident normal : [ 10 ]
It resembled other accidents in nuclear plants and in other high risk, complex and highly interdependent operator-machine systems; none of the accidents were caused by management or operator ineptness or by poor government regulation, though these characteristics existed and should have been expected. I maintained that the accident was normal, because in complex systems there are bound to be multiple faults that cannot be avoided by planning and that operators cannot immediately comprehend.
On May 11, 1996, ValuJet Flight 592 , a regularly scheduled ValuJet Airlines flight from Miami International to Hartsfield–Jackson Atlanta, crashed about 10 minutes after taking off as a result of a fire in the cargo compartment caused by improperly stored and labeled hazardous cargo. All 110 people on board died. The airline had a poor safety record before the crash, and the accident brought widespread attention to the airline's management problems, including inadequate training of employees in the proper handling of hazardous materials. The maintenance manual for the MD-80 aircraft documented the necessary procedures and was "correct" in a sense; however, it was so huge that it was neither helpful nor informative. [ 4 ]
In a 2014 monograph, economist Alan Blinder stated that complicated financial instruments made it hard for potential investors to judge whether the price was reasonable. In a section entitled "Lesson # 6: Excessive complexity is not just anti-competitive, it's dangerous", he further stated, "But the greater hazard may come from opacity. When investors don't understand the risks that inhere in the securities they buy (examples: the mezzanine tranche of a CDO-Squared ; a CDS on a synthetic CDO ...), big mistakes can be made–especially if rating agencies tell you they are triple-A, to wit, safe enough for grandma. When the crash comes, losses may therefore be much larger than investors dreamed imaginable. Markets may dry up as no one knows what these securities are really worth. Panic may set in. Thus complexity per se is a source of risk." [ 11 ]
Despite a significant increase in airplane safety since the 1980s, there is concern that automated flight systems have become so complex that they both add to the risks that arise from overcomplication and are incomprehensible to the crews who must work with them. As an example, professionals in the aviation industry note that such systems sometimes switch or engage on their own; crew in the cockpit are not necessarily privy to the rationale for their auto-engagement, causing perplexity. Langewiesche cites industrial engineer Nadine Sarter, who writes about "automation surprises", often related to system modes the pilot does not fully understand or that the system switches to on its own. In fact, one of the more common questions asked in cockpits today is, "What's it doing now?" In response to this, Langewiesche points to the fivefold increase in aviation safety and writes, "No one can rationally advocate a return to the glamour of the past." [ 12 ]
In an article entitled "The Human Factor", Langewiesche discusses the 2009 crash of Air France Flight 447 over the mid-Atlantic. He points out that, since the 1980s, when the transition to automated cockpit systems began, safety has improved fivefold. Langewiesche writes, "In the privacy of the cockpit and beyond public view, pilots have been relegated to mundane roles as system managers." He quotes engineer Earl Wiener, who takes the humorous statement attributed to the Duchess of Windsor that one can never be too rich or too thin, and adds "or too careful about what you put into a digital flight-guidance system." Wiener says that the effect of automation is typically to reduce the workload when it is light, but to increase it when it is heavy.
Boeing engineer Delmar Fadden said that once capacities are added to flight management systems, they become impossibly expensive to remove because of certification requirements; if unused, they may in a sense lurk unseen in the depths. [ 12 ]
Human factors in the implementation of safety procedures play a role in the overall effectiveness of safety systems. Maintenance problems are common with redundant systems. Maintenance crews can fail to restore a redundant system to active status. They may be overworked, or maintenance may be deferred due to budget cuts, because managers know that the system will continue to operate without fixing the backup system. [ 3 ] Steps in procedures may be changed and adapted in practice from the formal safety rules, often in ways that seem appropriate and rational, and may be essential in meeting time constraints and work demands. In a 2004 Safety Science article, reporting on research partially supported by the National Science Foundation and NASA, Nancy Leveson writes: [ 13 ]
However, instructions and written procedures are almost never followed exactly as operators strive to become more efficient and productive and to deal with time pressures ... even in such highly constrained and high-risk environments as nuclear power plants, modification of instructions is repeatedly found and the violation of rules appears to be quite rational, given the actual workload and timing constraints under which the operators must do their job. In these situations, a basic conflict exists between error as seen as a deviation from the normative procedure and error as seen as a deviation from the rational and normally used effective procedure .
Charles Perrow's thinking is more difficult for pilots like me to accept. Perrow came unintentionally to his theory about normal accidents after studying the failings of large organizations. His point is not that some technologies are riskier than others, which is obvious, but that the control and operation of some of the riskiest technologies require organizations so complex that serious failures are virtually guaranteed to occur . Those failures will occasionally combine in unforeseeable ways, and if they induce further failures in an operating environment of tightly interrelated processes, the failures will spin out of control, defeating all interventions. | https://en.wikipedia.org/wiki/System_accident |
System analysis in the field of electrical engineering characterizes electrical systems and their properties. System analysis can be used to represent almost anything from population growth to audio speakers; electrical engineers often use it because of its direct relevance to many areas of their discipline, most notably signal processing , communication systems and control systems .
A system is characterized by how it responds to input signals . In general, a system has one or more input signals and one or more output signals. Therefore, one natural characterization of systems is by how many inputs and outputs they have: single input, single output (SISO); single input, multiple outputs (SIMO); multiple inputs, single output (MISO); and multiple inputs, multiple outputs (MIMO).
It is often useful (or necessary) to break up a system into smaller pieces for analysis. Therefore, we can regard a SIMO system as multiple SISO systems (one for each output), and similarly for a MIMO system. By far, the greatest amount of work in system analysis has been with SISO systems, although many parts inside SISO systems have multiple inputs (such as adders).
Signals can be continuous or discrete in time, as well as continuous or discrete in the values they take at any given time: a signal that is continuous in both time and value is an analog signal, one that is discrete in both is a digital signal, and mixed cases (such as sampled or quantized signals) also occur.
With this categorization of signals, a system can then be characterized as to which type of signals it deals with: analog systems deal with analog signals, digital systems with digital signals, and mixed-signal systems with both.
Another way to characterize systems is by whether their output at any given time depends only on the input at that time (memoryless systems) or also on the input at some time in the past, or even the future (systems with memory).
Analog systems with memory may be further classified as lumped or distributed . The difference can be explained by considering the meaning of memory in a system. Future output of a system with memory depends on future input and a number of state variables, such as values of the input or output at various times in the past. If the number of state variables necessary to describe future output is finite, the system is lumped; if it is infinite, the system is distributed.
Finally, systems may be characterized by certain properties which facilitate their analysis, such as linearity, time invariance and determinism.
There are many methods of analysis developed specifically for linear time-invariant ( LTI ) deterministic systems. Unfortunately, in the case of analog systems, none of these properties is ever perfectly achieved. Linearity implies that the operation of a system can be scaled to arbitrarily large magnitudes, which is not possible. Time-invariance is violated by aging effects that can change the outputs of analog systems over time (usually years or even decades). Thermal noise and other random phenomena ensure that the operation of any analog system will have some degree of stochastic behavior. Despite these limitations, however, it is usually reasonable to assume that deviations from these ideals will be small.
As mentioned above, there are many methods of analysis developed specifically for Linear time-invariant systems (LTI systems). This is due to their simplicity of specification. An LTI system is completely specified by its transfer function (which is a rational function for digital and lumped analog LTI systems). Alternatively, we can think of an LTI system being completely specified by its frequency response . A third way to specify an LTI system is by its characteristic linear differential equation (for analog systems) or linear difference equation (for digital systems). Which description is most useful depends on the application.
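The equivalence of these descriptions can be demonstrated for a simple digital LTI system. The following Python sketch defines a system by the difference equation y[n] = 0.5·y[n-1] + x[n], whose transfer function is H(z) = 1/(1 - 0.5·z^-1), and checks that running the difference equation agrees with convolving the input with the impulse response h[n] = 0.5^n.

```python
import numpy as np

# Minimal sketch: one digital LTI system, described both by its difference
# equation and by convolution with its impulse response.

def difference_eq(x):
    # y[n] = 0.5 * y[n-1] + x[n]
    y, prev = [], 0.0
    for xn in x:
        prev = 0.5 * prev + xn
        y.append(prev)
    return np.array(y)

n = np.arange(20)
h = 0.5 ** n                               # impulse response h[n]
x = np.ones(20)                            # step input
y_direct = difference_eq(x)
y_conv = np.convolve(x, h)[:20]            # same output via convolution

print(np.allclose(y_direct, y_conv))       # True
```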
The distinction between lumped and distributed LTI systems is important. A lumped LTI system is specified by a finite number of parameters, be it the zeros and poles of its transfer function, or the coefficients of its differential equation, whereas specification of a distributed LTI system requires a complete function , or partial differential equations. | https://en.wikipedia.org/wiki/System_analysis |
A system anatomy is a simple visual description of a system, focusing on the dependencies between system capabilities.
The system anatomy was first used in a project at Ericsson , and Jack Järkvik is considered the inventor of the concept. Since then, the system anatomy and the similar project anatomy (also integration anatomies ) have been used widely in different Ericsson projects, and they are now spreading to other major companies with complex system development as well.
Anatomies can be said to be a human centric way of describing a system, since they are used as a means to obtain a common view of the system under development.
The anatomies are especially useful in development of large complex systems in incremental and integration driven development , and as a means to coordinate agile development teams.
The system anatomy, unlike its siblings ( project anatomy , integration anatomy ), is actually just another view of the system, different from product structure modeling , UML diagrams and flowcharts . By focusing on the system's capabilities, both internal and revenue-generating ones, and on the dependencies between them, the development team gets a common picture of the system to be developed that is easier to grasp than many other models.
One of the key features of the system anatomy is its simplicity. That, of course, means that the anatomy cannot replace other models or design tools. It is only another way of describing the design, in SW tools, on paper or in the heads of the system engineers.
The system anatomy can be used as a starting point when creating an integration anatomy (aka project anatomy) that has more use as a project management tool.
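Concretely, the capability dependencies of an anatomy can be captured as a small directed graph and sorted so that the most basic capabilities come first, which is one way to derive an integration order. A minimal Python sketch, with hypothetical capability names:

```python
# Minimal sketch: a system anatomy as a directed graph of capabilities,
# topologically sorted so the most basic capabilities come first (a possible
# integration order). Capability names are hypothetical.

from graphlib import TopologicalSorter

# Each capability maps to the capabilities it depends on.
anatomy = {
    "start system":     [],
    "store issue":      ["start system"],
    "search issues":    ["store issue"],
    "notify assignee":  ["store issue"],
    "generate reports": ["search issues", "notify assignee"],
}

print(list(TopologicalSorter(anatomy).static_order()))
# e.g. ['start system', 'store issue', 'search issues', 'notify assignee', 'generate reports']
```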
The following is a simplified example of a system anatomy for an issue management system. The anatomy is drawn with the most basic capabilities at the top. The notation is that the capability at the end of an arrow depends on the capability at the beginning. In this example the anatomy also shows development progress (blue boxes). | https://en.wikipedia.org/wiki/System_anatomy |
System appreciation is an activity often included in the maintenance phase of software engineering projects. Key deliverables from this phase include documentation that describes what the system does in terms of its functional features, and how it achieves those features in terms of its architecture and design . Software architecture recovery is often the first step within System appreciation. [ 1 ]
| https://en.wikipedia.org/wiki/System_appreciation |
A system camera or camera body is a camera with interchangeable components that constitutes the core of a system. Early representatives include Leica I Schraubgewinde (1930), Exakta (1936) and the Nikon F (1959). System cameras are often single-lens reflex (SLR) or twin-lens reflex (TLR) but can also be rangefinder cameras or, more recently, mirrorless interchangeable-lens cameras . Voice coil motors (VCMs) are used to control the lens movement to achieve fast and accurate autofocus. [ 1 ] The VCM moves the lens elements to focus the light onto the sensor with high precision. [ 2 ]
Systems are usually named for the lens mount , such as the Nikon F-mount, the Canon EF mount , and the M42 mount (a non-proprietary mount using a 42 mm × 1 mm screw thread).
Even point-and-shoot cameras usually include a tripod socket . A system camera includes at the very least a camera body and separate, interchangeable lenses , whence the alternative name interchangeable-lens camera ( ILC ). In addition, it often includes accessories such as interchangeable viewfinders, flash units, and motor drives.
While some early mechanical interfaces are standardized across brands, optical and electronic interfaces are often proprietary . Hot shoes have a common interface for basic flash functions, but often contain proprietary contacts inside for advanced flashes and data modules.
| https://en.wikipedia.org/wiki/System_camera |
A system configuration ( SC ) [ 1 ] in systems engineering defines the computers, processes, and devices that compose the system and its boundary. More generally, the system configuration is the specific definition of the elements that define and/or prescribe what a system is composed of.
Alternatively, the term "system configuration" can be used to relate to a model (declarative) for abstract generalized systems. In this sense, the usage of the configuration information is not tailored to any specific usage, but stands alone as a data set.
A properly-configured system avoids resource-conflict problems, and makes it easier to upgrade a system with new equipment.
The following is a basic SC XML System Configuration:
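A minimal sketch of such a configuration, with illustrative element and attribute names chosen to be consistent with the description that follows:

```xml
<!-- Illustrative sketch only: element and attribute names are assumptions. -->
<site name="MyHouse">
  <host name="myhost">
    <component name="user-setup">
      <user name="mysql" home="/home/mysql" shell="/bin/false"/>
    </component>
    <component name="mysql-db" port="3306" datadir="/var/lib/mysql"/>
  </host>
</site>
```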
Description :
This provides information about a single "site" (MyHouse) and specifies that there is one host with user-setup and mysql-db components. The host must have an account on it for a user named mysql , with appropriate parameters. Notice that the configuration schema requires no XML tags that are Windows - or UNIX -specific. It simply presents data as standalone information – with no pretense for how the data is to be used.
This is the hallmark for a good system configuration model.
The above model can be extended. For example, the user could have more attributes, like "preferences" and "password". The components could depend on other components. Properties can be defined that are passed into sub-elements. The possible extensions are endless (watch out for complexity ) and must be managed and well thought out to prevent "breaking" the idea of the system configuration.
The usage for the model in practical terms falls into several categories: documentation , deployment & operations .
One use of the configuration is to simply record what a system is . This documentation could in turn become quite extensive, thus complicating the data model. It is important to distinguish between configuration data and descriptive data. Comments can be applied at any level in most tools; however, bloating the data can reduce its usefulness. For example, the system configuration is not a place to record historical changes, or descriptions of design and intent for the various elements. The configuration data is simply to be "what it is" or "what we want it to be".
Deployment involves interpreting a configuration data set and acting on that data to configure the system accordingly. This may simply be a validation of what is there, to confirm that the configuration is in effect.
Examples include a Perl library launched from the command line to read the configuration and begin launching processes on the local or remote hosts to install components. Also, while the system is running, there may be a SystemConfiguration service that provides an interface (e.g. CORBA IDL interfaces) for other system applications to use to access the configuration data and perform deployment-like actions.
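As a sketch of that interpretation step (in Python rather than Perl; the element names match the illustrative XML above, and install_component is a hypothetical stand-in for the real work of creating accounts, copying files, or starting processes):

```python
# Hypothetical deployment pass over the configuration sketched earlier.
import xml.etree.ElementTree as ET

def install_component(host_name: str, component_name: str) -> None:
    # A real deployer would create accounts, copy files, or launch
    # processes here; this sketch only reports the planned action.
    print(f"installing {component_name} on {host_name}")

tree = ET.parse("system-configuration.xml")   # assumed file name
for host in tree.getroot().iter("host"):
    for component in host.iter("component"):
        install_component(host.get("name"), component.get("name"))
```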
When the system is in operation, there may be uses for the configuration data by specific kinds of services in the system. For example, a SecurityManager may access the configuration to acquire the MD5 passwords for the user accounts that are allowed to log into hosts remotely. A system monitor service (see: system monitoring ) may use the data to determine "what to monitor" and "how to monitor" the system elements. A PresentationManager might use the data to determine which menu-items and views to present based on user access privileges. | https://en.wikipedia.org/wiki/System_configuration
A computer terminal is an electronic or electromechanical hardware device that can be used for entering data into, and transcribing [ 1 ] data from, a computer or a computing system. [ 2 ] Most early computers only had a front panel to input or display bits and had to be connected to a terminal to print or input text through a keyboard. Teleprinters were used as early-day hard-copy terminals [ 3 ] [ 4 ] and predated the use of a computer screen by decades. The computer would typically transmit a line of data which would be printed on paper, and accept a line of data from a keyboard over a serial or other interface. Starting in the mid-1970s with microcomputers such as the Sphere 1 , Sol-20 , and Apple I , display circuitry and keyboards began to be integrated into personal and workstation computer systems, with the computer handling character generation and outputting to a CRT display such as a computer monitor or, sometimes, a consumer TV, but most larger computers continued to require terminals.
Early terminals were inexpensive devices but very slow compared to punched cards or paper tape for input; with the advent of time-sharing systems, terminals slowly pushed these older forms of interaction from the industry. Related developments were the improvement of terminal technology and the introduction of inexpensive video displays . Early Teletypes printed at a communications speed of only 75 baud (about 10 five-bit characters per second), and by the 1970s the speeds of video terminals had improved to 2400 or 9600 bit/s . Similarly, the speed of remote batch terminals had improved to 4800 bit/s at the beginning of the decade and 19.6 kbit/s by the end of the decade, with higher speeds possible on more expensive terminals.
The function of a terminal is typically confined to transcription and input of data; a device with significant local, programmable data-processing capability may be called a "smart terminal" or fat client . A terminal that depends on the host computer for its processing power is called a " dumb terminal " [ 5 ] or a thin client . [ 6 ] [ 7 ] In the era of serial ( RS-232 ) terminals there was a conflicting usage of the term "smart terminal" as a dumb terminal with no user-accessible local computing power but a particularly rich set of control codes for manipulating the display; this conflict was not resolved before hardware serial terminals became obsolete.
The use of terminals decreased over time as computing shifted from command line interface (CLI) to graphical user interface (GUI) and from time-sharing on large computers to personal computers and handheld devices . Today, users generally interact with a server over high-speed networks using a Web browser and other network-enabled GUI applications, while a terminal emulator application provides the capabilities of a physical terminal, allowing interaction with the operating system shell and other CLI applications.
The console of Konrad Zuse 's Z3 had a keyboard in 1941, as did the Z4 in 1942–1945. However, these consoles could only be used to enter numeric inputs and were thus analogous to those of calculating machines; programs, commands, and other data were entered via paper tape. Both machines had a row of display lamps for results.
In 1956, the Whirlwind Mark I computer became the first computer equipped with a keyboard-printer combination with which to support direct input [ 4 ] of data and commands and output of results. That device was a Friden Flexowriter , which would continue to serve this purpose on many other early computers well into the 1960s.
Early user terminals connected to computers were, like the Flexowriter, electromechanical teleprinters /teletypewriters (TeleTYpewriter, TTY), such as the Teletype Model 33 , originally used for telegraphy ; early Teletypes were typically configured as Keyboard Send-Receive (KSR) or Automatic Send-Receive (ASR). Some terminals, such as the ASR Teletype models, included a paper tape reader and punch which could record output such as a program listing. The data on the tape could be re-entered into the computer using the tape reader on the teletype, or printed to paper. Teletypes used the current loop interface that was already used in telegraphy. A less expensive Read Only (RO) configuration was available for the Teletype.
Custom-designed keyboard/printer terminals that came later included the IBM 2741 (1965) [ 8 ] and the DECwriter (1970). [ 9 ] Respective top speeds of teletypes, the IBM 2741 and the LA30 (an early DECwriter) were 10, 15 and 30 characters per second. Although at that time "paper was king", [ 9 ] [ 10 ] the speed of interaction was relatively limited.
The DECwriter was the last major printing-terminal product. It faded away after 1980 under pressure from video display units (VDUs), with the last revision (the DECwriter IV of 1982) abandoning the classic teletypewriter form for one more resembling a desktop printer.
A video display unit (VDU) displays information on a screen rather than printing text to paper and typically uses a cathode-ray tube (CRT). VDUs in the 1950s were typically designed for displaying graphical data rather than text and were used in, e.g., experimental computers at institutions such as MIT ; computers used in academia, government and business, sold under brand names such as DEC , ERA , IBM and UNIVAC ; military computers supporting specific defence applications such as ballistic missile warning systems and radar/air defence coordination systems such as BUIC and SAGE .
Two early landmarks in the development of the VDU were the Univac Uniscope [ 11 ] [ 12 ] [ 13 ] and the IBM 2260 , [ 14 ] both in 1964. These were block-mode terminals designed to display a page at a time, using proprietary protocols; in contrast to character-mode devices, they enter data from the keyboard into a display buffer rather than transmitting them immediately. In contrast to later character-mode devices, the Uniscope used synchronous serial communication over an EIA RS-232 interface to communicate between the multiplexer and the host, while the 2260 used either a channel connection or asynchronous serial communication between the 2848 and the host. The 2265, related to the 2260, also used asynchronous serial communication.
The Datapoint 3300 from Computer Terminal Corporation , announced in 1967 and shipped in 1969, was a character-mode device that emulated a Model 33 Teletype . This reflects the fact that early character-mode terminals were often deployed to replace teletype machines as a way to reduce operating costs.
The next generation of VDUs went beyond teletype emulation with an addressable cursor that gave them the ability to paint two-dimensional displays on the screen. Very early VDUs with cursor addressability included the VT05 and the Hazeltine 2000 operating in character mode, both from 1970. Despite this capability, early devices of this type were often called "Glass TTYs". [ 15 ] Later, the term "glass TTY" tended to be retrospectively narrowed to devices without full cursor addressability.
The classic era of the VDU began in the early 1970s and was closely intertwined with the rise of time sharing computers . Important early products were the ADM-3A , VT52 , and VT100 . These devices used no complicated CPU , instead relying on individual logic gates , LSI chips, or microprocessors such as the Intel 8080 . This made them inexpensive and they quickly became extremely popular input-output devices on many types of computer system, often replacing earlier and more expensive printing terminals.
After 1970 several suppliers gravitated to a set of common standards, including the ASCII character set, RS-232 serial interfaces, and screens of roughly 24 lines by 80 characters.
The experimental era of serial VDUs culminated with the VT100 in 1978. By the early 1980s, there were dozens of manufacturers of terminals, including Lear-Siegler , ADDS , Data General, DEC , Hazeltine Corporation , Heath/Zenith , Hewlett-Packard , IBM, TeleVideo , Volker-Craig, and Wyse , many of which had incompatible command sequences (although many used the early ADM-3 as a starting point).
The great variations in the control codes between makers gave rise to software that identified and grouped terminal types so the system software would correctly display input forms using the appropriate control codes. In Unix-like systems the termcap or terminfo files, the stty utility, and the TERM environment variable would be used; in Data General's Business BASIC software, for example, at login-time a sequence of codes was sent to the terminal to try to read the cursor's position or the 25th line's contents using a sequence of different manufacturers' control code sequences, and the terminal-generated response would determine a single-digit number (such as 6 for Data General Dasher terminals, 4 for ADM 3A/5/11/12 terminals, 0 or 2 for TTYs with no special features) that would be available to programs to say which set of codes to use.
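The same mechanism survives on modern Unix-like systems, where the terminfo database is consulted for the terminal type named by TERM ; a minimal Python sketch using the standard curses binding:

```python
# Minimal sketch: query the terminfo database for the terminal named by TERM.
import curses

curses.setupterm()                    # reads the TERM environment variable
cols = curses.tigetnum("cols")        # advertised screen width
clear = curses.tigetstr("clear")      # escape sequence that clears the screen
print(f"this terminal type is {cols} columns wide")
print(f"its clear-screen sequence is {clear!r}")
```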
The great majority of terminals were monochrome, manufacturers variously offering green, white or amber and sometimes blue screen phosphors. (Amber was claimed to reduce eye strain). Terminals with modest color capability were also available but not widely used; for example, a color version of the popular Wyse WY50, the WY350, offered 64 shades on each character cell.
VDUs were eventually displaced from most applications by networked personal computers, at first slowly after 1985 and with increasing speed in the 1990s. However, they had a lasting influence on PCs. The keyboard layout of the VT220 terminal strongly influenced the Model M shipped on IBM PCs from 1985, and through it all later computer keyboards.
Although flat-panel displays had been available since the 1950s, cathode-ray tubes continued to dominate the market until the personal computer had made serious inroads into the display terminal market. By the time cathode-ray tubes on PCs were replaced by flatscreens after the year 2000, the hardware computer terminal was nearly obsolete.
A character-oriented terminal is a type of computer terminal that communicates with its host one character at a time, as opposed to a block-oriented terminal that communicates in blocks of data. It is the most common type of data terminal, because it is easy to implement and program. Connection to the mainframe computer or terminal server is achieved via RS-232 serial links, Ethernet or other proprietary protocols .
Character-oriented terminals can be "dumb" or "smart". Dumb terminals [ 5 ] are those that can interpret a limited number of control codes (CR, LF, etc.) but do not have the ability to process special escape sequences that perform functions such as clearing a line, clearing the screen, or controlling cursor position. In this context dumb terminals are sometimes dubbed glass Teletypes , for they essentially have the same limited functionality as does a mechanical Teletype. This type of dumb terminal is still supported on modern Unix-like systems by setting the environment variable TERM to dumb . Smart or intelligent terminals are those that also have the ability to process escape sequences, in particular the VT52, VT100 or ANSI escape sequences.
A text terminal , or often just terminal (sometimes text console ) is a serial computer interface for text entry and display. Information is presented as an array of pre-selected formed characters . When such devices use a video display such as a cathode-ray tube , they are called a " video display unit " or "visual display unit" (VDU) or "video display terminal" (VDT).
The system console is often [ 16 ] a text terminal used to operate a computer. Modern computers have a built-in keyboard and display for the console. Some Unix-like operating systems such as Linux and FreeBSD have virtual consoles to provide several text terminals on a single computer.
The fundamental type of application running on a text terminal is a command-line interpreter or shell , which prompts for commands from the user and executes each command after a press of Return . [ 17 ] This includes Unix shells and some interactive programming environments. In a shell, most of the commands are small applications themselves.
Another important application type is that of the text editor . A text editor typically occupies the full area of display, displays one or more text documents, and allows the user to edit the documents. The text editor has, for many uses, been replaced by the word processor , which usually provides rich formatting features that the text editor lacks. The first word processors used text to communicate the structure of the document, but later word processors operate in a graphical environment and provide a WYSIWYG simulation of the formatted output. However, text editors are still used for documents containing markup such as DocBook or LaTeX .
Programs such as Telix and Minicom control a modem and the local terminal to let the user interact with remote servers. On the Internet , telnet and ssh work similarly.
In the simplest form, a text terminal is like a file. Writing to the file displays the text and reading from the file produces what the user enters. In Unix-like operating systems, there are several character special files that correspond to available text terminals. For other operations, there are special escape sequences , control characters and termios functions that a program can use, most easily via a library such as ncurses . For more complex operations, the programs can use terminal-specific ioctl system calls.

For an application, the simplest way to use a terminal is to simply write and read text strings to and from it sequentially. The output text is scrolled, so that only the last several lines (typically 24) are visible. Unix systems typically buffer the input text until the Enter key is pressed, so the application receives a ready string of text. In this mode, the application need not know much about the terminal. For many interactive applications this is not sufficient. One of the common enhancements is command-line editing (assisted with such libraries as readline ); it also may give access to command history. This is very helpful for various interactive command-line interpreters.
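The line buffering described above is the terminal driver's canonical input mode, which an application may switch off when it needs each keystroke immediately; a minimal Unix-only Python sketch:

```python
# Minimal Unix-only sketch: read one keystroke without waiting for Enter
# by temporarily taking the terminal out of canonical ("cooked") mode.
import sys
import termios
import tty

fd = sys.stdin.fileno()
saved = termios.tcgetattr(fd)          # remember the current settings
try:
    tty.setcbreak(fd)                  # character-at-a-time, no line buffering
    key = sys.stdin.read(1)            # returns as soon as one key is pressed
finally:
    termios.tcsetattr(fd, termios.TCSADRAIN, saved)   # always restore
print(f"you pressed {key!r}")
```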
Even more advanced interactivity is provided with full-screen applications. Those applications completely control the screen layout; also they respond to key-pressing immediately. This mode is very useful for text editors, file managers and web browsers . In addition, such programs control the color and brightness of text on the screen, and decorate it with underline, blinking and special characters (e.g. box-drawing characters ). To achieve all this, the application must deal not only with plain text strings, but also with control characters and escape sequences, which allow moving the cursor to an arbitrary position, clearing portions of the screen, changing colors and displaying special characters, and also responding to function keys. The great problem here is that there are many different terminals and terminal emulators, each with its own set of escape sequences. In order to overcome this, special libraries (such as curses ) have been created, together with terminal description databases, such as Termcap and Terminfo.
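A minimal full-screen sketch using Python's standard curses binding, which hides the terminal-specific escape sequences behind a portable API backed by the terminfo database:

```python
# Minimal full-screen sketch: paint text at an arbitrary screen position,
# wait for a quit key, and restore the terminal state on exit.
import curses

def main(screen):
    curses.curs_set(0)                          # hide the cursor
    screen.clear()
    rows, cols = screen.getmaxyx()
    message = "Hello from a full-screen application (press q to quit)"
    screen.addstr(rows // 2, max(0, (cols - len(message)) // 2),
                  message, curses.A_BOLD)
    screen.refresh()
    while screen.getch() != ord("q"):           # wait for the quit key
        pass

curses.wrapper(main)    # sets up the terminal and restores it afterwards
```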
A block-oriented terminal or block mode terminal is a type of computer terminal that communicates with its host in blocks of data, as opposed to a character-oriented terminal that communicates with its host one character at a time. A block-oriented terminal may be card-oriented, display-oriented, keyboard-display, keyboard-printer, printer or some combination.
The IBM 3270 is perhaps the most familiar implementation of a block-oriented display terminal, [ 18 ] but most mainframe computer manufacturers and several other companies produced them. The description below is in terms of the 3270, but similar considerations apply to other types.
Block-oriented terminals typically incorporate a buffer which stores one screen or more of data, and also stores data attributes, not only indicating appearance (color, brightness, blinking, etc.) but also marking the data as being enterable by the terminal operator vs. protected against entry, as allowing the entry of only numeric information vs. allowing any characters, etc. In a typical application the host sends the terminal a preformatted panel containing both static data and fields into which data may be entered. The terminal operator keys data, such as updates in a database entry, into the appropriate fields. When entry is complete (or ENTER or PF key pressed on 3270s), a block of data, usually just the data entered by the operator (modified data), is sent to the host in one transmission. The 3270 terminal buffer (at the device) could be updated on a single character basis, if necessary, because of the existence of a "set buffer address order" (SBA), that usually preceded any data to be written/overwritten within the buffer. A complete buffer could also be read or replaced using the READ BUFFER command or WRITE command (unformatted or formatted in the case of the 3270).
Block-oriented terminals cause less system load on the host and less network traffic than character-oriented terminals. They also appear more responsive to the user, especially over slow connections, since editing within a field is done locally rather than depending on echoing from the host system.
Early terminals had limited editing capabilities – 3270 terminals, for example, could only check entries as valid numerics. [ 19 ] Subsequent "smart" or "intelligent" terminals incorporated microprocessors and supported more local processing.
Programmers of block-oriented terminals often used the technique of storing context information for the transaction in progress on the screen, possibly in a hidden field, rather than depending on a running program to keep track of status. This was the precursor of the HTML technique of storing context in the URL as data to be passed as arguments to a CGI program.
Unlike a character-oriented terminal, where typing a character into the last position of the screen usually causes the terminal to scroll down one line, entering data into the last screen position on a block-oriented terminal usually causes the cursor to wrap — move to the start of the first enterable field. Programmers might "protect" the last screen position to prevent inadvertent wrap. Likewise a protected field following an enterable field might lock the keyboard and sound an audible alarm if the operator attempted to enter more data into the field than allowed.
A graphical terminal can display images as well as text. Graphical terminals [ 23 ] are divided into vector-mode and raster-mode terminals.
A vector-mode display directly draws lines on the face of a cathode-ray tube under control of the host computer system. The lines are continuously formed, but since the speed of electronics is limited, the number of concurrent lines that can be displayed at one time is limited. Vector-mode displays were historically important but are no longer used.
Practically all modern graphic displays are raster-mode, descended from the picture scanning techniques used for television , in which the visual elements are a rectangular array of pixels . Since the raster image is only perceptible to the human eye as a whole for a very short time, the raster must be refreshed many times per second to give the appearance of a persistent display. The electronic demands of refreshing display memory meant that graphic terminals were developed much later than text terminals, and initially cost much more. [ 24 ] [ 25 ]
Most terminals today [ when? ] are graphical; that is, they can show images on the screen. The modern term for graphical terminal is " thin client ". [ citation needed ] A thin client typically uses a protocol such as X11 for Unix terminals, or RDP for Microsoft Windows. The bandwidth needed depends on the protocol used, the resolution, and the color depth .
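As a rough illustration of that dependence, the bandwidth of a completely uncompressed raster stream follows directly from resolution, color depth, and update rate; real protocols such as X11 and RDP transmit far less by sending drawing commands or only changed regions. The figures below are assumptions chosen only for the arithmetic:

```python
# Back-of-the-envelope estimate for an *uncompressed* display stream.
width, height = 1920, 1080       # assumed resolution in pixels
depth_bits = 24                  # assumed color depth, bits per pixel
updates_per_second = 30          # assumed full-screen updates per second

bits_per_second = width * height * depth_bits * updates_per_second
print(f"{bits_per_second / 1e9:.2f} Gbit/s uncompressed")   # ~1.49 Gbit/s
```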
Modern graphic terminals allow display of images in color, and of text in varying sizes, colors, and fonts (type faces). [ clarification needed ]
In the early 1990s, an industry consortium attempted to define a standard, AlphaWindows , that would allow a single CRT screen to implement multiple windows, each of which was to behave as a distinct terminal. Unfortunately, like I2O , this suffered from being run as a closed standard: non-members were unable to obtain even minimal information and there was no realistic way a small company or independent developer could join the consortium. [ citation needed ]
An intelligent terminal [ 26 ] does its own processing, usually implying a microprocessor is built in, but not all terminals with microprocessors did any real processing of input: the main computer to which it was attached would have to respond quickly to each keystroke. The term "intelligent" in this context dates from 1969. [ 27 ]
Notable examples include the IBM 2250 , predecessor to the IBM 3250 and IBM 5080, and IBM 2260 , [ 28 ] predecessor to the IBM 3270 , introduced with System/360 in 1964.
Most terminals were connected to minicomputers or mainframe computers and often had a green or amber screen. Typically terminals communicate with the computer over a serial port , often via a null modem cable, using an EIA RS-232 , RS-422 , RS-423 or current loop serial interface. IBM systems typically communicated over a Bus and Tag channel, a coaxial cable using a proprietary protocol, or a communications link using Binary Synchronous Communications or IBM's SNA protocol, but for many DEC, Data General, NCR and similar computers there were many visual display suppliers competing against the computer manufacturer for terminals to expand the systems. In fact, the instruction design for the Intel 8008 was originally conceived at Computer Terminal Corporation as the processor for the Datapoint 2200 .
From the introduction of the IBM 3270 and the DEC VT100 (1978), the user and programmer could notice significant advantages in VDU technology improvements, yet not all programmers used the features of the new terminals (the VT100 and later TeleVideo terminals, for example, remained backward compatible with "dumb terminals", which allowed programmers to continue to use older software).
Some dumb terminals had been able to respond to a few escape sequences without needing microprocessors: they used multiple printed circuit boards with many integrated circuits ; the single factor that classed a terminal as "intelligent" was its ability to process user-input within the terminal—not interrupting the main computer at each keystroke—and send a block of data at a time (for example: when the user has finished a whole field or form). Most terminals in the early 1980s, such as the ADM-3A, TVI912, Data General D2 and DEC VT52 , despite the introduction of ANSI terminals in 1978, were essentially "dumb" terminals, although some of them (such as the later ADM and TVI models) did have a primitive block-send capability. Common early uses of local processing power included features that had little to do with off-loading data processing from the host computer, such as printing to a local printer, buffered serial data transmission and serial handshaking (to accommodate higher serial transfer speeds), and more sophisticated character attributes for the display, as well as the ability to switch emulation modes to mimic competitors' models; these became increasingly important selling features during the 1980s, when buyers could mix and match different suppliers' equipment to a greater extent than before.
The advance in microprocessors and lower memory costs made it possible for the terminal to handle editing operations such as inserting characters within a field that may have previously required a full screen-full of characters to be re-sent from the computer, possibly over a slow modem line. Around the mid-1980s most intelligent terminals, costing less than most dumb terminals would have a few years earlier, could provide enough user-friendly local editing of data and send the completed form to the main computer. Providing even more processing possibilities, workstations such as the TeleVideo TS-800 could run CP/M-86 , blurring the distinction between terminal and Personal Computer.
Another of the motivations for development of the microprocessor was to simplify and reduce the electronics required in a terminal. That also made it practicable to load several "personalities" into a single terminal, so a Qume QVT-102 could emulate many popular terminals of the day, and so be sold into organizations that did not wish to make any software changes. Frequently emulated terminal types included widely deployed models such as the DEC VT52 , the DEC VT100 and the Lear Siegler ADM-3A .
The ANSI X3.64 escape code standard produced uniformity to some extent, but significant differences remained. For example, the VT100 , Heathkit H19 in ANSI mode, Televideo 970, Data General D460, and Qume QVT-108 terminals all followed the ANSI standard, yet differences might exist in codes from function keys , what character attributes were available, block-sending of fields within forms, "foreign" character facilities, and handling of printers connected to the back of the screen.
In the 21st century, the term Intelligent Terminal can also refer to a retail Point of Sale computer. [ 29 ]
Even though the early IBM PC looked somewhat like a terminal with a green monochrome monitor , it is not classified as a terminal, since it provides local computing instead of interacting with a server at a character level. With terminal emulator software, a PC can, however, provide the function of a terminal to interact with a mainframe or minicomputer. Eventually, personal computers greatly reduced market demand for conventional terminals.
In and around the 1990s, thin client and X terminal technology combined the relatively economical local processing power with central, shared computer facilities to leverage advantages of terminals over personal computers.
In a GUI environment, such as the X Window System , the display can show multiple programs – each in its own window – rather than a single stream of text associated with a single program. As a terminal emulator runs in a GUI environment to provide command-line access, it alleviates the need for a physical terminal and allows for multiple windows running separate emulators.
One meaning of system console , computer console , root console , operator's console , or simply console is the text entry and display device for system administration messages, particularly those from the BIOS or boot loader , the kernel , from the init system and from the system logger . It is a physical device consisting of a keyboard and a printer or screen, and traditionally is a text terminal , but may also be a graphical terminal .
Another, older, meaning of system console, computer console, hardware console , operator's console or simply console is a hardware component used by an operator to control the hardware, typically some combination of front panel , keyboard/printer and keyboard/display.
Prior to the development of alphanumeric CRT system consoles, some computers such as the IBM 1620 had console typewriters and front panels while the very first electronic stored-program computer , the Manchester Baby , used a combination of electromechanical switches and a CRT to provide console functions—the CRT displaying memory contents in binary by mirroring the machine's Williams-Kilburn tube CRT-based RAM.
Some early operating systems supported either a single keyboard/printer or keyboard/display device for controlling the OS. Some also supported a single alternate console, and some supported a hardcopy console for retaining a record of commands, responses and other console messages. However, in the late 1960s it became common for operating systems to support many more consoles than three, and operating systems began appearing in which the console was simply any terminal with a privileged user logged on.
On early minicomputers , the console was a serial console , an RS-232 serial link to a terminal such as an ASR-33 or, later, a terminal from Digital Equipment Corporation (DEC), e.g., a DECwriter or VT100 . This terminal was usually kept in a secured room since it could be used for certain privileged functions such as halting the system or selecting which media to boot from. Large midrange systems , e.g. those from Sun Microsystems , Hewlett-Packard and IBM , [ citation needed ] still use serial consoles. In larger installations, the console ports are attached to multiplexers or network-connected multiport serial servers that let an operator connect a terminal to any of the attached servers. Today, serial consoles are often used for accessing headless systems , usually with a terminal emulator running on a laptop . Also, routers, enterprise network switches and other telecommunication equipment have RS-232 serial console ports.
On PCs and workstations , the computer's attached keyboard and monitor have the equivalent function. Since the monitor cable carries video signals, it cannot be extended very far. Often, installations with many servers therefore use keyboard/video multiplexers ( KVM switches ) and possibly video amplifiers to centralize console access. In recent years, KVM/IP devices have become available that allow a remote computer to view the video output and send keyboard input via any TCP/IP network and therefore the Internet .
Some PC BIOSes , especially in servers, also support serial consoles, giving access to the BIOS through a serial port so that the simpler and cheaper serial console infrastructure can be used. Even where BIOS support is lacking, some operating systems , e.g. FreeBSD and Linux , can be configured for serial console operation either during bootup, or after startup.
Starting with the IBM 9672 , IBM large systems have used a Hardware Management Console (HMC), consisting of a PC and a specialized application, instead of a 3270 or serial link. Other IBM product lines also use an HMC, e.g., System p .
It is usually possible to log in from the console. Depending on configuration, the operating system may treat a login session from the console as being more trustworthy than a login session from other sources.
A terminal emulator is a piece of software that emulates a text terminal. In the past, before the widespread use of local area networks and broadband internet access, many computers would use a serial access program to communicate with other computers via telephone line or serial device.
When the first Macintosh was released, a program called MacTerminal [ 30 ] was used to communicate with many computers, including the IBM PC .
The Win32 console on Windows does not emulate a physical terminal that supports escape sequences [ 31 ] [ dubious – discuss ] so SSH and Telnet programs (for logging in textually to remote computers) for Windows, including the Telnet program bundled with some versions of Windows, often incorporate their own code to process escape sequences.
The terminal emulators on most Unix-like systems—such as, for example, gnome-terminal , Konsole , QTerminal, xterm , and Terminal.app —do emulate physical terminals including support for escape sequences; e.g., xterm can emulate the VT220 and Tektronix 4010 hardware terminals.
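For instance, most such emulators honor the ANSI/VT100-style control sequences, each introduced by the escape character. A short Python illustration using standard ANSI codes (the output is only meaningful when run in a terminal that interprets them):

```python
# A few widely supported ANSI/VT100 escape sequences (CSI = ESC + "[").
CSI = "\x1b["

print(CSI + "2J" + CSI + "H", end="")          # clear screen, cursor to home
print(CSI + "1;31m" + "alert" + CSI + "0m")    # bold red text, then reset
print(CSI + "10;5H" + "row 10, column 5")      # absolute cursor positioning
```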
Terminals can operate in various modes, relating to when they send input typed by the user on the keyboard to the receiving system (whatever that may be): in character-at-a-time mode each keystroke is transmitted as it is typed; in line-at-a-time mode a complete line is transmitted when the user presses the return key; and in block mode the contents of the screen, or of its modified fields, are transmitted as a block when a transmit key is pressed.
There is a distinction between the return and the ↵ Enter keys. In some multiple-mode terminals, that can switch between modes, pressing the ↵ Enter key when not in block mode does not do the same thing as pressing the return key. Whilst the return key will cause an input line to be sent to the host in line-at-a-time mode, the ↵ Enter key will rather cause the terminal to transmit the contents of the character row where the cursor is currently positioned to the host, host-issued prompts and all. [ 34 ] Some block-mode terminals have both an ↵ Enter and local cursor moving keys such as Return and New Line .
Different computer operating systems require different degrees of mode support when terminals are used as computer terminals. The POSIX terminal interface , as provided by Unix and POSIX-compliant operating systems, does not accommodate block-mode terminals at all, and only rarely requires the terminal itself to be in line-at-a-time mode, since the operating system is required to provide canonical input mode , where the terminal device driver in the operating system emulates local echo in the terminal, and performs line editing functions at the host end. Usually, and especially so that the host system can support non-canonical input mode , terminals for POSIX-compliant systems are in character-at-a-time mode. In contrast, IBM 3270 terminals connected to MVS systems are always required to be in block mode. [ 36 ] [ 37 ] [ 38 ] [ 39 ] | https://en.wikipedia.org/wiki/System_console
A system context diagram in engineering is a diagram that defines the boundary between the system , or part of a system, and its environment, showing the entities that interact with it. [ 2 ] This diagram is a high-level view of a system . It is similar to a block diagram .
System context diagrams show a system as a whole, with its inputs and outputs from/to external factors. According to Kossiakoff and Sweet (2011): [ 3 ]
System Context Diagrams ... represent all external entities that may interact with a system ... Such a diagram pictures the system at the center, with no details of its interior structure, surrounded by all its interacting systems, environments and activities. The objective of the system context diagram is to focus attention on external factors and events that should be considered in developing a complete set of systems requirements and constraints.
System context diagrams are used early in a project to get agreement on the scope under investigation. [ 4 ] Context diagrams are typically included in a requirements document. These diagrams must be read by all project stakeholders and thus should be written in plain language, so the stakeholders can understand items within the document.
Context diagrams can be developed with the use of two types of building blocks: entities ("actors"), drawn as labeled shapes, and relationships, drawn as labeled lines between the entities.
For example, "customer places order." Context diagrams can also use many different drawing types to represent external entities. They can use ovals , stick figures , pictures , clip art or any other representation to convey meaning. Decision trees and data storage are represented in system flow diagrams.
A context diagram can also list the classifications of the external entities as one of a set of simple categories, [ 5 ] [ 6 ] which add clarity to the level of involvement of the entity with regard to the system.
The best system context diagrams are used to display how a system interoperates at a very high level, or how systems operate and interact logically. The system context diagram is a necessary tool in developing a baseline interaction between systems and actors, between actors and a system, or between systems and systems. Several alternative diagram types can serve a similar purpose.
Most of these diagrams work well as long as a limited number of interconnects will be shown. Where twenty or more interconnects must be displayed, the diagrams become quite complex and can be difficult to read. [ 7 ] | https://en.wikipedia.org/wiki/System_context_diagram |
In computing, a system crash screen , error screen or screen of death is a visual indicator that appears when an operating system , software application , or hardware encounters a severe issue that prevents normal operation. These screens typically serve as a last-resort mechanism to inform users and system administrators of a critical failure. An error screen may display technical information such as error messages , diagnostic codes , memory dumps , or troubleshooting instructions. They can occur due to hardware malfunctions, corrupted system files, software crashes, overheating, or other critical failures. Error screens vary by operating system and device, with some of the most well-known examples being the Blue Screen of Death (BSOD) in Windows, the Sad Mac in classic Macintosh computers, and the Kernel Panic in Unix-based systems like Linux and macOS. Game consoles may also have notable crash screens, such as those of the PlayStation 2 and the Nintendo Wii. | https://en.wikipedia.org/wiki/System_crash_screen
The deployment of a mechanical device , electrical system , computer program , etc., is its assembly or transformation from a packaged form to an operational working state. [ 1 ]
Deployment implies moving a product from a temporary or development state to a permanent or desired state.
| https://en.wikipedia.org/wiki/System_deployment
In the systems sciences, system equivalence is the behavior of a parameter or component of a system in a way similar to a parameter or component of a different system. Similarity means that mathematically the parameters and components will be indistinguishable from each other. Equivalence can be very useful in understanding how complex systems work.
Examples of equivalent systems are first- and second-order (in the independent variable ) translational , electrical , torsional , fluidic , and caloric systems.
Equivalent systems can be used to change large and expensive mechanical, thermal, and fluid systems into a simple, cheaper electrical system. The electrical system can then be analyzed to validate that the system dynamics will work as designed. This is a preliminary, inexpensive way for engineers to test that their complex system performs the way they are expecting.
This testing is necessary when designing new complex systems that have many components. Businesses do not want to spend millions of dollars on a system that does not perform the way that they were expecting. Using the equivalent system technique, engineers can verify and prove to the business that the system will work. This lowers the risk factor that the business is taking on the project.
The following is a chart of equivalent variables for the different types of systems:
- translational mechanical systems: F = force, v = velocity
- electrical systems: e = voltage, i = current
- fluidic systems: Δp = pressure difference, q v = volume flow rate
- thermal systems: ΔT = temperature difference, q = heat flow rate
The equivalents shown in the chart are not the only way to form mathematical analogies. In fact there are any number of ways to do this. A common requirement for analysis is that the analogy correctly models energy storage and flow across energy domains. To do this, the equivalences must be compatible. A pair of variables whose product is power (or energy ) in one domain must be equivalent to a pair of variables in the other domain whose product is also power (or energy). These are called power conjugate variables. The thermal variables shown in the chart are not power conjugates and thus do not meet this criterion. See mechanical–electrical analogies for more detailed information on this. Even specifying power conjugate variables does not result in a unique analogy and there are at least three analogies of this sort in use. At least one more criterion is needed to uniquely specify the analogy, such as the requirement that impedance is equivalent in all domains as is done in the impedance analogy .
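In symbols, one common choice of power conjugate pairs reads as follows (a sketch; sign conventions vary):

```latex
P_{\text{mech}} = F\,v \qquad
P_{\text{elec}} = v\,i \qquad
P_{\text{fluid}} = \Delta p \; q_v
% The thermal pair is not power conjugate: the heat flow rate q is
% itself a power, so \Delta T \cdot q has units of W K, not W.
```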
All the fundamental variables of these systems have the same functional form.
The system equivalence method may be used to describe systems of two types: "vibrational" systems, which are thus described (approximately) by harmonic oscillation, and "translational" systems, which deal with "flows". These are not mutually exclusive; a system may have features of both. Similarities also exist: the two systems can often be analysed by the methods of Euler, Lagrange and Hamilton, so that in both cases the energy is quadratic in the relevant degree(s) of freedom, provided they are linear.
Vibrational systems are often described by some sort of wave (partial differential) equation, or oscillator (ordinary differential) equation. Furthermore, these sorts of systems follow the capacitor or spring analogy, in the sense that the dominant degree of freedom in the energy is the generalized position. In more physical language, these systems are predominantly characterised by their potential energy. This often works for solids, or (linearized) undulatory systems near equilibrium.
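The classic worked example pairs a damped mass-spring system with a series RLC circuit; under the impedance (force-voltage) analogy their governing equations have identical form:

```latex
m\,\ddot{x} + c\,\dot{x} + k\,x = F(t)
\qquad\longleftrightarrow\qquad
L\,\ddot{q} + R\,\dot{q} + \tfrac{1}{C}\,q = V(t)
```

with the correspondences m ↔ L (mass to inductance), c ↔ R (damping to resistance), k ↔ 1/C (stiffness to inverse capacitance) and F(t) ↔ V(t) (force to voltage).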
On the other hand, flow systems may be easier described by the hydraulic analogy or the diffusion equation. For example, Ohm's law was said to be inspired by Fourier's law (as well as the work of C.-L. Navier). [ 2 ] [ 3 ] [ 4 ] Other laws include Fick's laws of diffusion and generalized transport problems. The most important idea is the flux, or rate of transfer of some important physical quantity considered (like electric or magnetic fluxes). In these sorts of systems, the energy is dominated by the derivative of the generalized position (generalized velocity). In physics parlance, these systems tend to be kinetic energy-dominated. Field theories, in particular electromagnetism, draw heavily from the hydraulic analogy. | https://en.wikipedia.org/wiki/System_equivalence |
A system file in computers is a critical computer file without which a computer system may not operate correctly. These files may come as part of the operating system , a third-party device driver or other sources. Microsoft Windows and MS-DOS mark their more valuable system files with a "system" attribute to protect them against accidental deletion. (Although the system attribute can be manually set on any arbitrary file, doing so does not make the file a genuine system file.)
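On Windows the attribute is a flag in the file's metadata. A minimal Windows-only Python sketch using the documented SetFileAttributesW call (the file path is hypothetical):

```python
# Windows-only sketch: set the "system" attribute on an ordinary file.
import ctypes

FILE_ATTRIBUTE_SYSTEM = 0x04       # documented flag from the Windows API

path = r"C:\temp\example.txt"      # hypothetical file path
if not ctypes.windll.kernel32.SetFileAttributesW(path, FILE_ATTRIBUTE_SYSTEM):
    raise ctypes.WinError()        # surface the underlying Windows error
```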
Specific examples of system files include the files with the .sys filename extension in MS-DOS. In the Windows NT family, the system files are mainly under the folder C:\Windows\System32 . In Mac OS they are in the System suitcase . In Linux systems, the system files are located under the folders /boot (the kernel itself), /usr/sbin ( system utilities ) and /usr/lib/modules (kernel device drivers ).
| https://en.wikipedia.org/wiki/System_file
The Canadian Securities Administrators ( CSA ; French : Autorités canadiennes en valeurs mobilières , ACVM) is an umbrella organization of Canada's provincial and territorial securities regulators whose objective is to improve, coordinate, and harmonize regulation of the Canadian capital markets . [ 1 ] [ 2 ]
The CSA's national systems include the National Registration Database ( NRD ), a web-based database that allows security dealers and investment advisors to file registration forms electronically; [ 3 ] the System for Electronic Disclosure by Insiders ( SEDI ), an online, browser-based service for the filing and viewing of insider trading reports; [ 4 ] and the System for Electronic Document Analysis and Retrieval ( SEDAR ), a publicly-accessible database that contains all the required non-confidential filings related to publicly traded Canadian companies. [ 5 ] [ 6 ]
The CSA serves a regulatory function comparable to that of the Securities and Exchange Commission (SEC) and the Commodity Futures Trading Commission (CFTC) in the United States. [ 5 ]
As an informal body, the CSA originally functioned primarily through meetings, conference calls, and day-to-day collaborations with the various territorial and provincial securities regulatory authorities. In 2003, the CSA was restructured into a more formal organization, in which the chair and vice-chair were elected by members to 2-year terms. [ 5 ]
In March 2004, the CSA established a permanent secretariat in Montreal to provide the organizational stability necessary for the CSA to function efficiently. Among other things, the secretariat monitors and coordinates the work of various CSA committees on policy initiatives. [ 6 ] [ 8 ]
In 2016, the CSA adopted a passport system through which market participants can gain access to markets in all passport jurisdictions by dealing only with their principal regulator and complying with one set of harmonized laws. [ 7 ] While the CSA co-ordinates initiatives on a cross-Canada basis, the 10 provinces and 3 territories in Canada are responsible for securities regulations and their enforcement. This provides a more direct and efficient service since each regulator is closer to its local investors and market participants. [ citation needed ]
The CSA consists of the securities regulators of the 10 provincial and 3 territorial governments of Canada. [ 8 ] The CSA Chairs are the respective chairs of the securities regulators of the 10 provinces and 3 territories of Canada. [ 9 ] They meet quarterly in person. A chair and vice-chair of the CSA are elected by members for two-year terms. [ 8 ]
Each CSA member has its own staff who work on policy development and deliver regulatory programs through their participation in CSA Committees, which can include permanent committees and project committees. The latter are formed to work on specific policy projects, and can deal with subjects such as short- and long-form prospectuses , continuous disclosure, proportionate regulation, and investor confidence.
Standing committees include: Executive Directors, Enforcement, Market Oversight, Registrant Regulation, Investment Funds and Investor Education. [ 8 ]
In August 2003, the CSA established the Policy Coordination Committee, which oversees the CSA's policy development initiatives, facilitates decision-making, acts as a forum for the resolution of policy development issues, monitors ongoing issues, and provides recommendations to the CSA chairs for their resolution. [ 8 ]
Its members are the chairs of eight regulators—Alberta, British Columbia, Manitoba, New Brunswick, Nova Scotia, Ontario, Québec, and Saskatchewan. As of May 2021 [update] , the committee's chair is Grant Vingoe, the acting chair and Chief Executive Officer of the Ontario Securities Commission . [ 8 ]
The CSA maintains several databases that contain information on public companies ' disclosure and insider reporting, the registration of dealers and advisers, and a cease trade order database (CTO).
The National Registration Database ( NRD ) is a Canadian web-based database that allows security dealers and investment advisors to file registration forms electronically. (An individual or company in Canada who trades or underwrites securities, or provides investment advice, must register annually with one or more provincial securities regulators.) Created to replace the original paper form system, the NRD increases the efficiency of information filing and sharing between provincial security regulators. [ 3 ]
In 2001, the CSA commissioned a study that estimated the total economic benefits of such a database to the Canadian financial services industry would be $85 million over a 5-year period. The NRD was subsequently launched in 2003, initiated by the CSA and the Investment Industry Regulatory Organization of Canada (IIROC). [ 3 ]
The NRD has two websites: the NRD Information Website, which contains information for the public; and the National Registration Database, for use by authorized representatives. [ 3 ] The National Registration Search (NRS) contains the names of all registrants (individuals and firms) in Canada. [ 12 ]
The System for Electronic Document Analysis and Retrieval ( SEDAR ) [ 10 ] is a mandatory, electronic document filing and retrieval system that allows listed Canadian public companies to report their securities-related information with Canada's securities regulation authorities. It is operated and administered by CGI Information Systems and Management Consultants Inc. (CGI), the filing service contractor appointed by the Canadian Securities Administrators. [ 14 ]
It is similar to EDGAR , the filing system operated by the U.S. Securities and Exchange Commission for American public companies. [ 5 ] [ 14 ]
Through registered filing agents, public companies file documents such as prospectuses , financial statements , and material change reports. In the interest of transparency and full disclosure, these documents are accessible to the public. The reports that are available on SEDAR indicate the fiscal health of public companies and investment funds, and investors who want specific materials on a specific company can search the SEDAR database for the company or investment fund. These searches can be made by company name, industry group, document type or date filed; its results are rendered in PDF format . [ 14 ]
The SEDAR database was established by the CSA in 1997. Documents filed with regulators prior to the implementation of SEDAR may be available from the individual securities commissions; however, in the case of the British Columbia Securities Commission , historical filings are unretrievable and may have been destroyed.
On July 25, 2023, the CSA transitioned from SEDAR to SEDAR+. [ 15 ] SEDAR+ improves on the original database by consolidating it with the national Cease Trade Order and Disciplined List (DL) databases. Other updated features include extended operating hours, an integrated fee calculator, and a portal for filing exemptive relief applications.
Regulatory cooperation occurs both at national and international levels for the CSA.
Among themselves, CSA members work closely in the development of new policy initiatives and the continuous improvement of the regulatory framework for securities. They further coordinate their regulatory initiatives with the Joint Forum of Financial Market Regulators. [ 16 ]
Various CSA members are also active in international organizations such as the North American Securities Administrators Association (NASAA), the Council of Securities Regulators of the Americas (COSRA), and the International Organization of Securities Commissions (IOSCO), representing North American, Pan-American, and international securities regulators respectively. [ 17 ]
Passport is a regulatory system that provides market participants automatic access to the capital markets in all Canadian jurisdictions, except Ontario , by registering only with their principal regulator and meeting the requirements of one set of harmonized laws. In brief, the system provides market participants with streamlined access to Canada's capital markets. [ 18 ]
Participants of the Passport system are the governments of all Canadian provinces and territories, excluding Ontario. To access the market in Ontario, non-Ontario market participants may use an interface system in which the Ontario Securities Commission (OSC) makes its own decisions, but generally relies on the review by the principal regulator. To achieve maximum efficiency for the benefit of the market, the passport regulators accept the OSC's decisions under Passport. [ 18 ]
The Joint Forum of Financial Market Regulators coordinates and streamlines the regulation of products and services in the Canadian financial markets . [ 18 ]
The Forum includes representatives of the CSA, along with those of the Canadian Council of Insurance Regulators (CCIR) and the Canadian Association of Pension Supervisory Authorities (CAPSA); [ 18 ] it also includes representation from the Canadian Insurance Services Regulatory Organizations (CISRO). [ 19 ]
The Joint Forum was founded in 1999 by CSA, CCIR, and CAPSA. [ 19 ]
The CSA brings together provincial and territorial securities regulators to share information and design policies and regulations that are consistent across the country, ensuring the smooth operation of Canada's securities industry . By collaborating on rules, regulations, and other programs, the CSA helps avoid duplication of work and streamlines the regulatory process for companies seeking to raise investment capital and others working in the investment industry. [ 20 ]
As a result of the coordination efforts of the CSA, securities markets are governed by harmonized national or multilateral instruments, which are assigned uniform 5-digit numbers, starting with a digit that represents one of the 9 subject matter categories. [ 20 ]
The CSA's impact on most Canadians comes through its efforts to help educate Canadians about the securities industry, the stock markets and how to protect investors from investment scams. The CSA provides a wide variety of educational materials on securities and investing. It has produced brochures and booklets explaining various topics such as how to choose a financial adviser, mutual funds, and investing via the internet. The CSA coordinates various annual investor education initiatives, including the Financial Literacy Month in November and the Fraud Prevention Month in March.
The CSA publishes on its website a wide range of communications and public information tools, including news releases regarding newly adopted or proposed national or multilateral rules and regulations, investor education materials, enforcement reports and materials and investor research studies.
| https://en.wikipedia.org/wiki/System_for_Electronic_Document_Analysis_and_Retrieval
A system in a package ( SiP ) or system-in-package is a number of integrated circuits (ICs) enclosed in one chip carrier package or encompassing an IC package substrate that may include passive components and perform the functions of an entire system. The ICs may be stacked using package on package , placed side by side, and/or embedded in the substrate. [ 1 ] The SiP performs all or most of the functions of an electronic system , and is typically used when designing components for mobile phones , digital music players , etc. [ 2 ] Dies containing integrated circuits may be stacked vertically on the package substrate. They are internally connected by fine wires that are bonded to the package substrate. Alternatively, with a flip chip technology, solder bumps are used to join stacked chips together and to the package substrate, or even both techniques can be used in a single package. SiPs are like systems on a chip (SoCs) but less tightly integrated and not on a single semiconductor die . [ 3 ]
SiPs can be used either to reduce the size of a system, improve its performance, or reduce costs. [ 4 ] [ 5 ] The technology evolved from multi-chip module (MCM) technology, the difference being that SiPs also use die stacking , which stacks several chips or dies on top of each other. [ 6 ] [ 7 ]
SiP dies can be stacked vertically or tiled horizontally, with techniques like chiplets or quilt packaging . SiPs connect the dies with standard off-chip wire bonds or solder bumps, unlike slightly denser three-dimensional integrated circuits which connect stacked silicon dies with conductors running through the die using through-silicon vias . Many different 3D packaging techniques have been developed for stacking many fairly standard chip dies into a compact area. [ 8 ]
SiPs can contain several chips or dies—such as a specialized processor , DRAM , flash memory —combined with passive components — resistors and capacitors —all mounted on the same substrate . This means that a complete functional unit can be built in a single package, so that few external components need to be added to make it work. This is particularly valuable in space constrained environments like MP3 players and mobile phones as it reduces the complexity of the printed circuit board and overall design. Despite its benefits, this technique decreases the yield of fabrication since any defective chip in the package will result in a non-functional packaged integrated circuit, even if all other modules in that same package are functional.
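The penalty compounds multiplicatively: if each die is good with some independent probability, the package works only when every die does. A back-of-the-envelope sketch with assumed per-die yields:

```python
# If any one die is bad the whole package is scrap, so (assuming
# independent defects) package yield is the product of the die yields.
die_yields = [0.95, 0.95, 0.98, 0.99]   # assumed yields for four dies

package_yield = 1.0
for y in die_yields:
    package_yield *= y

print(f"package yield: {package_yield:.1%}")   # ~87.6%
```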
SiPs are in contrast to the common system on a chip (SoC) integrated circuit architecture which integrates components based on function into a single circuit die . An SoC will typically integrate a CPU, graphics and memory interfaces, hard-disk and USB connectivity, random-access and read-only memories , and secondary storage and/or their controllers on a single die. In comparison an SiP would connect these modules as discrete components in one or more chip packages or dies. An SiP resembles the common traditional motherboard -based PC architecture , as it separates components based on function and connects them through a central interfacing circuit board. An SiP has a lower grade of integration in comparison to an SoC. Hybrid integrated circuits (HICs) are somewhat similar to SiPs; however, they tend to handle analog signals [ 9 ] whereas SiPs usually handle digital signals. [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] Because of this, HICs tend to use older or less advanced technology: single-layer circuit boards or substrates, no die stacking, wire bonding only for connecting dies, and small outline, dual in-line, or single in-line packages rather than BGA for interfacing outside the hybrid IC. [ 15 ]
SiP technology is primarily being driven by early market trends in wearables , mobile devices and the internet of things which do not demand the high numbers of produced units as in the established consumer and business SoC market. As the internet of things becomes more of a reality and less of a vision, there is innovation going on at the system on a chip and SiP level so that microelectromechanical (MEMS) sensors can be integrated on a separate die and control the connectivity. [ 16 ]
SiP solutions may require multiple packaging technologies, such as flip chip , wire bonding , wafer-level packaging , through-silicon vias (TSVs), chiplets and more. [ 17 ] [ 7 ] | https://en.wikipedia.org/wiki/System_in_a_package
System integration is defined in engineering as the process of bringing together the component sub- systems into one system (an aggregation of subsystems cooperating so that the system is able to deliver the overarching functionality) and ensuring that the subsystems function together as a system, [ 1 ] and in information technology [ 2 ] as the process of linking together different computing systems and software applications physically or functionally, [ 3 ] to act as a coordinated whole.
The system integrator integrates discrete systems utilizing a variety of techniques such as computer networking , enterprise application integration , business process management or manual programming . [ 4 ]
System integration involves integrating existing, often disparate systems in such a way "that focuses on increasing value to the customer" [ 5 ] (e.g., improved product quality and performance) while at the same time providing value to the company (e.g., reducing operational costs and improving response time). [ 5 ] In the modern world connected by Internet , the role of system integration engineers is important: more and more systems are designed to connect, both within the system under construction and to systems that are already deployed. [ 6 ]
Vertical integration (as opposed to " horizontal integration ") is the process of integrating subsystems according to their functionality by creating functional entities also referred to as silos . [ 7 ] The benefit of this method is that the integration is performed quickly and involves only the necessary vendors; therefore, this method is cheaper in the short term. On the other hand, cost-of-ownership can be substantially higher than with other methods, since implementing new or enhanced functionality (scaling the system) requires building another silo. Reusing subsystems to create another functionality is not possible. [ 8 ]
Star integration , also known as spaghetti integration , is a process of systems integration where each system is interconnected to each of the remaining subsystems. When observed from the perspective of the subsystem which is being integrated, the connections are reminiscent of a star, but when the overall diagram of the system is presented, the connections look like spaghetti, hence the name of this method. The cost varies depending on the interfaces that the subsystems export. Where the subsystems export heterogeneous or proprietary interfaces, the integration cost can rise substantially. The time and cost needed to integrate the systems increase exponentially as subsystems are added. From the feature perspective, this method often seems preferable, due to the extreme flexibility of the reuse of functionality. [ 8 ]
Horizontal integration or Enterprise Service Bus (ESB) is an integration method in which a specialized subsystem is dedicated to communication between the other subsystems. This cuts the number of connections (interfaces) to only one per subsystem, each connecting directly to the ESB. The ESB is capable of translating one interface into another. This reduces the costs of integration and provides extreme flexibility. With systems integrated using this method, it is possible to completely replace one subsystem with another that provides similar functionality but exports a different interface, completely transparently to the rest of the subsystems. The only action required is to implement the new interface between the ESB and the new subsystem. [ 8 ]
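The pattern is easy to see in code. Below is a minimal, illustrative Python sketch of the ESB idea; all class and method names are invented for this example and are not any real ESB product's API. Each subsystem registers exactly one adapter with the bus, and swapping a subsystem only means registering a new adapter.

```python
# Minimal sketch of the ESB idea. All names here are invented for
# illustration; this is not a real ESB product's API.

class EnterpriseServiceBus:
    def __init__(self):
        self.adapters = {}  # subsystem name -> adapter callable

    def register(self, name, adapter):
        """Connect a subsystem to the bus through a single adapter."""
        self.adapters[name] = adapter

    def send(self, target, message):
        """Deliver a message to `target`; its adapter translates the interface."""
        return self.adapters[target](message)

bus = EnterpriseServiceBus()
# Replacing the "billing" subsystem later only requires registering a new
# adapter; no other subsystem is affected.
bus.register("billing", lambda msg: {"invoice_for": msg["customer"]})
print(bus.send("billing", {"customer": "ACME"}))  # {'invoice_for': 'ACME'}
```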
The horizontal scheme can be misleading, however, if it is thought that the cost of intermediate data transformation or the cost of shifting responsibility over business logic can be avoided. [ 8 ]
Industrial lifecycle integration is a system integration process that considers four categories or stages of integration: initial system implementation, engineering and design, project services, and operations. [ 9 ] This approach incorporates the requirements of each lifecycle stage of the industrial asset when integrating systems and subsystems. The key output is a standardized data architecture that can function throughout the life of the asset.
A common data format is an integration method to avoid every adapter having to convert data to/from every other application's format. Enterprise application integration (EAI) systems usually stipulate an application-independent (or common) data format. [ 10 ] The EAI system usually provides a data transformation service as well to help convert between application-specific and common formats. This is done in two steps: the adapter converts information from the application's format to the bus' common format. Then, semantic transformations are applied on this (converting zip codes to city names, splitting/merging objects from one application into objects in the other applications, and so on).
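A minimal Python sketch of the two-step conversion just described; the record formats and the zip-code table are made up for this example:

```python
# Sketch of the two-step EAI conversion. The formats are illustrative.

ZIP_TO_CITY = {"10115": "Berlin", "75001": "Paris"}  # toy lookup table

def adapter_to_common(app_record):
    # Step 1: syntactic conversion from the application's format
    # to the bus' common format.
    return {"name": app_record["CUST_NAME"], "zip": app_record["CUST_ZIP"]}

def semantic_transform(common_record):
    # Step 2: semantic transformation, e.g. zip code -> city name.
    common_record["city"] = ZIP_TO_CITY.get(common_record["zip"], "unknown")
    return common_record

record = {"CUST_NAME": "ACME", "CUST_ZIP": "10115"}
print(semantic_transform(adapter_to_common(record)))
# {'name': 'ACME', 'zip': '10115', 'city': 'Berlin'}
```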
System integration can be challenging for organizations, and these challenges can diminish the overall return on investment after implementing new software solutions. Some of these challenges include a lack of trust and willingness to share data with other companies, unwillingness to outsource various operations to a third party, a lack of clear communication and responsibilities, disagreement among partners on where functionality should reside, the high cost of integration, difficulty finding good talent, data silos , and a lack of common API standards. [ 11 ] These challenges create hurdles that "prevent or slow down business systems integration within and among companies". [ 12 ] Clear communication and simplified information exchange are key elements in building long-term system integrations that can support business requirements.
On the other hand, system integration projects can be highly rewarding. For out-of-date legacy systems, different forms of integration enable real-time data sharing. This can support, for example, publisher-subscriber data distribution models, consolidated databases, and event-driven architectures ; it can also reduce manual user data entry (which helps reduce errors), refresh or modernize an application's front-end, and offload querying and reporting from expensive operational systems to cheaper commodity systems (which can save costs, enable scalability, and free up processing power on the main operational system). Usually, an extensive cost-benefit analysis is undertaken to help determine whether an integration project is worth the effort. | https://en.wikipedia.org/wiki/System_integration
System integration testing ( SIT ) involves the overall testing of a complete system of many subsystem components or elements. The system under test may be composed of electromechanical or computer hardware, or software , or hardware with embedded software , or hardware/software with human-in-the-loop testing. SIT is typically performed on a larger integrated system of components and subassemblies that have previously undergone subsystem testing .
SIT consists, initially, of the "process of assembling the constituent parts of a system in a logical, cost-effective way, comprehensively checking system execution (all nominal and exceptional paths), and including a full functional check-out." [ 1 ] Following integration, system test is a process of " verifying that the system meets its requirements, and validating that the system performs in accordance with the customer or user expectations." [ 1 ]
In technology product development , the beginning of system integration testing is often the first time that an entire system has been assembled such that it can be tested as a whole. In order to make system testing most productive, the many constituent assemblies and subsystems will typically have gone through a subsystem test and successfully verified that each subsystem meets its requirements at the subsystem interface level.
In the context of software systems and software engineering , system integration testing is a testing process that exercises a software system's coexistence with others. With multiple integrated systems, assuming that each has already passed system testing, [ 2 ] SIT proceeds to test their required interactions.
Following this, the deliverables are passed on to acceptance testing. [ clarification needed ]
For software, SIT is part of the software testing life cycle for collaborative projects. Usually, a round of SIT precedes the user acceptance test (UAT) round. Software providers usually run a pre-SIT round of tests before consumers run their SIT test cases.
For example, if an integrator (company) is providing an enhancement to a customer's existing solution, then they integrate the new application layer and the new database layer with the customer's existing application and database layers.
After the integration is complete, users use both the new part (extended part) and old part (pre-existing part) of the integrated application to update data. A process should exist to exchange data imports and exports between the two data layers. This data exchange process should keep both systems up-to-date. The purpose of system integration testing is to ensure all parts of these systems successfully co-exist and exchange data where necessary. [ citation needed ]
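As an illustration, the following pytest-style sketch tests exactly this kind of data exchange; the two dictionaries and the sync() function are stand-ins for real legacy and new data layers:

```python
# Illustrative system integration test: a record written through the new
# layer must become visible through the pre-existing layer after the
# data-exchange process runs. The dicts stand in for real data layers.

legacy_db, new_db = {}, {}

def sync():
    # The data-exchange process under test: keep both layers up to date.
    legacy_db.update(new_db)
    new_db.update(legacy_db)

def test_data_exchange_keeps_layers_consistent():
    new_db["order-1"] = {"status": "shipped"}
    sync()
    assert legacy_db["order-1"] == {"status": "shipped"}
```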
There may be more parties in the integration, for example the primary customer (consumer) can have their own customers; there may be also multiple providers. [ citation needed ] | https://en.wikipedia.org/wiki/System_integration_testing |
In telecommunications , the term system integrity has the following meanings:
| https://en.wikipedia.org/wiki/System_integrity
In mathematics , a system of bilinear equations is a special sort of system of polynomial equations , where each equation equates a bilinear form with a constant (possibly zero). More precisely, given two sets of variables represented as coordinate vectors x and y , each equation of the system can be written y T A i x = g i , {\displaystyle y^{T}A_{i}x=g_{i},} where i is an integer whose value ranges from 1 to the number of equations, each A i {\displaystyle A_{i}} is a matrix , and each g i {\displaystyle g_{i}} is a real number . Systems of bilinear equations arise in many subjects including engineering , biology , and statistics .
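As a concrete illustration, the following Python snippet (with arbitrary example values) checks a candidate pair (x, y) against a single bilinear equation of this form:

```python
import numpy as np

# Check a candidate pair (x, y) against one bilinear equation y^T A x = g.
# The matrix and the constant are arbitrary example values.
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
g = 5.0
x = np.array([1.0, 1.0])
y = np.array([2.0, -1.0])
print(np.isclose(y @ A @ x, g))  # True: this (x, y) satisfies the equation
```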
| https://en.wikipedia.org/wiki/System_of_bilinear_equations
In mathematics, a system of differential equations is a finite set of differential equations . Such a system can be either linear or non-linear . Also, such a system can be either a system of ordinary differential equations or a system of partial differential equations . [ 1 ]
A first-order linear system of ODEs is a system in which every equation is first order and depends on the unknown functions linearly. Here we consider systems with an equal number of unknown functions and equations. These may be written as
d x j d t = a j 1 ( t ) x 1 + … + a j n ( t ) x n + g j ( t ) , j = 1 , … , n {\displaystyle {\frac {dx_{j}}{dt}}=a_{j1}(t)x_{1}+\ldots +a_{jn}(t)x_{n}+g_{j}(t),\qquad j=1,\ldots ,n}
where n {\displaystyle n} is a positive integer, and a j i ( t ) , g j ( t ) {\displaystyle a_{ji}(t),g_{j}(t)} are arbitrary functions of the independent variable t.
A first-order linear system of ODEs may be written in matrix form:
d d t [ x 1 x 2 ⋮ x n ] = [ a 11 … a 1 n a 21 … a 2 n ⋮ … ⋮ a n 1 a n n ] [ x 1 x 2 ⋮ x n ] + [ g 1 g 2 ⋮ g n ] , {\displaystyle {\frac {d}{dt}}{\begin{bmatrix}x_{1}\\x_{2}\\\vdots \\x_{n}\end{bmatrix}}={\begin{bmatrix}a_{11}&\ldots &a_{1n}\\a_{21}&\ldots &a_{2n}\\\vdots &\ldots &\vdots \\a_{n1}&&a_{nn}\end{bmatrix}}{\begin{bmatrix}x_{1}\\x_{2}\\\vdots \\x_{n}\end{bmatrix}}+{\begin{bmatrix}g_{1}\\g_{2}\\\vdots \\g_{n}\end{bmatrix}},}
or simply
x ˙ ( t ) = A ( t ) x ( t ) + g ( t ) {\displaystyle \mathbf {\dot {x}} (t)=\mathbf {A} (t)\mathbf {x} (t)+\mathbf {g} (t)} .
A linear system is said to be homogeneous if g j ( t ) = 0 {\displaystyle g_{j}(t)=0} for each j {\displaystyle j} and for all values of t {\displaystyle t} , otherwise it is referred to as non-homogeneous. Homogeneous systems have the property that if x 1 , … , x p {\displaystyle \mathbf {x_{1}} ,\ldots ,\mathbf {x_{p}} } are linearly independent solutions to the system, then any linear combination of these, C 1 x 1 + … + C p x p {\displaystyle C_{1}\mathbf {x_{1}} +\ldots +C_{p}\mathbf {x_{p}} } , is also a solution to the linear system where C 1 , … , C p {\displaystyle C_{1},\ldots ,C_{p}} are constant.
The case where the coefficients a j i ( t ) {\displaystyle a_{ji}(t)} are all constant has a general solution: x = C 1 v 1 e λ 1 t + … + C n v n e λ n t {\displaystyle \mathbf {x} =C_{1}\mathbf {v_{1}} e^{\lambda _{1}t}+\ldots +C_{n}\mathbf {v_{n}} e^{\lambda _{n}t}} , where λ i {\displaystyle \lambda _{i}} is an eigenvalue of the matrix A {\displaystyle \mathbf {A} } with corresponding eigenvectors v i {\displaystyle \mathbf {v} _{i}} for 1 ≤ i ≤ n {\displaystyle 1\leq i\leq n} . This general solution only applies in cases where A {\displaystyle \mathbf {A} } has n distinct eigenvalues, cases with fewer distinct eigenvalues must be treated differently.
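A short numpy sketch of this constant-coefficient recipe, using an illustrative 2×2 matrix with distinct eigenvalues and fitting the constants C i to an initial condition:

```python
import numpy as np

# For x' = A x with constant coefficients and distinct eigenvalues, the
# general solution is sum_i C_i v_i exp(lambda_i t). Here the constants
# are fitted to an initial condition x(0) = x0 (illustrative values).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])       # eigenvalues -1 and -2 (distinct)
eigvals, V = np.linalg.eig(A)      # columns of V are the eigenvectors v_i
x0 = np.array([1.0, 0.0])
C = np.linalg.solve(V, x0)         # choose C so that x(0) = x0

def x(t):
    # sum_i C_i v_i e^{lambda_i t}, written as a matrix product
    return (V * np.exp(eigvals * t)) @ C

print(x(0.0))                      # [1. 0.] up to rounding: recovers x0
```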
For an arbitrary system of ODEs, a set of solutions x 1 ( t ) , … , x n ( t ) {\displaystyle \mathbf {x_{1}} (t),\ldots ,\mathbf {x_{n}} (t)} are said to be linearly-independent if:
C 1 x 1 ( t ) + … + C n x n ( t ) = 0 ∀ t {\displaystyle C_{1}\mathbf {x_{1}} (t)+\ldots +C_{n}\mathbf {x_{n}} (t)=0\quad \forall t} is satisfied only for C 1 = … = C n = 0 {\displaystyle C_{1}=\ldots =C_{n}=0} .
A second-order differential equation x ¨ = f ( t , x , x ˙ ) {\displaystyle {\ddot {x}}=f(t,x,{\dot {x}})} may be converted into a system of first order linear differential equations by defining y = x ˙ {\displaystyle y={\dot {x}}} , which gives us the first-order system:
{ x ˙ = y y ˙ = f ( t , x , y ) {\displaystyle {\begin{cases}{\dot {x}}&=&y\\{\dot {y}}&=&f(t,x,y)\end{cases}}}
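The same reduction in code, using scipy's solve_ivp on the illustrative choice f(t, x, y) = −x, whose solution with x(0) = 0, x′(0) = 1 is sin t:

```python
import numpy as np
from scipy.integrate import solve_ivp

# The reduction above in code: x'' = f(t, x, x') becomes the first-order
# pair x' = y, y' = f. As an illustration, take f = -x (so x'' = -x).
def rhs(t, state):
    x, y = state
    return [y, -x]                 # x' = y,  y' = f(t, x, y) = -x

sol = solve_ivp(rhs, (0.0, np.pi / 2), [0.0, 1.0], rtol=1e-8)
print(sol.y[0, -1])                # ~1.0, i.e. sin(pi/2)
```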
Just as with any linear system of two equations, two solutions may be called linearly-independent if C 1 x 1 + C 2 x 2 = 0 {\displaystyle C_{1}\mathbf {x} _{1}+C_{2}\mathbf {x} _{2}=\mathbf {0} } implies C 1 = C 2 = 0 {\displaystyle C_{1}=C_{2}=0} , or equivalently that | x 1 x 2 x ˙ 1 x ˙ 2 | {\displaystyle {\begin{vmatrix}x_{1}&x_{2}\\{\dot {x}}_{1}&{\dot {x}}_{2}\end{vmatrix}}} is non-zero. This notion is extended to second-order systems, and any two solutions to a second-order ODE are called linearly-independent if they are linearly-independent in this sense.
Like any system of equations, a system of linear differential equations is said to be overdetermined if there are more equations than unknowns. For an overdetermined system to have a solution, it needs to satisfy the compatibility conditions . [ 2 ] For example, consider the system:
Then the necessary conditions for the system to have a solution are:
See also: Cauchy problem and Ehrenpreis's fundamental principle .
Perhaps the most famous example of a nonlinear system of differential equations is the Navier–Stokes equations . Unlike the linear case, the existence of a solution of a nonlinear system is a difficult problem (cf. Navier–Stokes existence and smoothness .)
Other examples of nonlinear systems of differential equations include the Lotka–Volterra equations .
A differential system is a means of studying a system of partial differential equations using geometric ideas such as differential forms and vector fields.
For example, the compatibility conditions of an overdetermined system of differential equations can be succinctly stated in terms of differential forms (i.e., for a form to be exact, it needs to be closed). See integrability conditions for differential systems for more.
| https://en.wikipedia.org/wiki/System_of_differential_equations
In mathematics , a set of simultaneous equations , also known as a system of equations or an equation system , is a finite set of equations for which common solutions are sought. An equation system is usually classified in the same manner as single equations , namely as a: | https://en.wikipedia.org/wiki/System_of_equations |
In mathematics , a system of linear equations (or linear system ) is a collection of two or more linear equations involving the same variables . [ 1 ] [ 2 ] For example,
is a system of three equations in the three variables x , y , z . A solution to a linear system is an assignment of values to the variables such that all the equations are simultaneously satisfied. In the example above, a solution is given by the ordered triple ( x , y , z ) = ( 1 , − 2 , − 2 ) , {\displaystyle (x,y,z)=(1,-2,-2),} since it makes all three equations valid.
Linear systems are a fundamental part of linear algebra , a subject used in most modern mathematics. Computational algorithms for finding the solutions are an important part of numerical linear algebra , and play a prominent role in engineering , physics , chemistry , computer science , and economics . A system of non-linear equations can often be approximated by a linear system (see linearization ), a helpful technique when making a mathematical model or computer simulation of a relatively complex system .
Very often, and in this article, the coefficients and solutions of the equations are constrained to be real or complex numbers , but the theory and algorithms apply to coefficients and solutions in any field . For other algebraic structures , other theories have been developed. For coefficients and solutions in an integral domain , such as the ring of integers , see Linear equation over a ring . For coefficients and solutions that are polynomials, see Gröbner basis . For finding the "best" integer solutions among many, see Integer linear programming . For an example of a more exotic structure to which linear algebra can be applied, see Tropical geometry .
The system of one equation in one unknown
has the solution
However, most interesting linear systems have at least two equations.
The simplest kind of nontrivial linear system involves two equations and two variables:
One method for solving such a system is as follows. First, solve the top equation for x {\displaystyle x} in terms of y {\displaystyle y} :
Now substitute this expression for x into the bottom equation:
This results in a single equation involving only the variable y {\displaystyle y} . Solving gives y = 1 {\displaystyle y=1} , and substituting this back into the equation for x {\displaystyle x} yields x = 3 2 {\displaystyle x={\frac {3}{2}}} . This method generalizes to systems with additional variables (see "elimination of variables" below, or the article on elementary algebra .)
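The two equations of this example are not reproduced above; the sympy sketch below uses an assumed pair (2x + 3y = 6 and 4x + 9y = 15), chosen only because it is consistent with the stated solution y = 1, x = 3/2:

```python
import sympy as sp

# Assumed example system (2x + 3y = 6, 4x + 9y = 15): picked to match the
# stated solution y = 1, x = 3/2; the original equations are not shown.
x, y = sp.symbols("x y")
print(sp.solve([sp.Eq(2*x + 3*y, 6), sp.Eq(4*x + 9*y, 15)], [x, y]))
# {x: 3/2, y: 1}
```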
A general system of m linear equations with n unknowns and coefficients can be written as
where x 1 , x 2 , … , x n {\displaystyle x_{1},x_{2},\dots ,x_{n}} are the unknowns, a 11 , a 12 , … , a m n {\displaystyle a_{11},a_{12},\dots ,a_{mn}} are the coefficients of the system, and b 1 , b 2 , … , b m {\displaystyle b_{1},b_{2},\dots ,b_{m}} are the constant terms. [ 3 ]
Often the coefficients and unknowns are real or complex numbers , but integers and rational numbers are also seen, as are polynomials and elements of an abstract algebraic structure .
One extremely helpful view is that each unknown is a weight for a column vector in a linear combination .
This allows all the language and theory of vector spaces (or more generally, modules ) to be brought to bear. For example, the collection of all possible linear combinations of the vectors on the left-hand side (LHS) is called their span , and the equations have a solution just when the right-hand vector is within that span. If every vector within that span has exactly one expression as a linear combination of the given left-hand vectors, then any solution is unique. In any event, the span has a basis of linearly independent vectors that do guarantee exactly one expression; and the number of vectors in that basis (its dimension ) cannot be larger than m or n , but it can be smaller. This is important because if we have m independent vectors a solution is guaranteed regardless of the right-hand side (RHS), and otherwise not guaranteed.
The vector equation is equivalent to a matrix equation of the form A x = b {\displaystyle A\mathbf {x} =\mathbf {b} } where A is an m × n matrix, x is a column vector with n entries, and b is a column vector with m entries. [ 4 ]
A = [ a 11 a 12 ⋯ a 1 n a 21 a 22 ⋯ a 2 n ⋮ ⋮ ⋱ ⋮ a m 1 a m 2 ⋯ a m n ] , x = [ x 1 x 2 ⋮ x n ] , b = [ b 1 b 2 ⋮ b m ] . {\displaystyle A={\begin{bmatrix}a_{11}&a_{12}&\cdots &a_{1n}\\a_{21}&a_{22}&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{m1}&a_{m2}&\cdots &a_{mn}\end{bmatrix}},\quad \mathbf {x} ={\begin{bmatrix}x_{1}\\x_{2}\\\vdots \\x_{n}\end{bmatrix}},\quad \mathbf {b} ={\begin{bmatrix}b_{1}\\b_{2}\\\vdots \\b_{m}\end{bmatrix}}.} The number of vectors in a basis for the span is now expressed as the rank of the matrix.
A solution of a linear system is an assignment of values to the variables x 1 , x 2 , … , x n {\displaystyle x_{1},x_{2},\dots ,x_{n}} such that each of the equations is satisfied. The set of all possible solutions is called the solution set . [ 5 ]
A linear system may behave in any one of three possible ways:
For a system involving two variables ( x and y ), each linear equation determines a line on the xy - plane . Because a solution to a linear system must satisfy all of the equations, the solution set is the intersection of these lines, and is hence either a line, a single point, or the empty set .
For three variables, each linear equation determines a plane in three-dimensional space , and the solution set is the intersection of these planes. Thus the solution set may be a plane, a line, a single point, or the empty set. For example, as three parallel planes do not have a common point, the solution set of their equations is empty; the solution set of the equations of three planes intersecting at a point is a single point; if three planes pass through two points, their equations have at least two common solutions; in fact the solution set is infinite and consists of the entire line passing through these points. [ 6 ]
For n variables, each linear equation determines a hyperplane in n -dimensional space . The solution set is the intersection of these hyperplanes, and is a flat , which may have any dimension lower than n .
In general, the behavior of a linear system is determined by the relationship between the number of equations and the number of unknowns. Here, "in general" means that a different behavior may occur for specific values of the coefficients of the equations.
In the first case, the dimension of the solution set is, in general, equal to n − m , where n is the number of variables and m is the number of equations.
The following pictures illustrate this trichotomy in the case of two variables:
The first system has infinitely many solutions, namely all of the points on the blue line. The second system has a single unique solution, namely the intersection of the two lines. The third system has no solutions, since the three lines share no common point.
It must be kept in mind that the pictures above show only the most common case (the general case). It is possible for a system of two equations and two unknowns to have no solution (if the two lines are parallel), or for a system of three equations and two unknowns to be solvable (if the three lines intersect at a single point).
A system of linear equations behaves differently from the general case if the equations are linearly dependent , or if it is inconsistent and has no more equations than unknowns.
The equations of a linear system are independent if none of the equations can be derived algebraically from the others. When the equations are independent, each equation contains new information about the variables, and removing any of the equations increases the size of the solution set. For linear equations, logical independence is the same as linear independence .
For example, the equations
are not independent — they are the same equation when scaled by a factor of two, and they would produce identical graphs. This is an example of equivalence in a system of linear equations.
For a more complicated example, the equations
are not independent, because the third equation is the sum of the other two. Indeed, any one of these equations can be derived from the other two, and any one of the equations can be removed without affecting the solution set. The graphs of these equations are three lines that intersect at a single point.
A linear system is inconsistent if it has no solution, and otherwise, it is said to be consistent . [ 7 ] When the system is inconsistent, it is possible to derive a contradiction from the equations, which may always be rewritten as the statement 0 = 1 .
For example, the equations
are inconsistent. In fact, by subtracting the first equation from the second one and multiplying both sides of the result by 1/6, we get 0 = 1 . The graphs of these equations on the xy -plane are a pair of parallel lines.
It is possible for three linear equations to be inconsistent, even though any two of them are consistent together. For example, the equations
are inconsistent. Adding the first two equations together gives 3 x + 2 y = 2 , which can be subtracted from the third equation to yield 0 = 1 . Any two of these equations have a common solution. The same phenomenon can occur for any number of equations.
In general, inconsistencies occur if the left-hand sides of the equations in a system are linearly dependent, and the constant terms do not satisfy the dependence relation. A system of equations whose left-hand sides are linearly independent is always consistent.
Putting it another way, according to the Rouché–Capelli theorem , any system of equations (overdetermined or otherwise) is inconsistent if the rank of the augmented matrix is greater than the rank of the coefficient matrix . If, on the other hand, the ranks of these two matrices are equal, the system must have at least one solution. The solution is unique if and only if the rank equals the number of variables. Otherwise the general solution has k free parameters where k is the difference between the number of variables and the rank; hence in such a case there is an infinitude of solutions. The rank of a system of equations (that is, the rank of the augmented matrix) can never be higher than [the number of variables] + 1, which means that a system with any number of equations can always be reduced to a system that has a number of independent equations that is at most equal to [the number of variables] + 1.
Two linear systems using the same set of variables are equivalent if each of the equations in the second system can be derived algebraically from the equations in the first system, and vice versa. Two systems are equivalent if either both are inconsistent or each equation of each of them is a linear combination of the equations of the other one. It follows that two linear systems are equivalent if and only if they have the same solution set.
There are several algorithms for solving a system of linear equations.
When the solution set is finite, it is reduced to a single element. In this case, the unique solution is described by a sequence of equations whose left-hand sides are the names of the unknowns and right-hand sides are the corresponding values, for example ( x = 3 , y = − 2 , z = 6 ) {\displaystyle (x=3,\;y=-2,\;z=6)} . When an order on the unknowns has been fixed, for example the alphabetical order, the solution may be described as a vector of values, like ( 3 , − 2 , 6 ) {\displaystyle (3,\,-2,\,6)} for the previous example.
To describe a set with an infinite number of solutions, typically some of the variables are designated as free (or independent , or as parameters ), meaning that they are allowed to take any value, while the remaining variables are dependent on the values of the free variables.
For example, consider the following system:
The solution set to this system can be described by the following equations:
Here z is the free variable, while x and y are dependent on z . Any point in the solution set can be obtained by first choosing a value for z , and then computing the corresponding values for x and y .
Each free variable gives the solution space one degree of freedom , and the number of degrees of freedom is equal to the dimension of the solution set. For example, the solution set for the above equations is a line, since a point in the solution set can be chosen by specifying the value of the single parameter z . A solution set with more free variables describes a plane, or a higher-dimensional flat.
Different choices for the free variables may lead to different descriptions of the same solution set. For example, the solution to the above equations can alternatively be described as follows:
Here x is the free variable, and y and z are dependent.
The simplest method for solving a system of linear equations is to repeatedly eliminate variables. This method can be described as follows:
For example, consider the following system:
Solving the first equation for x gives x = 5 + 2 z − 3 y {\displaystyle x=5+2z-3y} , and plugging this into the second and third equation yields
Since the left-hand sides of both of these equations equal y , equating their right-hand sides gives a single equation in z . We now have:
Substituting z = 2 into the second or third equation gives y = 8, and substituting the values of y and z into the first equation yields x = −15. Therefore, the solution set is the ordered triple ( x , y , z ) = ( − 15 , 8 , 2 ) {\displaystyle (x,y,z)=(-15,8,2)} .
In row reduction (also known as Gaussian elimination ), the linear system is represented as an augmented matrix [ 8 ]
This matrix is then modified using elementary row operations until it reaches reduced row echelon form . There are three types of elementary row operations: [ 8 ]
Because these operations are reversible, the augmented matrix produced always represents a linear system that is equivalent to the original.
There are several specific algorithms to row-reduce an augmented matrix, the simplest of which are Gaussian elimination and Gauss–Jordan elimination . The following computation shows Gauss–Jordan elimination applied to the matrix above:
The last matrix is in reduced row echelon form, and represents the system x = −15 , y = 8 , z = 2 . A comparison with the example in the previous section on the algebraic elimination of variables shows that these two methods are in fact the same; the difference lies in how the computations are written down.
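A compact Python implementation of Gauss–Jordan elimination with partial pivoting. The augmented matrix below is assumed for illustration; it is consistent with the first equation quoted earlier (x = 5 + 2z − 3y) and with the stated solution x = −15, y = 8, z = 2:

```python
import numpy as np

# Gauss-Jordan elimination with partial pivoting on an augmented matrix
# [A | b]. The matrix is an assumed reconstruction of the worked example.
def gauss_jordan(M):
    M = M.astype(float).copy()
    rows = M.shape[0]
    for col in range(rows):
        pivot = col + np.argmax(np.abs(M[col:, col]))  # partial pivoting
        M[[col, pivot]] = M[[pivot, col]]              # swap rows
        M[col] /= M[col, col]                          # scale pivot row to 1
        for r in range(rows):
            if r != col:
                M[r] -= M[r, col] * M[col]             # clear the column
    return M                                           # reduced row echelon form

aug = np.array([[1.0, 3.0, -2.0, 5.0],
                [3.0, 5.0, 6.0, 7.0],
                [2.0, 4.0, 3.0, 8.0]])
print(gauss_jordan(aug)[:, -1])                        # [-15.   8.   2.]
```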
Cramer's rule is an explicit formula for the solution of a system of linear equations, with each variable given by a quotient of two determinants . [ 9 ] For example, the solution to the system
is given by
For each variable, the denominator is the determinant of the matrix of coefficients , while the numerator is the determinant of a matrix in which one column has been replaced by the vector of constant terms.
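A small numpy sketch of Cramer's rule exactly as described, on an arbitrary illustrative 2×2 system:

```python
import numpy as np

# Cramer's rule: each unknown is det(A_i) / det(A), where A_i is A with
# its i-th column replaced by b. Values are illustrative.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])
det_A = np.linalg.det(A)
solution = []
for i in range(A.shape[1]):
    A_i = A.copy()
    A_i[:, i] = b                  # replace column i by the constant terms
    solution.append(np.linalg.det(A_i) / det_A)
print(solution)                    # [0.8, 1.4] (up to rounding)
```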
Though Cramer's rule is important theoretically, it has little practical value for large matrices, since the computation of large determinants is somewhat cumbersome. (Indeed, large determinants are most easily computed using row reduction.)
Further, Cramer's rule has very poor numerical properties, making it unsuitable for solving even small systems reliably, unless the operations are performed in rational arithmetic with unbounded precision. [ citation needed ]
If the equation system is expressed in the matrix form A x = b {\displaystyle A\mathbf {x} =\mathbf {b} } , the entire solution set can also be expressed in matrix form. If the matrix A is square (has m rows and n = m columns) and has full rank (all m rows are independent), then the system has a unique solution given by
where A − 1 {\displaystyle A^{-1}} is the inverse of A . More generally, regardless of whether m = n or not and regardless of the rank of A , all solutions (if any exist) are given using the Moore–Penrose inverse of A , denoted A + {\displaystyle A^{+}} , as follows:
where w {\displaystyle \mathbf {w} } is a vector of free parameters that ranges over all possible n ×1 vectors. A necessary and sufficient condition for any solution(s) to exist is that the potential solution obtained using w = 0 {\displaystyle \mathbf {w} =\mathbf {0} } satisfy A x = b {\displaystyle A\mathbf {x} =\mathbf {b} } — that is, that A A + b = b . {\displaystyle AA^{+}\mathbf {b} =\mathbf {b} .} If this condition does not hold, the equation system is inconsistent and has no solution. If the condition holds, the system is consistent and at least one solution exists. For example, in the above-mentioned case in which A is square and of full rank, A + {\displaystyle A^{+}} simply equals A − 1 {\displaystyle A^{-1}} and the general solution equation simplifies to
as previously stated, where w {\displaystyle \mathbf {w} } has completely dropped out of the solution, leaving only a single solution. In other cases, though, w {\displaystyle \mathbf {w} } remains and hence an infinitude of potential values of the free parameter vector w {\displaystyle \mathbf {w} } give an infinitude of solutions of the equation.
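The following numpy sketch evaluates this general-solution formula on an illustrative underdetermined system, where the free-parameter vector w genuinely matters:

```python
import numpy as np

# The general-solution formula above: x = A+ b + (I - A+ A) w, with A+
# the Moore-Penrose inverse. Values are illustrative.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])    # 2 equations, 3 unknowns
b = np.array([3.0, 2.0])
A_plus = np.linalg.pinv(A)

# Existence check: A A+ b == b, so the system is consistent.
print(np.allclose(A @ A_plus @ b, b))              # True

w = np.array([1.0, -1.0, 2.0])                     # any free-parameter vector
x = A_plus @ b + (np.eye(3) - A_plus @ A) @ w
print(np.allclose(A @ x, b))                       # True: x is a solution
```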
While systems of three or four equations can be readily solved by hand (see Cracovian ), computers are often used for larger systems. The standard algorithm for solving a system of linear equations is based on Gaussian elimination with some modifications. Firstly, it is essential to avoid division by small numbers, which may lead to inaccurate results. This can be done by reordering the equations if necessary, a process known as pivoting . Secondly, the algorithm does not exactly do Gaussian elimination, but it computes the LU decomposition of the matrix A . This is mostly an organizational tool, but it is much quicker if one has to solve several systems with the same matrix A but different vectors b .
If the matrix A has some special structure, this can be exploited to obtain faster or more accurate algorithms. For instance, systems with a symmetric positive definite matrix can be solved twice as fast with the Cholesky decomposition . Levinson recursion is a fast method for Toeplitz matrices . Special methods exist also for matrices with many zero elements (so-called sparse matrices ), which appear often in applications.
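For instance, a symmetric positive definite system can be solved with scipy's Cholesky routines (illustrative values):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Solving a symmetric positive definite system via Cholesky factorization,
# which needs roughly half the work of a general LU-based solve.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])         # symmetric positive definite
b = np.array([1.0, 2.0])
c, low = cho_factor(A)
x = cho_solve((c, low), b)
print(np.allclose(A @ x, b))       # True
```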
A completely different approach is often taken for very large systems, which would otherwise take too much time or memory. The idea is to start with an initial approximation to the solution (which does not have to be accurate at all), and to change this approximation in several steps to bring it closer to the true solution. Once the approximation is sufficiently accurate, this is taken to be the solution to the system. This leads to the class of iterative methods . For some sparse matrices, the introduction of randomness improves the speed of the iterative methods. [ 10 ] One example of an iterative method is the Jacobi method , where the matrix A {\displaystyle A} is split into its diagonal component D {\displaystyle D} and its non-diagonal component L + U {\displaystyle L+U} . An initial guess x ( 0 ) {\displaystyle {\mathbf {x}}^{(0)}} is used at the start of the algorithm. Each subsequent guess is computed using the iterative equation:
When the difference between guesses x ( k ) {\displaystyle {\mathbf {x}}^{(k)}} and x ( k + 1 ) {\displaystyle {\mathbf {x}}^{(k+1)}} is sufficiently small, the algorithm is said to have converged on the solution. [ 11 ]
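A direct Python implementation of the Jacobi iteration just described, on an illustrative diagonally dominant matrix (which guarantees convergence):

```python
import numpy as np

# Jacobi iteration: split A into its diagonal D and the rest L + U, then
# iterate x <- D^{-1} (b - (L + U) x) until successive guesses are close.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])    # diagonally dominant
b = np.array([1.0, 0.0, 3.0])

d = np.diag(A)                     # diagonal component D (as a vector)
R = A - np.diag(d)                 # non-diagonal component L + U
x = np.zeros_like(b)               # initial guess x(0)
for _ in range(100):
    x_new = (b - R @ x) / d
    if np.linalg.norm(x_new - x) < 1e-10:   # guesses close: converged
        x = x_new
        break
    x = x_new
print(np.allclose(A @ x, b, atol=1e-8))     # True
```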
There is also a quantum algorithm for linear systems of equations . [ 12 ]
A system of linear equations is homogeneous if all of the constant terms are zero:
A homogeneous system is equivalent to a matrix equation of the form
where A is an m × n matrix, x is a column vector with n entries, and 0 is the zero vector with m entries.
Every homogeneous system has at least one solution, known as the zero (or trivial ) solution, which is obtained by assigning the value of zero to each of the variables. If the system has a non-singular matrix ( det( A ) ≠ 0 ) then it is also the only solution. If the system has a singular matrix then there is a solution set with an infinite number of solutions. This solution set has the following additional properties:
These are exactly the properties required for the solution set to be a linear subspace of R n . In particular, the solution set to a homogeneous system is the same as the null space of the corresponding matrix A .
There is a close relationship between the solutions to a linear system and the solutions to the corresponding homogeneous system:
Specifically, if p is any specific solution to the linear system A x = b , then the entire solution set can be described as
Geometrically, this says that the solution set for A x = b is a translation of the solution set for A x = 0 . Specifically, the flat for the first system can be obtained by translating the linear subspace for the homogeneous system by the vector p .
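A numpy illustration of this description of the solution set, using scipy's null_space and an arbitrary example system:

```python
import numpy as np
from scipy.linalg import null_space

# The solution set of A x = b as "one particular solution plus the null
# space of A", as stated above. The system is an arbitrary example.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])
b = np.array([3.0, 1.0])
p = np.linalg.pinv(A) @ b          # a particular solution of A x = b
N = null_space(A)                  # basis of the solutions of A x = 0
x = p + N @ np.array([5.0])        # translate by any null-space vector
print(np.allclose(A @ x, b))       # True: still a solution
```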
This reasoning only applies if the system A x = b has at least one solution. This occurs if and only if the vector b lies in the image of the linear transformation A . | https://en.wikipedia.org/wiki/System_of_linear_equations |
A system of polynomial equations (sometimes simply a polynomial system ) is a set of simultaneous equations f 1 = 0, ..., f h = 0 where the f i are polynomials in several variables, say x 1 , ..., x n , over some field k .
A solution of a polynomial system is a set of values for the x i s which belong to some algebraically closed field extension K of k , and make all equations true. When k is the field of rational numbers , K is generally assumed to be the field of complex numbers , because each solution belongs to a field extension of k , which is isomorphic to a subfield of the complex numbers.
This article is about the methods for solving, that is, finding all solutions or describing them. As these methods are designed for being implemented in a computer, emphasis is given on fields k in which computation (including equality testing) is easy and efficient, that is the field of rational numbers and finite fields .
Searching for solutions that belong to a specific set is a problem which is generally much more difficult, and is outside the scope of this article, except for the case of the solutions in a given finite field. For the case of solutions of which all components are integers or rational numbers, see Diophantine equation .
A simple example of a system of polynomial equations is
Its solutions are the four pairs ( x , y ) = (1, 2), (2, 1), (-1, -2), (-2, -1) . These solutions can easily be checked by substitution, but more work is needed for proving that there are no other solutions.
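The example system itself is not reproduced above; the sympy sketch below uses an assumed pair of equations (x² + y² = 5 and xy = 2), chosen only because it has exactly the four stated solutions:

```python
import sympy as sp

# Assumed reconstruction of the example: the pair x^2 + y^2 = 5, x*y = 2
# is used here only because it has exactly the four stated solutions.
x, y = sp.symbols("x y")
print(sp.solve([x**2 + y**2 - 5, x*y - 2], [x, y]))
# the four pairs (1, 2), (2, 1), (-1, -2), (-2, -1), in some order
```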
The subject of this article is the study of generalizations of such examples, and the description of the methods that are used for computing the solutions.
A system of polynomial equations, or polynomial system is a collection of equations
where each f h is a polynomial in the indeterminates x 1 , ..., x m , with integer coefficients, or coefficients in some fixed field , often the field of rational numbers or a finite field . [ 1 ] Other fields of coefficients, such as the real numbers , are less often used, as their elements cannot be represented in a computer (only approximations of real numbers can be used in computations, and these approximations are always rational numbers).
A solution of a polynomial system is a tuple of values of ( x 1 , ..., x m ) that satisfies all equations of the polynomial system. The solutions are sought in the complex numbers , or more generally in an algebraically closed field containing the coefficients. In particular, in characteristic zero , all complex solutions are sought. Searching for the real or rational solutions are much more difficult problems that are not considered in this article.
The set of solutions is not always finite; for example, the solutions of the system
are a point ( x , y ) = (1,1) and a line x = 0 . [ 2 ] Even when the solution set is finite, there is, in general, no closed-form expression of the solutions (in the case of a single equation, this is Abel–Ruffini theorem ).
The Barth surface , shown in the figure is the geometric representation of the solutions of a polynomial system reduced to a single equation of degree 6 in 3 variables. Some of its numerous singular points are visible on the image. They are the solutions of a system of 4 equations of degree 5 in 3 variables. Such an overdetermined system has no solution in general (that is if the coefficients are not specific). If it has a finite number of solutions, this number is at most 5 3 = 125 , by Bézout's theorem . However, it has been shown that, for the case of the singular points of a surface of degree 6, the maximum number of solutions is 65, and is reached by the Barth surface.
A system is overdetermined if the number of equations is higher than the number of variables. A system is inconsistent if it has no complex solution (or, if the coefficients are not complex numbers, no solution in an algebraically closed field containing the coefficients). By Hilbert's Nullstellensatz this means that 1 is a linear combination (with polynomials as coefficients) of the first members of the equations. Most but not all overdetermined systems, when constructed with random coefficients, are inconsistent. For example, the system x 3 – 1 = 0, x 2 – 1 = 0 is overdetermined (having two equations but only one unknown), but it is not inconsistent since it has the solution x = 1 .
A system is underdetermined if the number of equations is lower than the number of the variables. An underdetermined system is either inconsistent or has infinitely many complex solutions (or solutions in an algebraically closed field that contains the coefficients of the equations). This is a non-trivial result of commutative algebra that involves, in particular, Hilbert's Nullstellensatz and Krull's principal ideal theorem .
A system is zero-dimensional if it has a finite number of complex solutions (or solutions in an algebraically closed field). This terminology comes from the fact that the algebraic variety of the solutions has dimension zero. A system with infinitely many solutions is said to be positive-dimensional .
A zero-dimensional system with as many equations as variables is sometimes said to be well-behaved . [ 3 ] Bézout's theorem asserts that a well-behaved system whose equations have degrees d 1 , ..., d n has at most d 1 ⋅⋅⋅ d n solutions. This bound is sharp. If all the degrees are equal to d , this bound becomes d n and is exponential in the number of variables. (The fundamental theorem of algebra is the special case n = 1 .)
This exponential behavior makes solving polynomial systems difficult and explains why there are few solvers that are able to automatically solve systems with Bézout's bound higher than, say, 25 (three equations of degree 3 or five equations of degree 2 are beyond this bound). [ citation needed ]
The first thing to do for solving a polynomial system is to decide whether it is inconsistent, zero-dimensional or positive dimensional. This may be done by the computation of a Gröbner basis of the left-hand sides of the equations. The system is inconsistent if this Gröbner basis is reduced to 1. The system is zero-dimensional if, for every variable there is a leading monomial of some element of the Gröbner basis which is a pure power of this variable. For this test, the best monomial order (that is the one which leads generally to the fastest computation) is usually the graded reverse lexicographic one (grevlex).
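A sympy sketch of this test on the assumed example system used earlier (the library call computes a Gröbner basis for the grevlex order):

```python
import sympy as sp

# Consistency/dimension test sketch: the system is inconsistent iff the
# Groebner basis reduces to [1]; it is zero-dimensional iff, for each
# variable, some leading monomial of the basis is a pure power of it.
x, y = sp.symbols("x y")
G = sp.groebner([x**2 + y**2 - 5, x*y - 2], x, y, order="grevlex")
print(list(G))   # not [1], so the system is consistent; since it has
                 # finitely many solutions, pure powers of x and of y
                 # occur among the leading monomials (zero-dimensional)
```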
If the system is positive-dimensional , it has infinitely many solutions. It is thus not possible to enumerate them. It follows that, in this case, solving may only mean "finding a description of the solutions from which the relevant properties of the solutions are easy to extract". There is no commonly accepted such description. In fact there are many different "relevant properties", which involve almost every subfield of algebraic geometry .
A natural example of such a question concerning positive-dimensional systems is the following: decide if a polynomial system over the rational numbers has a finite number of real solutions and compute them . A generalization of this question is find at least one solution in each connected component of the set of real solutions of a polynomial system . The classical algorithm for solving these questions is cylindrical algebraic decomposition , which has a doubly exponential computational complexity and therefore cannot be used in practice, except for very small examples.
For zero-dimensional systems, solving consists of computing all the solutions. There are two different ways of outputting the solutions. The most common way is possible only for real or complex solutions, and consists of outputting numeric approximations of the solutions. Such a solution is called numeric . A solution is certified if it is provided with a bound on the error of the approximations, and if this bound separates the different solutions.
The other way of representing the solutions is said to be algebraic . It uses the fact that, for a zero-dimensional system, the solutions belong to the algebraic closure of the field k of the coefficients of the system. There are several ways to represent the solution in an algebraic closure, which are discussed below. All of them allow one to compute a numerical approximation of the solutions by solving one or several univariate equations. For this computation, it is preferable to use a representation that involves solving only one univariate polynomial per solution, because computing the roots of a polynomial which has approximate coefficients is a highly unstable problem .
A trigonometric equation is an equation g = 0 where g is a trigonometric polynomial . Such an equation may be converted into a polynomial system by expanding the sines and cosines in it (using sum and difference formulas ), replacing sin( x ) and cos( x ) by two new variables s and c and adding the new equation s 2 + c 2 – 1 = 0 .
For example, because of the identity
solving the equation
is equivalent to solving the polynomial system
For each solution ( c 0 , s 0 ) of this system, there is a unique solution x of the equation such that 0 ≤ x < 2 π .
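A self-contained sympy illustration of the conversion, using its own small example (sin x + cos x = 1) rather than the one above:

```python
import sympy as sp

# Instance of the conversion: solve sin(x) + cos(x) = 1 by substituting
# sin x -> s, cos x -> c and adding s^2 + c^2 - 1 = 0.
s, c = sp.symbols("s c")
print(sp.solve([s + c - 1, s**2 + c**2 - 1], [s, c]))
# [(0, 1), (1, 0)] in some order: each (s0, c0) gives a unique x in
# [0, 2*pi), here x = 0 and x = pi/2.
```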
In the case of this simple example, it may be unclear whether the system is easier to solve than the equation or not. For more complicated examples, systematic methods for solving the equation directly are lacking, while software is available for automatically solving the corresponding system.
When solving a system over a finite field k with q elements, one is primarily interested in the solutions in k . As the elements of k are exactly the solutions of the equation x q – x = 0 , it suffices, for restricting the solutions to k , to add the equation x i q – x i = 0 for each variable x i .
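For small q this restriction can even be checked by brute force, which the following Python sketch does for an illustrative system over GF(5):

```python
import itertools

# For small q, restricting to GF(q) can be checked by brute force over
# k^n, which is what the added equations x_i^q - x_i = 0 express.
# Illustrative system over GF(5): x^2 + y = 1 and x + y = 1.
q = 5
solutions = [(x, y)
             for x, y in itertools.product(range(q), repeat=2)
             if (x * x + y - 1) % q == 0 and (x + y - 1) % q == 0]
print(solutions)                   # [(0, 1), (1, 0)]
```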
The elements of an algebraic number field are usually represented as polynomials in a generator of the field which satisfies some univariate polynomial equation. To work with a polynomial system whose coefficients belong to a number field, it suffices to consider this generator as a new variable and to add the equation of the generator to the equations of the system. Thus solving a polynomial system over a number field is reduced to solving another system over the rational numbers.
For example, if a system contains 2 {\displaystyle {\sqrt {2}}} , a system over the rational numbers is obtained by adding the equation r 2 2 – 2 = 0 and replacing 2 {\displaystyle {\sqrt {2}}} by r 2 in the other equations.
In the case of a finite field, the same transformation always allows one to suppose that the field k has prime order.
The usual way of representing the solutions is through zero-dimensional regular chains. Such a chain consists of a sequence of polynomials f 1 ( x 1 ) , f 2 ( x 1 , x 2 ) , ..., f n ( x 1 , ..., x n ) such that, for every i such that 1 ≤ i ≤ n
To such a regular chain is associated a triangular system of equations
The solutions of this system are obtained by solving the first univariate equation, substituting the solutions in the other equations, then solving the second equation which is now univariate, and so on. The definition of regular chains implies that the univariate equation obtained from f i has degree d i and thus that the system has d 1 ... d n solutions, provided that there is no multiple root in this resolution process ( fundamental theorem of algebra ).
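A sympy sketch of this back-substitution process on an illustrative regular chain f 1 = x² − 2, f 2 = y² − x, which has d 1 d 2 = 4 solutions:

```python
import sympy as sp

# Back-substitution through an illustrative regular chain: solve the
# first univariate equation, substitute each root into the next
# polynomial, and repeat.
x, y = sp.symbols("x y")
chain = [x**2 - 2, y**2 - x]
solutions = [dict()]
for poly, var in zip(chain, [x, y]):
    solutions = [{**sol, var: root}
                 for sol in solutions
                 for root in sp.roots(poly.subs(sol), var)]
print(len(solutions))              # 4 = d1 * d2, as the text predicts
```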
Every zero-dimensional system of polynomial equations is equivalent (i.e. has the same solutions) to a finite number of regular chains. Several regular chains may be needed, as it is the case for the following system which has three solutions.
There are several algorithms for computing a triangular decomposition of an arbitrary polynomial system (not necessarily zero-dimensional) [ 4 ] into regular chains (or regular semi-algebraic systems ).
There is also an algorithm which is specific to the zero-dimensional case and is competitive, in this case, with the direct algorithms. It consists in computing first the Gröbner basis for the graded reverse lexicographic order (grevlex) , then deducing the lexicographical Gröbner basis by FGLM algorithm [ 5 ] and finally applying the Lextriangular algorithm. [ 6 ]
This representation of the solutions is fully convenient for coefficients in a finite field. However, for rational coefficients, two aspects have to be taken care of:
The first issue has been solved by Dahan and Schost: [ 7 ] [ 8 ] Among the sets of regular chains that represent a given set of solutions, there is a set for which the coefficients are explicitly bounded in terms of the size of the input system, with a nearly optimal bound. This set, called equiprojectable decomposition , depends only on the choice of the coordinates. This allows the use of modular methods for computing efficiently the equiprojectable decomposition. [ 9 ]
The second issue is generally solved by outputting regular chains of a special form, sometimes called shape lemma , for which all d i but the first one are equal to 1 . For getting such regular chains, one may have to add a further variable, called separating variable , which is given the index 0 . The rational univariate representation , described below, allows computing such a special regular chain, satisfying Dahan–Schost bound, by starting from either a regular chain or a Gröbner basis.
The rational univariate representation or RUR is a representation of the solutions of a zero-dimensional polynomial system over the rational numbers which has been introduced by F. Rouillier. [ 10 ]
A RUR of a zero-dimensional system consists of a linear combination x 0 of the variables, called the separating variable , and a system of equations [ 11 ]
where h is a univariate polynomial in x 0 of degree D and g 0 , ..., g n are univariate polynomials in x 0 of degree less than D .
Given a zero-dimensional polynomial system over the rational numbers, the RUR has the following properties.
For example, for the system in the previous section, every linear combination of the variables, except the multiples of x , y and x + y , is a separating variable. If one chooses t = x – y / 2 as a separating variable, then the RUR is
The RUR is uniquely defined for a given separating variable, independently of any algorithm, and it preserves the multiplicities of the roots. This is a notable difference with triangular decompositions (even the equiprojectable decomposition), which, in general, do not preserve multiplicities. The RUR shares with equiprojectable decomposition the property of producing an output with coefficients of relatively small size.
For zero-dimensional systems, the RUR allows retrieval of the numeric values of the solutions by solving a single univariate polynomial and substituting them in rational functions. This allows production of certified approximations of the solutions to any given precision.
Moreover, the univariate polynomial h ( x 0 ) of the RUR may be factorized, and this gives a RUR for every irreducible factor. This provides the prime decomposition of the given ideal (that is the primary decomposition of the radical of the ideal). In practice, this provides an output with much smaller coefficients, especially in the case of systems with high multiplicities.
Contrarily to triangular decompositions and equiprojectable decompositions, the RUR is not defined in positive dimension.
The general numerical algorithms which are designed for any system of nonlinear equations also work for polynomial systems. However, the specific methods will generally be preferred, as the general methods do not usually allow one to find all solutions. In particular, when a general method does not find any solution, this is usually not an indication that there is no solution.
Nevertheless, two methods deserve to be mentioned here.
This is a semi-numeric method which supposes that the number of equations is equal to the number of variables. This method is relatively old but it has been dramatically improved in the last decades. [ 13 ]
This method divides into three steps. First an upper bound on the number of solutions is computed. This bound has to be as sharp as possible. Therefore, it is computed by at least four different methods and the best value, say N {\displaystyle N} , is kept.
In the second step, a system g 1 = 0 , … , g n = 0 {\displaystyle g_{1}=0,\,\ldots ,\,g_{n}=0} of polynomial equations is generated which has exactly N {\displaystyle N} solutions that are easy to compute. This new system has the same number n {\displaystyle n} of variables and the same number n {\displaystyle n} of equations and the same general structure as the system to solve, f 1 = 0 , … , f n = 0 {\displaystyle f_{1}=0,\,\ldots ,\,f_{n}=0} .
Then a homotopy between the two systems is considered. It consists, for example, of the straight line between the two systems, but other paths may be considered, in particular to avoid some singularities, in the system
The homotopy continuation consists in deforming the parameter t {\displaystyle t} from 0 to 1 and following the N {\displaystyle N} solutions during this deformation. This gives the desired solutions for t = 1 {\displaystyle t=1} . Following means that, if t 1 < t 2 {\displaystyle t_{1}<t_{2}} , the solutions for t = t 2 {\displaystyle t=t_{2}} are deduced from the solutions for t = t 1 {\displaystyle t=t_{1}} by Newton's method. The difficulty is choosing the step t 2 − t 1 {\displaystyle t_{2}-t_{1}} well: if it is too large, Newton's method may converge slowly and may even jump from one solution path to another; if it is too small, the number of steps slows down the method.
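A deliberately minimal one-variable Python sketch of the whole procedure (a toy path tracker, not a production implementation): the easy system g(x) = x² − 1 = 0 is deformed into the target f(x) = x² − 2x − 3 = 0, and each of the two start solutions is followed with Newton corrections at small steps in t:

```python
# Toy one-variable path tracker: H(x, t) = (1 - t) g(x) + t f(x) deforms
# the easy system g(x) = x^2 - 1 into the target f(x) = x^2 - 2x - 3.
def H(x, t):
    return (1 - t) * (x**2 - 1) + t * (x**2 - 2*x - 3)

def dH(x, t):
    return (1 - t) * (2 * x) + t * (2 * x - 2)

roots = []
for x in (1.0, -1.0):              # the two easy solutions at t = 0
    t = 0.0
    while t < 1.0:
        t = min(t + 0.05, 1.0)     # small steps avoid jumping between paths
        for _ in range(10):        # Newton correction at this value of t
            x -= H(x, t) / dH(x, t)
    roots.append(round(x, 6))
print(roots)                       # [3.0, -1.0], the roots of f
```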
To deduce the numeric values of the solutions from a RUR seems easy: it suffices to compute the roots of the univariate polynomial and to substitute them in the other equations. This is not so easy because the evaluation of a polynomial at the roots of another polynomial is highly unstable.
The roots of the univariate polynomial must therefore be computed to a high precision, which may not be definable once and for all. There are two algorithms which fulfill this requirement.
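The precision-doubling strategy (also recommended below in connection with MPSolve) can be sketched as follows. This is a hypothetical example: the RUR polynomials h, g0, gx and gy are invented for illustration, and the mpmath library stands in for an arbitrary-precision root finder.

```python
from mpmath import mp, polyroots, polyval

# Hypothetical RUR of a zero-dimensional system (coefficients, highest degree first):
#   h(t) = t^4 - 6t^2 + 4,  x = gx(t)/g0(t),  y = gy(t)/g0(t),  with g0 = h'.
h  = [1, 0, -6, 0, 4]
g0 = [4, 0, -12, 0]
gx = [1, 0, 2, 0]
gy = [2, 0, -1, 0]

def solutions(dps):
    mp.dps = dps                                    # working decimal precision
    roots = sorted(polyroots(h, maxsteps=100),
                   key=lambda t: (t.real, t.imag))  # stable order across precisions
    return [(polyval(gx, t) / polyval(g0, t),
             polyval(gy, t) / polyval(g0, t)) for t in roots]

# Double the precision until two successive solution sets agree to the tolerance.
prev, dps = None, 15
while True:
    cur = solutions(dps)
    if prev is not None and all(abs(a - c) < 1e-10 and abs(b - d) < 1e-10
                                for (a, b), (c, d) in zip(prev, cur)):
        break
    prev, dps = cur, 2 * dps

print(cur)   # (x, y) approximations, stable to the chosen tolerance
```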
There are at least four software packages which can solve zero-dimensional systems automatically (by automatically, one means that no human intervention is needed between input and output, and thus that the user needs no knowledge of the method). There are also several other software packages which may be useful for solving zero-dimensional systems. Some of them are listed after the automatic solvers.
The Maple function RootFinding[Isolate] takes as input any polynomial system over the rational numbers (if some coefficients are floating point numbers, they are converted to rational numbers) and outputs the real solutions, represented either as intervals of rational numbers or (optionally) as floating point approximations of arbitrary precision. If the system is not zero-dimensional, this is signaled as an error.
Internally, this solver, designed by F. Rouillier, first computes a Gröbner basis and then a Rational Univariate Representation, from which the required approximations of the solutions are deduced. It works routinely for systems having up to a few hundred complex solutions.
The rational univariate representation may be computed with Maple function Groebner[RationalUnivariateRepresentation] .
To extract all the complex solutions from a rational univariate representation, one may use MPSolve, which computes the complex roots of univariate polynomials to any precision. It is recommended to run MPSolve several times, doubling the precision each time, until the solutions remain stable, as the substitution of the roots into the rational functions for the input variables can be highly unstable.
The second solver is PHCpack, [ 13 ] [ 16 ] written under the direction of J. Verschelde. PHCpack implements the homotopy continuation method. This solver computes the isolated complex solutions of polynomial systems having as many equations as variables.
The third solver is Bertini, [ 17 ] [ 18 ] written by D. J. Bates, J. D. Hauenstein, A. J. Sommese, and C. W. Wampler. Bertini uses numerical homotopy continuation with adaptive precision. In addition to computing zero-dimensional solution sets, both PHCpack and Bertini are capable of working with positive dimensional solution sets.
The fourth solver is the Maple library RegularChains , written by Marc Moreno-Maza and collaborators. It contains various functions for solving polynomial systems by means of regular chains . | https://en.wikipedia.org/wiki/System_of_polynomial_equations |
A system of record ( SOR ) or source system of record ( SSoR ) is a data management term for an information storage system (commonly implemented on a computer system running a database management system ) that is the authoritative data source for a given data element or piece of information, such as a row (or record) in a table . In data vault it is referred to as the record source . [ 1 ]
The need to identify systems of record can become acute in organizations where management information systems have been built by taking output data from multiple source systems, re-processing this data, and then re-presenting the result for a new business use.
In these cases, multiple information systems may disagree about the same piece of information. These disagreements may stem from semantic differences, differences in opinion, use of different sources, differences in the timing of the extract, transform, load processes that create the data they report against, or may simply be the result of bugs.
The integrity and validity of any data set is open to question when there is no traceable connection to a good source, and listing a source system of record is a solution to this. Where the integrity of the data is vital, if there is an agreed system of record, the data element must either be linked to the system of record or extracted directly from it. In other cases, the provenance and estimated data quality should be documented.
The "system of record" approach is a good fit for environments where both:
In diverse environments, one instead needs to support the presence of multiple opinions. Consumers may accept different authorities or may differ on what constitutes an authoritative source: researchers may prefer carefully vetted data, while tactical military systems may require the most recent credible report.
| https://en.wikipedia.org/wiki/System_of_record |
System of systems engineering ( SoSE ) is a set of developing processes, tools, and methods for designing, re-designing and deploying solutions to system-of-systems challenges.
System of Systems Engineering (SoSE) methodology is heavily used in U.S. Department of Defense applications, but is increasingly being applied to non-defense problems such as architectural design problems in air and auto transportation, healthcare, global communication networks, search and rescue, space exploration, Industry 4.0, [ 1 ] and many other system-of-systems application domains. SoSE is more than systems engineering of monolithic, complex systems because design for system-of-systems problems is performed under some level of uncertainty in the requirements and the constituent systems, and it involves considerations at multiple levels and in multiple domains. [ 2 ] [ 3 ] Whereas systems engineering focuses on building the system right, SoSE focuses on choosing the right system(s) and their interactions to satisfy the requirements.
System-of-Systems Engineering and Systems Engineering are related but different fields of study. Whereas systems engineering addresses the development and operations of monolithic products, SoSE addresses the development and operations of evolving programs. In other words, traditional systems engineering seeks to optimize an individual system (i.e., the product), while SoSE seeks to optimize a network of various interacting legacy and new systems brought together to satisfy the multiple objectives of the program. SoSE should enable decision-makers to understand the implications of various choices on technical performance, cost, extensibility and flexibility over time; thus, an effective SoSE methodology should prepare decision-makers to design informed architectural solutions for System-of-Systems problems.
Due to varied methodologies and domains of application in the existing literature, there is no single unified consensus on the processes involved in System-of-Systems Engineering. One of the proposed SoSE frameworks, by Dr. Daniel A. DeLaurentis, recommends a three-phase method in which an SoS problem is defined (understood), abstracted, modeled, and analyzed for behavioral patterns. [ 3 ] More information on this method and other proposed methods can be found in the SoSE-focused organizations and SoSE literature listed in the subsequent sections.
System on TPTP is an online interface to several automated theorem proving systems and other automated reasoning tools.
It allows users to run the systems either on problems from the latest releases of the TPTP problem library or on user-supplied problems in the TPTP syntax.
The system is maintained by Geoff Sutcliffe at the University of Miami . In November 2010, it featured more than 50 systems, including both theorem provers and model finders. [ 1 ] System on TPTP can either run user-selected systems, or pick systems automatically based on problem features, and run them in parallel. [ 2 ]
| https://en.wikipedia.org/wiki/System_on_TPTP |
System reconfiguration attacks modify settings on a user's PC for malicious purposes. For example, URLs in a favorites file might be modified to direct users to look-alike websites: a bank website URL may be changed from "bankofabc.com" to "bancofabc.com". [ 1 ]
| https://en.wikipedia.org/wiki/System_reconfiguration_attacks |
System requirements in spacecraft systems are the specific system requirements needed to design and operate a spacecraft or a spacecraft subsystem .
Spacecraft systems are normally developed under the responsibility of space agencies such as NASA, ESA, etc. In the space sector, standardized terms and processes have been introduced to allow for unambiguous communication between all partners and efficient usage of all documents. For instance, the life cycle of space systems is divided into phases. [ citation needed ]
At the end of phase B, the system requirements, together with a statement of work, are sent out to request proposals from industry.
Both technical and nontechnical system requirements are contained in the statement of work.
The technical system requirements documented in the System Specification remain at mission level: system functions and performance, orbit, launch vehicle, etc. [ citation needed ] Non-technical system (task) requirements include cost and progress reporting, documentation maintenance, etc.
The customer (requirements) specification is answered by the contractor with a design-to specification.
For example, the requirement "Columbus shall be launched by the Space Shuttle." is detailed in the contractor system specification "Columbus shall be a cylindrical pressurized module with max. length of 6.9 meters and 4.5 meters diameter as agreed in the Shuttle/Columbus ICD."
The spacecraft's systems specification, according to David Michael Harland (2005), usually also defines the operational environment of the spacecraft. It is mostly defined "as a model - often provided by the scientific community from available data - in the form of a set of curves, numerical tables, or software, usually with a nominal expectation and the minimal and maximum profiles which the environment is not expected to exceed". [ 3 ]
A typical industry-generated system specification for a spacecraft (e.g. the Columbus Design Spec, COL-RIBRE-SPE-0028, iss. 10/F, 06.25.2004) is structured into requirement paragraphs.
Each requirement paragraph consists of the requirement to be fulfilled by the product to be delivered and the verification requirement (Review of design, analysis, test, inspection).
The spacecraft system specification also defines the subsystems of the spacecraft, e.g.: structure, data management subsystem incl. software, electrical power, mechanical, etc. [ citation needed ] For each subsystem, a subsystem specification is prepared by the prime contractor with the same specification structure shown above, including references to the parent paragraph in the system specification. In the same way, the subsystem contractor prepares an assembly or unit specification. All these specifications are listed in a so-called specification tree showing all specifications and their linkage as well as the issue/date of each specification. | https://en.wikipedia.org/wiki/System_requirements_(spacecraft_system) |
A system requirements specification (abbreviated SysRS, to distinguish it from a software requirements specification (SRS)) is a structured collection of information that embodies the requirements of a system. [ 1 ]
A business analyst (BA), sometimes titled system analyst , is responsible for analyzing the business needs of their clients and stakeholders to help identify business problems and propose solutions. Within the systems development life cycle domain, the BA typically performs a liaison function between the business side of an enterprise and the information technology department or external service providers. | https://en.wikipedia.org/wiki/System_requirements_specification |
In computing , a system resource , or simply resource , is any physical or virtual component of limited availability that is accessible to a computer . All connected devices and internal system components are resources. Virtual system resources include files (concretely file handles ), network connections (concretely network sockets ), and memory areas.
Managing resources is referred to as resource management , and includes both preventing resource leaks (not releasing a resource when a process has finished using it) and dealing with resource contention (when multiple processes wish to access a limited resource). Computing resources are used in cloud computing to provide services through networks.
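To illustrate leak prevention, the following Python sketch acquires a network socket (a limited virtual resource) and guarantees its release even if an error occurs, by using a context manager; the host and port in the commented usage are placeholders.

```python
from contextlib import contextmanager
import socket

@contextmanager
def tcp_connection(host: str, port: int):
    """Acquire a TCP socket and guarantee its release."""
    sock = socket.create_connection((host, port), timeout=5)
    try:
        yield sock
    finally:
        sock.close()   # always released, on success or failure: no leak

# Usage (placeholder host/port): the socket is released when the block exits.
# with tcp_connection("example.com", 80) as sock:
#     sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
#     print(sock.recv(200))
```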
Some resources, notably memory and storage space, have a notion of "location", and one can distinguish contiguous allocations from non-contiguous allocations: for example, allocating 1 GB of memory in a single block versus allocating it in 1,024 blocks each of size 1 MB. The scattering of allocations or free space into many non-contiguous pieces is known as fragmentation, which often severely impacts performance, so contiguous free space is a subcategory of the general resource of storage space.
One can also distinguish compressible resources from incompressible resources. [ 1 ] Compressible resources, generally throughput ones such as CPU and network bandwidth, can be throttled benignly: the user will be slowed proportionally to the throttling, but will otherwise proceed normally. Other resources, generally storage ones such as memory, cannot be throttled without either causing failure (if a process cannot allocate enough memory, it typically cannot run) or severe performance degradation, such as due to thrashing (if a working set does not fit into memory and requires frequent paging, progress will slow significantly). The distinction is not always sharp; as mentioned, a paging system can allow main memory (primary storage) to be compressed (by paging to hard drive (secondary storage)), and some systems allow discardable memory for caches, which is compressible without disastrous performance impact. Electrical power is to some degree compressible: without power (or without sufficient voltage) an electrical device cannot run, and will stop or crash, but some devices, notably mobile phones, can allow degraded operation at reduced power consumption, or can allow the device to be suspended but not terminated, with much lower power consumption. | https://en.wikipedia.org/wiki/System_resource |
The system safety concept calls for a risk management strategy based on identification, analysis of hazards and application of remedial controls using a systems-based approach. [ 1 ] This is different from traditional safety strategies, which rely on control of conditions and causes of an accident based either on epidemiological analysis or on investigation of individual past accidents. [ 2 ] The concept of system safety is useful in demonstrating adequacy of technologies when difficulties are faced with probabilistic risk analysis . [ 3 ] The underlying principle is one of synergy : a whole is more than the sum of its parts. A systems-based approach to safety requires the application of scientific, technical and managerial skills to hazard identification, hazard analysis , and elimination, control, or management of hazards throughout the life-cycle of a system, program, project or an activity or a product. [ 1 ] " Hazop " is one of several techniques available for identification of hazards.
A system is defined as a set or group of interacting, interrelated or interdependent elements or parts that are organized and integrated to form a collective unity or a unified whole, to achieve a common objective. [ 4 ] [ 5 ] This definition lays emphasis on the interactions between the parts of a system and the external environment to perform a specific task or function in the context of an operational environment. This focus on interactions is to take a view on the expected or unexpected demands (inputs) that will be placed on the system and see whether necessary and sufficient resources are available to process the demands. These might take the form of stresses. These stresses can be either expected, as part of normal operations, or unexpected, as part of unforeseen acts or conditions that produce beyond-normal (i.e., abnormal) stresses. This definition of a system, therefore, includes not only the product or the process but also the influences that the surrounding environment (including human interactions) may have on the product's or process's safety performance. Conversely, system safety also takes into account the effects of the system on its surrounding environment. Thus, a correct definition and management of interfaces becomes very important. [ 4 ] [ 5 ] Broader definitions of a system include the hardware, software, human systems integration, procedures and training. Therefore, system safety as part of the systems engineering process should systematically address all of these domains and areas in engineering and operations in a concerted fashion to prevent, eliminate and control hazards.
A "system", therefore, has an implicit as well as explicit definition of boundaries to which the systematic process of hazard identification, hazard analysis and control is applied. The system can range in complexity from a crewed spacecraft to an autonomous machine tool. The system safety concept helps the system designer(s) to model, analyse, gain awareness about, understand and eliminate the hazards, and apply controls to achieve an acceptable level of safety. Ineffective decision making in safety matters is regarded as the first step in the sequence of hazardous flow of events in the "Swiss cheese" model of accident causation. [ 6 ] Communications regarding system risk have an important role to play in correcting risk perceptions by creating, analysing and understanding an information model that shows what factors create and control the hazardous process. [ 3 ] For almost any system, product, or service, the most effective means of limiting product liability and accident risks is to implement an organized system safety function, beginning in the conceptual design phase and continuing through to its development, fabrication, testing, production, use and ultimate disposal. The aim of the system safety concept is to gain assurance that a system and associated functionality behaves safely and is safe to operate. This assurance is necessary. Technological advances in the past have produced positive as well as negative effects. [ 1 ]
A root cause analysis identifies the set of multiple causes that together might create a potential accident. Root cause techniques have been successfully borrowed from other disciplines and adapted to meet the needs of the system safety concept, most notably the tree structure from fault tree analysis, which was originally an engineering technique. [ 7 ] The root cause analysis techniques can be categorised into two groups: a) tree techniques, and b) check list methods. There are several root cause analysis techniques, e.g. Management Oversight and Risk Tree (MORT) analysis. [ 2 ] [ 8 ] [ 9 ] Others are Event and Causal Factor Analysis (ECFA), Multilinear Events Sequencing, Sequentially Timed Events Plotting Procedure, and Savannah River Plant Root Cause Analysis System. [ 7 ]
Safety engineering describes some methods used in nuclear and other industries. Traditional safety engineering techniques focus on the consequences of human error and do not investigate the causes or reasons for its occurrence. The system safety concept can be applied to this traditional field to help identify the set of conditions for safe operation of the system. Modern, more complex systems in the military and NASA, with computer applications and controls, require functional hazard analyses and a set of detailed specifications at all levels that address safety attributes to be inherent in the design. A system safety program plan, preliminary hazard analyses, functional hazard assessments and system safety assessments are intended to produce evidence-based documentation that will drive safety systems that are certifiable and that will hold up in litigation.

The primary focus of any system safety plan, hazard analysis and safety assessment is to implement a comprehensive process to systematically predict or identify the operational behavior of any safety-critical failure condition, fault condition or human error that could lead to a hazard and potential mishap. This is used to influence requirements that drive control strategies and safety attributes, in the form of safety design features or safety devices, to prevent, eliminate and control (mitigate) safety risk. In the distant past, hazards were the focus for very simple systems, but as technology and complexity advanced in the 1970s and 1980s, more modern and effective methods and techniques were invented using holistic approaches. Modern system safety is comprehensive: it is risk-based, requirements-based, function-based and criteria-based, with goal-structured objectives to yield engineering evidence verifying that safety functionality is deterministic and that risk is acceptable in the intended operating environment. Software-intensive systems that command, control and monitor safety-critical functions require extensive software safety analyses to influence detailed design requirements, especially in more autonomous or robotic systems with little or no operator intervention. Systems of systems, such as a modern military aircraft or fighting ship with multiple integrated parts and systems, sensor fusion, networking and interoperable systems, require much partnering and coordination with the multiple suppliers and vendors responsible for ensuring that safety is a vital attribute planned into the overall system.
Weapon system safety is an important application of the system safety field, due to the potentially destructive effects of a system failure or malfunction. A healthily skeptical attitude towards the system at the requirements-definition and drawing-board stage, expressed by conducting functional hazard analyses, helps in learning about the factors that create hazards and the mitigations that control them. A rigorous process is usually formally implemented as part of systems engineering to influence the design and improve the situation before errors and faults weaken the system defences and cause accidents. [ 1 ] [ 2 ] [ 3 ] [ 4 ]
Typically, weapons systems pertaining to ships , land vehicles, guided missiles and aircraft differ in hazards and effects; some are inherent, such as explosives, and some are created by the specific operating environments (as in, for example, aircraft sustaining flight). In the military aircraft industry, safety-critical functions are identified, the overall design architecture of hardware, software and human systems integration is thoroughly analyzed, and explicit safety requirements are derived and specified during a proven hazard analysis process to establish safeguards ensuring that essential functions are not lost or function correctly in a predictable manner. Conducting comprehensive hazard analyses and determining the credible faults, failure conditions, contributing influences and causal factors that can contribute to or cause hazards are an essential part of the systems engineering process. Explicit safety requirements must be derived, developed, implemented, and verified with objective safety evidence and ample safety documentation showing due diligence. Highly complex software-intensive systems with many complex interactions affecting safety-critical functions require extensive planning, special know-how, use of analytical tools, accurate models, modern methods and proven techniques. Prevention of mishaps is the objective. | https://en.wikipedia.org/wiki/System_safety |
The system size expansion , also known as van Kampen's expansion or the Ω-expansion , is a technique pioneered by Nico van Kampen [ 1 ] used in the analysis of stochastic processes . Specifically, it allows one to find an approximation to the solution of a master equation with nonlinear transition rates. The leading order term of the expansion is given by the linear noise approximation , in which the master equation is approximated by a Fokker–Planck equation with linear coefficients determined by the transition rates and stoichiometry of the system.
Less formally, it is normally straightforward to write down a mathematical description of a system where processes happen randomly (for example, radioactive atoms randomly decay in a physical system, or genes are expressed stochastically in a cell). However, these mathematical descriptions are often too difficult to solve for the study of the system's statistics (for example, the mean and variance of the number of atoms or proteins as a function of time). The system size expansion allows one to obtain an approximate statistical description that can be solved much more easily than the master equation.
Systems that admit a treatment with the system size expansion may be described by a probability distribution P(X, t), giving the probability of observing the system in state X at time t. X may be, for example, a vector with elements corresponding to the number of molecules of different chemical species in a system. In a system of size Ω (intuitively interpreted as the volume), we will adopt the following nomenclature: X is a vector of macroscopic copy numbers, x = X/Ω is a vector of concentrations, and φ is a vector of deterministic concentrations, as they would appear according to the rate equation in an infinite system. x and X are thus quantities subject to stochastic effects.
A master equation describes the time evolution of this probability. [ 1 ] Henceforth, a system of chemical reactions [ 2 ] will be discussed to provide a concrete example, although the nomenclature of "species" and "reactions" is generalisable. A system involving N species and R reactions can be described with the master equation

$$\frac{dP(\mathbf{X},t)}{dt}=\Omega\sum_{j=1}^{R}\left(\prod_{i}\mathbb{E}^{-S_{ij}}-1\right)f_{j}(\mathbf{x},\Omega)\,P(\mathbf{X},t).$$
Here, Ω is the system size, E is an operator which will be addressed later, S is the stoichiometric matrix for the system (in which element S_ij gives the stoichiometric coefficient for species i in reaction j), and f_j is the rate of reaction j given a state x and system size Ω.
$\mathbb{E}^{-S_{ij}}$ is a step operator, [ 1 ] removing S_ij from the i-th element of its argument. For example, $\mathbb{E}^{-S_{23}}f(x_{1},x_{2},x_{3})=f(x_{1},x_{2}-S_{23},x_{3})$. This formalism will be useful later.
The above equation can be interpreted as follows. The initial sum on the RHS is over all reactions. For each reaction j, the brackets immediately following the sum give two terms. The term with the simple coefficient −1 gives the probability flux away from a given state X due to reaction j changing the state. The term preceded by the product of step operators gives the probability flux due to reaction j changing a different state X′ into state X. The product of step operators constructs this state X′.
For example, consider the (linear) chemical system involving two chemical species X_1 and X_2 and the reaction X_1 → X_2. In this system, N = 2 (species), R = 1 (reactions). A state of the system is a vector X = {n_1, n_2}, where n_1 and n_2 are the numbers of molecules of X_1 and X_2 respectively. Let f_1(x, Ω) = n_1/Ω = x_1, so that the rate of reaction 1 (the only reaction) depends on the concentration of X_1. The stoichiometry matrix is (−1, 1)^T.
Then the master equation reads:

$$\frac{dP(\mathbf{X},t)}{dt}=\Omega\left[f_{1}\!\left(\frac{\mathbf{X}+\Delta\mathbf{X}}{\Omega}\right)P(\mathbf{X}+\Delta\mathbf{X},t)-f_{1}\!\left(\frac{\mathbf{X}}{\Omega}\right)P(\mathbf{X},t)\right],$$
where ΔX = {1, −1} is the shift caused by the action of the product of step operators, required to change state X to a precursor state X′.
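For intuition, this master equation can be sampled exactly with Gillespie's stochastic simulation algorithm. Below is a minimal Python sketch, assuming a unit rate constant so that the propensity of the single reaction equals n_1, matching Ω f_1 = n_1 above; the initial copy numbers are illustrative.

```python
import random

def gillespie(n1, n2, t_end, k=1.0):
    """Exact stochastic simulation of X1 -> X2 with propensity k * n1."""
    t, trajectory = 0.0, [(0.0, n1, n2)]
    while n1 > 0 and t < t_end:
        t += random.expovariate(k * n1)   # exponential waiting time to next event
        n1, n2 = n1 - 1, n2 + 1           # apply the stoichiometry (-1, +1)
        trajectory.append((t, n1, n2))
    return trajectory

# Averaged over many runs, n1(t) follows the macroscopic decay n1(0) * exp(-k t).
print(gillespie(n1=100, n2=0, t_end=5.0)[-1])
```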
If the master equation possesses nonlinear transition rates, it may be impossible to solve it analytically. The system size expansion utilises the ansatz that the variance of the steady-state probability distribution of constituent numbers in a population scales like the system size. This ansatz is used to expand the master equation in terms of a small parameter given by the inverse system size.
Specifically, let us write X_i, the copy number of component i, as a sum of its "deterministic" value (a scaled-up concentration) and a random variable ξ_i, scaled by Ω^{1/2}:

$$X_{i}=\Omega\phi_{i}+\Omega^{1/2}\xi_{i}.$$
The probability distribution of X can then be rewritten in terms of the vector of random variables ξ:

$$P(\mathbf{X},t)=P(\Omega{\boldsymbol\phi}+\Omega^{1/2}{\boldsymbol\xi},t)\equiv\Pi({\boldsymbol\xi},t).$$
Consider how to write reaction rates f and the step operator E in terms of this new random variable. Taylor expansion of the transition rates gives:

$$f_{j}(\mathbf{x})=f_{j}({\boldsymbol\phi}+\Omega^{-1/2}{\boldsymbol\xi})=f_{j}({\boldsymbol\phi})+\Omega^{-1/2}\sum_{i}\frac{\partial f_{j}}{\partial\phi_{i}}\,\xi_{i}+O(\Omega^{-1}).$$
The step operator has the effect $\mathbb{E}f(n)\rightarrow f(n+1)$ and hence $\mathbb{E}f(\xi)\rightarrow f(\xi+\Omega^{-1/2})$; expanding to second order gives

$$\prod_{i}\mathbb{E}^{-S_{ij}}\simeq 1-\Omega^{-1/2}\sum_{i}S_{ij}\frac{\partial}{\partial\xi_{i}}+\frac{\Omega^{-1}}{2}\sum_{i,k}S_{ij}S_{kj}\frac{\partial^{2}}{\partial\xi_{i}\partial\xi_{k}}+O(\Omega^{-3/2}).$$
We are now in a position to recast the master equation in terms of ξ. Gathering terms in different powers of Ω makes the resulting expression more transparent. The terms of order Ω^{1/2} cancel, provided that the deterministic concentrations obey the macroscopic reaction equation

$$\frac{d\phi_{i}}{dt}=\sum_{j}S_{ij}f_{j}({\boldsymbol\phi}).$$
The terms of order Ω^0 are more interesting: collecting them yields a linear Fokker–Planck equation,

$$\frac{\partial\Pi({\boldsymbol\xi},t)}{\partial t}=-\sum_{i,k}A_{ik}\frac{\partial(\xi_{k}\Pi)}{\partial\xi_{i}}+\frac{1}{2}\sum_{i,k}[\mathbf{BB}^{T}]_{ik}\frac{\partial^{2}\Pi}{\partial\xi_{i}\,\partial\xi_{k}},$$

where

$$A_{ik}=\sum_{j}S_{ij}\frac{\partial f_{j}({\boldsymbol\phi})}{\partial\phi_{k}}$$

and

$$[\mathbf{BB}^{T}]_{ik}=\sum_{j}S_{ij}S_{kj}f_{j}({\boldsymbol\phi}).$$
The time evolution of Π is then governed by this linear Fokker–Planck equation with coefficient matrices A and BB^T (in the large-Ω limit, terms of O(Ω^{−1/2}) may be neglected, which is termed the linear noise approximation ). With knowledge of the reaction rates f and stoichiometry S, the moments of Π can then be calculated.
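In particular, the stationary covariance C of the fluctuations solves the Lyapunov equation A C + C A^T + BB^T = 0. Below is a minimal Python sketch for a hypothetical one-species birth-death process (production at constant rate k, degradation at rate γx), for which the linear noise approximation happens to be exact; the rate values are illustrative.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Birth-death process: 0 -> X at rate k, X -> 0 at rate gamma * x.
k, gamma = 10.0, 1.0
S = np.array([[1.0, -1.0]])            # stoichiometry: 1 species, 2 reactions

phi = k / gamma                        # fixed point of dphi/dt = k - gamma * phi
f_ss = np.array([k, gamma * phi])      # reaction rates at the steady state
J = np.array([[0.0], [gamma]])         # Jacobian entries df_j/dphi at steady state
A = S @ J                              # drift matrix A_ik = sum_j S_ij df_j/dphi_k
BBT = S @ np.diag(f_ss) @ S.T          # diffusion matrix sum_j S_ij S_kj f_j(phi)

# Stationary covariance of the fluctuations xi: A C + C A^T + BB^T = 0.
C = solve_continuous_lyapunov(A, -BBT)
print(C)   # [[10.]], i.e. variance k/gamma, the expected Poissonian result
```

For this linear network the result reproduces the exact Poisson statistics; for nonlinear propensities the same recipe gives the leading-order approximation discussed above.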
The approximation implies that fluctuations around the mean are Gaussian distributed. Non-Gaussian features of the distributions can be computed by taking into account higher order terms in the expansion. [ 3 ]
The linear noise approximation has become a popular technique for estimating the size of intrinsic noise in terms of coefficients of variation and Fano factors for molecular species in intracellular pathways. The second moments obtained from the linear noise approximation (on which the noise measures are based) are exact only if the pathway is composed of first-order reactions. However, bimolecular reactions such as enzyme-substrate , protein-protein and protein-DNA interactions are ubiquitous elements of all known pathways; for such cases, the linear noise approximation can give estimates which are accurate in the limit of large reaction volumes. Since this limit is taken at constant concentrations, it follows that the linear noise approximation gives accurate results in the limit of large molecule numbers and becomes less reliable for pathways characterized by many species with low copy numbers of molecules.
The system size expansion and linear noise approximation have been made available via automated derivation in an open source software project Multi-Scale Modelling Tool (MuMoT). [ 4 ]
A number of studies have elucidated cases of the insufficiency of the linear noise approximation in biological contexts by comparing its predictions with those of stochastic simulations. [ 5 ] [ 6 ] This has led to the investigation of higher order terms of the system size expansion that go beyond the linear approximation. These terms have been used to obtain more accurate moment estimates for the mean concentrations and for the variances of the concentration fluctuations in intracellular pathways. In particular, the leading order corrections to the linear noise approximation yield corrections to the conventional rate equations . [ 7 ] Terms of higher order have also been used to obtain corrections to the variance and covariance estimates of the linear noise approximation. [ 8 ] [ 9 ] The linear noise approximation and corrections to it can be computed using the open source software intrinsic Noise Analyzer . The corrections have been shown to be particularly significant for allosteric and non-allosteric enzyme-mediated reactions in intracellular compartments . | https://en.wikipedia.org/wiki/System_size_expansion |
System under test ( SUT ) refers to a system that is being tested for correct operation. According to ISTQB it is the test object. [ 1 ] [ 2 ] [ 3 ]
From a unit testing perspective, the system under test represents all of the classes in a test that are not predefined pieces of code such as stubs or mocks. Each of these can have its own configuration (a name and a version), allowing a series of tests to become progressively more precise according to the quantity and quality of the system under test.
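A minimal sketch using Python's unittest framework: the Checkout class plays the role of the system under test, while a stub with a canned answer stands in for one of its collaborators. All class and method names here are hypothetical.

```python
import unittest

class PriceService:                    # collaborator, replaced by a stub in the test
    def price(self, sku: str) -> float:
        raise NotImplementedError("the real implementation calls a remote API")

class Checkout:                        # the system under test (SUT)
    def __init__(self, prices: PriceService):
        self.prices = prices

    def total(self, skus):
        return sum(self.prices.price(s) for s in skus)

class StubPriceService(PriceService):
    def price(self, sku):              # canned answer; not part of the SUT
        return 2.50

class CheckoutTest(unittest.TestCase):
    def test_total_sums_item_prices(self):
        sut = Checkout(StubPriceService())
        self.assertEqual(sut.total(["a", "b"]), 5.00)

if __name__ == "__main__":
    unittest.main()
```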
| https://en.wikipedia.org/wiki/System_under_test |
Systema Naturae (originally in Latin written Systema Naturæ with the ligature æ ) is one of the major works of the Swedish botanist, zoologist and physician Carl Linnaeus (1707–1778) and introduced the Linnaean taxonomy . Although the system, now known as binomial nomenclature , was partially developed by the Bauhin brothers, Gaspard and Johann , [ 2 ] Linnaeus was the first to use it consistently throughout his book. The first edition was published in 1735. The full title of the 10th edition (1758), which was the most important one, was Systema naturæ per regna tria naturæ, secundum classes, ordines, genera, species, cum characteribus, differentiis, synonymis, locis , which appeared in English in 1806 with the title: "A General System of Nature, Through the Three Grand Kingdoms of Animals, Vegetables, and Minerals, Systematically Divided Into their Several Classes, Orders, Genera, Species, and Varieties, with their Habitations, Manners, Economy, Structure and Peculiarities". [ 3 ]
The tenth edition of this book (1758) is considered the starting point of zoological nomenclature . [ 4 ] In 1766–1768 Linnaeus published the much enhanced 12th edition , the last under his authorship. Another again enhanced work in the same style titled " Systema Naturae " was published by Johann Friedrich Gmelin between 1788 and 1793. Since at least the early 20th century, zoologists have commonly recognized this as the last edition belonging to this series. [ 5 ] [ 6 ] [ 7 ]
Linnaeus (later known as "Carl von Linné", after his ennoblement in 1761) [ 8 ] published the first edition of Systema Naturae in the year 1735, during his stay in the Netherlands . As was customary for the scientific literature of its day, the book was published in Latin . In it, he outlined his ideas for the hierarchical classification of the natural world, dividing it into the animal kingdom ( regnum animale ), the plant kingdom ( regnum vegetabile ), and the " mineral kingdom " ( regnum lapideum ).
Linnaeus's Systema Naturae lists only about 10,000 species of organisms, of which about 6,000 are plants and 4,236 are animals. [ 9 ] According to the historian of botany William T. Stearn , "Even in 1753 he believed that the number of species of plants in the whole world would hardly reach 10,000; in his whole career he named about 7,700 species of flowering plants." [ 9 ]
Linnaeus developed his classification of the plant kingdom in an attempt to describe and understand the natural world as a reflection of the logic of God 's creation. [ 10 ] His sexual system , where species with the same number of stamens were treated in the same group, was convenient but in his view artificial. [ 10 ] Linnaeus believed in God's creation and that there were no deeper relationships to be expressed. The classification of animals was more natural [ compared to? ] . For instance, humans were for the first time placed together with other primates , as Anthropomorpha . They were also divided into four varieties , as distinguished by skin color and corresponding with the four known continents and temperaments . [ 11 ] The tenth edition expanded on these varieties with behavioral and cultural traits that the Linnean Society acknowledges as having cemented colonial stereotypes and provided one of the foundations for scientific racism . [ 12 ]
As a result of the popularity of the work, and the number of new specimens sent to him from around the world, Linnaeus kept publishing new and ever-expanding editions of his work. [ 13 ] It grew from eleven very large pages in the first edition (1735) to 2,400 pages in the 12th edition (1766–1768). [ 14 ] Also, as the work progressed, he made changes: in the first edition, whales were classified as fishes , following the work of Linnaeus' friend and "father of ichthyology " Peter Artedi ; in the 10th edition, published in 1758, whales were moved into the mammal class. In this same edition, he introduced two-part names (see binomen ) for animal species, something that he had done for plant species (see binary name ) in the 1753 publication of Species Plantarum . The system eventually developed into modern Linnaean taxonomy , a hierarchically organized biological classification .
After Linnaeus' health declined in the early 1770s, publication of editions of Systema Naturae went in two directions. Another Swedish scientist, Johan Andreas Murray issued the Regnum Vegetabile section separately in 1774 as the Systema Vegetabilium , rather confusingly labelled the 13th edition. [ 15 ] Meanwhile, a 13th edition of the entire Systema appeared in parts between 1788 and 1793. It was as the Systema Vegetabilium that Linnaeus' work became widely known in England following translation from the Latin by the Lichfield Botanical Society , as A System of Vegetables (1783–1785). [ 16 ]
In his Imperium Naturæ , Linnaeus established three kingdoms, namely Regnum Animale , Regnum Vegetabile and Regnum Lapideum . This approach, the Animal, Vegetable and Mineral Kingdoms, survives until today in the popular mind, notably in the form of parlour games: "Is it animal, vegetable or mineral ?" The classification was based on five levels: kingdom , class , order , genus , and species . While species and genus were seen as God-given (or "natural"), the three higher levels were seen by Linnaeus as constructs. The concept behind the set ranks being applied to all groups was to make a system that was easy to remember and navigate, a task in which most agree he succeeded.
Linnaeus's work had a huge impact on science; it was indispensable as a foundation for biological nomenclature , now regulated by the Nomenclature Codes . Two of his works, the first edition of the Species Plantarum (1753) for plants and the 10th edition of the Systema Naturæ (1758), are accepted to be among the starting points of nomenclature. Most of his names for species and genera were published at very early dates, and thus take priority over those of other, later authors. In zoology there is one exception, which is a monograph on Swedish spiders, Svenska Spindlar , [ 17 ] published by Carl Clerck in 1757, so the names established there take priority over the Linnean names. [ 18 ] His exceptional importance to science lay less in the value of his taxonomy than in his deployment of skilful young students abroad to collect specimens. [ 19 ] [ page needed ] At the close of the 18th century, his system had effectively become the standard for biological classification.
Only in the animal kingdom is the higher taxonomy of Linnaeus still more or less recognizable and some of these names are still in use, but usually not quite for the same groups as used by Linnaeus. He divided the Animal Kingdom into six classes; in the tenth edition (1758), these were:
Linnaeus was one of the first scientists to classify humans as primates (originally Anthropomorpha for "manlike"), eliciting some controversy for placing people among animals and thus not ruling over nature . [ 20 ] He distinguished humans ( Homo sapiens ) from Homo troglodytes , a species of human-like creatures with exaggerated or non-human characteristics, despite finding limited evidence. [ 20 ] He divided Homo sapiens into four varieties , corresponding with the four known continents and four temperaments (some editions also classify Ferus wild children and Monstrosus monstrous to accommodate adaptations to extreme environments ). [ 21 ] The first edition included Europæus albescens (whitish Europeans), Americanus rubescens (reddish Americans), Asiaticus fuscus (tawny Asians), and Africanus nigriculus (blackish Africans). [ 11 ] The tenth edition solidified these descriptions by removing the "ish" qualifiers (e.g. albus "white" instead of albescens "whitish") and revising the characterization of Asiaticus from fuscus (tawny) to luridus (pale yellow). [ 11 ] [ 22 ] It also incorporates behavioral and cultural traits that the Linnean Society recognizes as having cemented colonial stereotypes and provided one of the foundations for scientific racism . [ 12 ]
The orders and classes of plants, according to his Systema Sexuale , were never intended to represent natural groups (as opposed to his ordines naturales in his Philosophia Botanica ) but only for use in identification. They were used in that sense well into the 19th century.
The Linnaean classes for plants, in the Sexual System, were:
Linnaeus's taxonomy of minerals has long since fallen out of use. In the 10th edition, 1758, of the Systema Naturæ , the Linnaean classes were:
Gmelin's thirteenth ( decima tertia ) edition of Systema Naturae (1788–1793) should be carefully distinguished from the more limited Systema Vegetabilium first prepared and published by Johan Andreas Murray in 1774 (but labelled as "thirteenth edition"). [ 15 ]
The dates of publication for Gmelin's edition were the following: [ 24 ] | https://en.wikipedia.org/wiki/Systema_Naturae |
Systematic Protein Investigative Research Environment (SPIRE) provides web-based, experiment-specific mass spectrometry (MS) proteomics analysis to identify proteins and peptides, as well as label-free expression and relative expression analyses. SPIRE provides a web-interface and generates results in both interactive and simple data formats.
SPIRE's analyses are based on an experimental design that generates false discovery rates and local false discovery rates (FDR, LFDR) and integrates open-source search and data analysis methods. By combining X! Tandem , OMSSA and SpectraST, SPIRE can produce an increase in protein IDs (50-90%) over current combinations of scoring and single search engines while also providing accurate multi-faceted error estimation.
SPIRE also connects results to publicly available proteomics data through its Multi-Omics Profiling Expression Database (MOPED). SPIRE can provide analysis and annotation for user-supplied protein ID and expression data. Users can upload data (standardized appropriately) or mail in data files. | https://en.wikipedia.org/wiki/Systematic_Protein_Investigative_Research_Environment |
The Systematic and Evolutionary Biogeographical Association (SEBA) promotes an open and diverse international biogeographical community by assisting in sharing biogeographical information. Enhancing communication between biogeographers of all countries, SEBA contributes to the development of the theory and practice of systematic and evolutionary biogeography . [ 1 ]
SEBA was established in 2006. It is a non-profit organization and a scientific member of the International Union of Biological Sciences .
SEBA promotes the International Code of Area Nomenclature (ICAN) , a standardized system of biogeographical reference. | https://en.wikipedia.org/wiki/Systematic_and_Evolutionary_Biogeography_Association |
A systematic element name is the temporary name assigned to an unknown or recently synthesized chemical element . A systematic symbol is also derived from this name.
In chemistry, a transuranic element receives a permanent name and symbol only after its synthesis has been confirmed. In some cases, such as the Transfermium Wars , controversies over the formal name and symbol have been protracted and highly political. In order to discuss such elements without ambiguity, the International Union of Pure and Applied Chemistry (IUPAC) uses a set of rules, adopted in 1978, to assign a temporary systematic name and symbol to each such element. This approach to naming originated in the successful development of regular rules for the naming of organic compounds .
The temporary names derive systematically from the element's atomic number , and apply only to 101 ≤ Z ≤ 999. [ 1 ] Each digit is translated into a "numerical root": 0 = nil, 1 = un, 2 = bi, 3 = tri, 4 = quad, 5 = pent, 6 = hex, 7 = sept, 8 = oct, 9 = enn. The roots are concatenated , and the name is completed by the suffix -ium . Some of the roots are Latin and others are Greek , to avoid two digits starting with the same letter (for example, the Greek-derived pent is used instead of the Latin-derived quint to avoid confusion with quad for 4). There are two elision rules designed to prevent odd-looking names: the final "n" of enn is dropped when it is followed by nil , and the final "i" of bi and tri is dropped when followed by the suffix -ium .
Traditionally the suffix -ium was used only for metals (or at least elements that were expected to be metallic), and other elements used different suffixes: halogens used -ine and noble gases used -on instead. However, the systematic names use -ium for all elements regardless of group. Thus, elements 117 and 118 were ununseptium and ununoctium , not ununseptine and ununocton . [ 2 ] This does not apply to the trivial names these elements receive once confirmed; thus, elements 117 and 118 are now tennessine and oganesson , respectively. For these trivial names, all elements receive the suffix -ium except those in group 17, which receive -ine (like the halogens), and those in group 18, which receive -on (like the noble gases). [ 2 ] (That being said, tennessine and oganesson are expected to behave quite differently from their lighter congeners.)
The systematic symbol is formed by taking the first letter of each root, converting the first to uppercase. This results in three-letter symbols instead of the one- or two-letter symbols used for named elements. The rationale is that any scheme producing two-letter symbols will have to deviate from full systematicity to avoid collisions with the symbols of the permanently named elements.
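These rules are mechanical enough to implement directly. Below is a short Python sketch using the digit roots and the two elision rules described above:

```python
ROOTS = ["nil", "un", "bi", "tri", "quad", "pent", "hex", "sept", "oct", "enn"]

def systematic_name(z):
    """Return (name, symbol) for atomic number z, 101 <= z <= 999."""
    digits = [int(d) for d in str(z)]
    name = "".join(ROOTS[d] for d in digits) + "ium"
    # Elision: 'enn' + 'nil' -> 'ennil', and 'bi'/'tri' + 'ium' -> 'bium'/'trium'.
    name = name.replace("nnn", "nn").replace("iium", "ium")
    symbol = "".join(ROOTS[d][0] for d in digits).capitalize()
    return name, symbol

print(systematic_name(119))   # ('ununennium', 'Uue')
print(systematic_name(120))   # ('unbinilium', 'Ubn')
print(systematic_name(112))   # ('ununbium', 'Uub'), the former name of copernicium
```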
These rules are set out in the IUPAC publication Recommendations for the Naming of Elements of Atomic Numbers Greater than 100 .
As of 2019, all 118 discovered elements have received individual permanent names and symbols. [ 3 ] Therefore, systematic names and symbols are now used only for the undiscovered elements beyond element 118, oganesson. When such an element is discovered, it will keep its systematic name and symbol until its discovery meets the criteria of and is accepted by the IUPAC/IUPAP Joint Working Party , upon which the discoverers are invited to propose a permanent name and symbol. Once this name and symbol is proposed, there is still a comment period before they become official and replace the systematic name and symbol.
At the time the systematic names were recommended (1978), names had already been officially given to all elements up to atomic number 103, lawrencium. While systematic names were given for elements 101 ( mendelevium ), 102 ( nobelium ), and 103 (lawrencium), these were intended only as "minor alternatives to the trivial names already approved by IUPAC". [ 1 ] The following elements for some time had only systematic names as approved names, until their final replacement with trivial names after their discoveries were accepted. | https://en.wikipedia.org/wiki/Systematic_element_name |
Systematic evolution of ligands by exponential enrichment ( SELEX ), also referred to as in vitro selection or in vitro evolution , is a combinatorial chemistry technique in molecular biology for producing oligonucleotides of either single-stranded DNA or RNA that specifically bind to a target ligand or ligands. These single-stranded DNA or RNA are commonly referred to as aptamers . [ 1 ] [ 2 ] [ 3 ] Although SELEX has emerged as the most commonly used name for the procedure, some researchers have referred to it as SAAB (selected and amplified binding site) and CASTing (cyclic amplification and selection of targets) [ 4 ] [ 5 ] SELEX was first introduced in 1990. In 2015, a special issue was published in the Journal of Molecular Evolution in the honor of quarter century of the discovery of SELEX. [ 6 ]
The process begins with the synthesis of a very large oligonucleotide library, consisting of randomly generated sequences of fixed length flanked by constant 5' and 3' ends. The constant ends serve as primers , while a small number of random regions are expected to bind specifically to the chosen target. For a randomly generated region of length n, the number of possible sequences in the library using conventional DNA or RNA is 4^n (n positions with four possibilities (A, T, C, G) at each position). The sequences in the library are exposed to the target ligand, which may be a protein or a small organic compound, and those that do not bind the target are removed, usually by affinity chromatography or target capture on paramagnetic beads. [ 7 ] The bound sequences are eluted and amplified by PCR [ 2 ] [ 3 ] to prepare for subsequent rounds of selection in which the stringency of the elution conditions can be increased to identify the tightest-binding sequences. [ 2 ] A caution to consider in this method is that the selection of extremely high, sub- nanomolar binding affinity entities may not in fact improve specificity for the target molecule. [ 8 ] Off-target binding to related molecules could have significant clinical effects.
SELEX has been used to develop a number of aptamers that bind targets interesting for both clinical and research purposes. [ 9 ] Nucleotides with chemically modified sugars and bases have been incorporated into SELEX reactions to increase the chemical diversity at each base, expanding the possibilities for specific and sensitive binding, or increasing stability in serum or in vivo . [ 9 ] [ 10 ]
Aptamers have emerged as a novel category in the field of bioreceptors due to their wide applications ranging from biosensing to therapeutics. Several variations of their screening process, called SELEX have been reported which can yield sequences with desired properties needed for their final use. [ 11 ]
The first step of SELEX involves the synthesis of fully or partially randomized oligonucleotide sequences of some length flanked by defined regions which allow PCR amplification of those randomized regions and, in the case of RNA SELEX, in vitro transcription of the randomized sequence. [ 2 ] [ 3 ] [ 12 ] While Ellington and Szostak demonstrated that chemical synthesis is capable of generating ~10^15 unique sequences for oligonucleotide libraries in their 1990 paper on in vitro selection, [ 3 ] they found that amplification of these synthesized oligonucleotides led to significant loss of pool diversity due to PCR bias and defects in synthesized fragments. [ 3 ] The oligonucleotide pool is amplified and a sufficient amount of the initial library is added to the reaction so that there are numerous copies of each individual sequence to minimize the loss of potential binding sequences due to stochastic events. [ 3 ] Before the library is introduced to target for incubation and selective retention, the sequence library must be converted to single stranded oligonucleotides to achieve structural conformations with target binding properties. [ 2 ] [ 3 ]
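A quick calculation shows why such libraries necessarily undersample the sequence space for all but short random regions; the sketch below compares the 4^n sequence space with the roughly 10^15 unique molecules attainable by synthesis, the figure cited above.

```python
POOL = 1e15   # approximate number of unique sequences per chemical synthesis

for n in (10, 20, 25, 30, 40):
    space = 4 ** n                      # possible sequences of length n
    fraction = min(1.0, POOL / space)   # fraction of sequence space sampled
    print(f"n={n:2d}: 4^n = {space:.2e}, fraction sampled = {fraction:.2e}")
```

For n around 25 the library can still cover most of the sequence space, while for n = 40 only about one sequence in a billion is sampled.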
Immediately prior to target introduction, the single stranded oligonucleotide library is often heated and cooled slowly to renature oligonucleotides into thermodynamically stable secondary and tertiary structures. [ 3 ] [ 7 ] Once prepared, the randomized library is incubated with immobilized target to allow oligonucleotide-target binding. There are several considerations for this target incubation step, including the target immobilization method and strategies for subsequent unbound oligonucleotide separation, incubation time and temperature, incubation buffer conditions, and target versus oligonucleotide concentrations. Examples of target immobilization methods include affinity chromatography columns, [ 3 ] nitrocellulose binding assay filters , [ 2 ] and paramagnetic beads. [ 7 ] Recently, SELEX reactions have been developed where the target is whole cells, which are expanded near complete confluence and incubated with the oligonucleotide library on culture plates. [ 13 ] Incubation buffer conditions are altered based on the intended target and desired function of the selected aptamer. For example, in the case of negatively charged small molecules and proteins, high salt buffers are used for charge screening to allow nucleotides to approach the target and increase the chance of a specific binding event. [ 3 ] Alternatively, if the desired aptamer function is in vivo protein or whole cell binding for potential therapeutic or diagnostic application, incubation buffer conditions similar to in vivo plasma salt concentrations and homeostatic temperatures are more likely to generate aptamers that can bind in vivo. Another consideration in incubation buffer conditions is non-specific competitors. If there is a high likelihood of non-specific oligonucleotide retention in the reaction conditions, non specific competitors, which are small molecules or polymers other than the SELEX library that have similar physical properties to the library oligonucleotides, can be used to occupy these non-specific binding sites. [ 13 ] Varying the relative concentration of target and oligonucleotides can also affect properties of the selected aptamers. If a good binding affinity for the selected aptamer is not a concern, then an excess of target can be used to increase the probability that at least some sequences will bind during incubation and be retained. However, this provides no selective pressure for high binding affinity , which requires the oligonucleotide library to be in excess so that there is competition between unique sequences for available specific binding sites. [ 2 ]
Once the oligonucleotide library has been incubated with target for sufficient time, unbound oligonucleotides are washed away from immobilized target, often using the incubation buffer so that specifically bound oligonucleotides are retained. [ 3 ] With unbound sequences washed away, the specifically bound sequences are then eluted by creating denaturing conditions that promote oligonucleotide unfolding or loss of binding conformation including flowing in deionized water, [ 3 ] using denaturing solutions containing urea and EDTA, [ 13 ] [ 14 ] or by applying high heat and physical force. [ 7 ] Upon elution of bound sequences, the retained oligonucleotides are reverse-transcribed to DNA in the case of RNA or modified base selections, [ 2 ] [ 3 ] [ 13 ] or simply collected for amplification in the case of DNA SELEX. [ 15 ] These DNA templates from eluted sequences are then amplified via PCR and converted to single stranded DNA, RNA, or modified base oligonucleotides, which are used as the initial input for the next round of selection. [ 2 ] [ 3 ]
One of the most critical steps in the SELEX procedure is obtaining single stranded DNA (ssDNA) after the PCR amplification step. This will serve as input for the next cycle, so it is of vital importance that all the DNA is single stranded and as little as possible is lost. Because of its relative simplicity, one of the most used methods is using biotinylated reverse primers in the amplification step, after which the complementary strands can be captured on a (typically streptavidin-coated) resin, followed by elution of the desired strand with an alkaline solution such as sodium hydroxide (lye). Another method is asymmetric PCR, where the amplification step is performed with an excess of forward primer and very little reverse primer, which leads to the production of more of the desired strand. A drawback of this method is that the product should be purified from double stranded DNA (dsDNA) and other left-over material from the PCR reaction. Enzymatic degradation of the unwanted strand can be performed by tagging that strand with a 5′-phosphorylated primer, as the phosphorylated strand is recognized by enzymes such as Lambda exonuclease . These enzymes then selectively degrade the phosphorylated strand, leaving the complementary strand intact. All of these methods recover approximately 50 to 70% of the DNA. For a detailed comparison refer to the article by Svobodová et al., where these, and other, methods are experimentally compared. [ 16 ] In classical SELEX, the process of randomized single stranded library generation, target incubation, and binding sequence elution and amplification described above is repeated until the vast majority of the retained pool consists of target binding sequences, [ 2 ] [ 3 ] though there are modifications and additions to the procedure that are often used, which are discussed below.
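The emphasis on minimizing losses can be illustrated with simple survival arithmetic. The sketch below is illustrative only and assumes each copy of a sequence is recovered independently, using the 50-70% per-step recovery range quoted above.

```python
# Probability that a sequence survives one recovery step when each of
# its k copies is retained independently with probability p:
# P(survive) = 1 - (1 - p)^k. Rare sequences (small k) are the ones
# most at risk of disappearing from the pool.
for p in (0.5, 0.7):
    for k in (1, 2, 5, 10):
        survival = 1 - (1 - p) ** k
        print(f"recovery {p:.0%}, {k:2d} copies: survival {survival:.3f}")
```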
In order to increase the specificity of aptamers selected by a given SELEX procedure, a negative selection, or counter selection, step can be added prior to or immediately following target incubation. To eliminate sequences with affinity for target immobilization matrix components from the pool, negative selection can be used where the library is incubated with target immobilization matrix components and unbound sequences are retained. [ 14 ] [ 17 ] [ 15 ] Negative selection can also be used to eliminate sequences that bind target-like molecules or cells by incubating the oligonucleotide library with small molecule target analogs, undesired cell types, or non-target proteins and retaining the unbound sequences. [ 13 ] [ 15 ] [ 18 ]
To track the progress of a SELEX reaction, the number of target bound molecules, which is equivalent to the number of oligonucleotides eluted, can be compared to the estimated total input of oligonucleotides following elution at each round. [ 3 ] [ 19 ] The number of eluted oligonucleotides can be estimated through elution concentration estimations via 260 nm wavelength absorbance [ 19 ] or fluorescent labeling of oligonucleotides. [ 7 ] As the SELEX reaction approaches completion, the fraction of the oligonucleotide library that binds target approaches 100%, such that the number of eluted molecules approaches the total oligonucleotide input estimate, but may converge at a lower number. [ 3 ]
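As a worked illustration of this bookkeeping, the sketch below converts hypothetical A260 readings of the eluate into an eluted fraction. The conversion factor of roughly 33 µg/mL per A260 unit for single-stranded DNA (about 40 µg/mL for RNA) is a standard approximation, and all readings and masses are invented.

```python
SSDNA_UG_PER_ML_PER_A260 = 33.0  # standard approximation; ~40 for RNA

def eluted_fraction(a260, eluate_ml, input_ug):
    """Fraction of the oligonucleotide input recovered in the eluate."""
    return (a260 * SSDNA_UG_PER_ML_PER_A260 * eluate_ml) / input_ug

# Invented readings for three successive rounds: (A260, volume mL, input ug).
for i, (a260, ml, inp) in enumerate([(0.02, 0.5, 100.0),
                                     (0.08, 0.5, 100.0),
                                     (0.35, 0.5, 100.0)], start=1):
    print(f"round {i}: eluted fraction = {eluted_fraction(a260, ml, inp):.1%}")
```

A rising eluted fraction across rounds, as in this toy series, is the signature of an enriching pool.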
Some SELEX reactions can generate probes that are dependent on primer binding regions for secondary structure formation. [ 7 ] There are aptamer applications for which a short sequence, and thus primer truncation, is desirable. [ 20 ] An advancement on the original method allows an RNA library to omit the constant primer regions, which can be difficult to remove after the selection process because they stabilize secondary structures that are unstable when formed by the random region alone. [ 21 ]
Recently, SELEX has expanded to include the use of chemically modified nucleotides. These chemically modified oligonucleotides offer many potential advantages for selected aptamers, including greater stability and nuclease resistance, enhanced binding for select targets, expanded physical properties (such as increased hydrophobicity), and more diverse structural conformations. [ 9 ] [ 10 ] [ 22 ]
The genetic alphabet, and thus the space of possible aptamers, has also been expanded using unnatural base pairs. [ 23 ] [ 24 ] The use of these unnatural base pairs was applied to SELEX, and high-affinity DNA aptamers were generated. [ 25 ]
FRELEX was developed in 2016 by NeoVentures Biotechnology Inc to allow the selection of aptamers without immobilizing the target or the oligonucleotide library. [ 26 ] Immobilization is a necessary component of SELEX; however, it has the potential to inhibit key epitopes , and thus weaken the likelihood of successful binding, particularly when working with small molecules. [ 27 ] [ 28 ] FRELEX follows a similar overall methodology to SELEX; however, instead of immobilizing the target, the researcher introduces a series of random and blocker oligonucleotides to an immobilization field before introduction to the target. [ 26 ] This allows the researcher to better target small molecules that may be lost during partitioning. [ 26 ] It also can be used in some circumstances to select an aptamer library without knowing the target. [ 29 ]
Most modern aptamer selection methods strive to improve the conventional SELEX aptamer search method. [ 30 ] Despite the publication of various methods aimed at increasing the affinity and specificity of aptamers, [ 31 ] [ 32 ] [ 33 ] experimental approaches face limitations in the number and variety of sequences that can be examined and selected. Library capacity for SELEX experiments is practically limited to 10^15 candidates, whereas, assuming a four-letter repertoire from which pools can be created, there are ~1.6 × 10^60 unique sequences in the sequence space of 100-residue oligonucleotides, which is clearly beyond experimental capabilities. [ 34 ] The library of oligonucleotides must be extremely diverse and must avoid linear structures, which cannot provide a stable spatial arrangement, as well as double-stranded structures; due to these limitations, oligonucleotide libraries can cover a diversity of only ~10^6 sequences. [ 35 ] This means that existing aptamers may not fully cover the diversity of target molecules or may not have optimal properties due to limitations of the underlying method. To yield the best possible aptamers, one must maximize the effectiveness of both the discovery process and the library itself.
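The numbers quoted above follow from straightforward arithmetic, reproduced in the short sketch below.

```python
from math import log10

sequence_space = 4 ** 100   # all possible 100-mers over a 4-letter alphabet
library_size = 10 ** 15     # practical upper bound for a SELEX library

print(f"4^100 ~ 10^{log10(sequence_space):.1f}")                  # ~10^60.2
print(f"fraction sampled ~ 10^{log10(library_size / sequence_space):.0f}")
```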
RNA and DNA secondary structure prediction by dynamic programming algorithms such as RNAfold ( ViennaRNA ) [ 36 ] and by machine learning models such as SPOT-RNA [ 37 ] and MXfold2 [ 38 ] provides the opportunity to assess the ability of sequences in the primary library to fold into complex structures, allowing only the most promising sequences to be selected from the entire pool. However, these algorithms are computationally slow, making them poorly suited for screening libraries at this scale. For this reason, algorithms like Ufold from the University of California [ 39 ] and AliNA from Xelari Inc. [ 40 ] have been developed; they demonstrate a significant increase in computational speed due to their faster architectures, and can be applied for preliminary in silico analysis of these libraries.
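As a sketch of such pre-screening, the snippet below filters candidate sequences by predicted folding stability using the ViennaRNA Python bindings (the RNA module shipped with the ViennaRNA package). The -5 kcal/mol cutoff and the two library sequences are arbitrary illustrative choices, not recommended values.

```python
import RNA  # ViennaRNA Python bindings, installed with the ViennaRNA package

def is_structured(seq, mfe_cutoff=-5.0):
    """Keep sequences whose predicted minimum free energy (kcal/mol)
    indicates a reasonably stable secondary structure."""
    structure, mfe = RNA.fold(seq)
    return mfe < mfe_cutoff

library = ["GGGCGCAAGGCUAAGCGCCC",   # self-complementary ends, likely folds
           "AUAUAUAUAUAUAUAUAUAU"]   # weakly pairing repeat
print([s for s in library if is_structured(s)])
```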
The technique has been used to evolve aptamers of extremely high binding affinity to a variety of target ligands, including small molecules such as ATP [ 41 ] and adenosine [ 12 ] [ 42 ] and proteins such as prions [ 43 ] and vascular endothelial growth factor (VEGF). [ 44 ] Moreover, SELEX has been used to select high-affinity aptamers for complex targets such as tumor cells, [ 45 ] [ 46 ] tumor exosomes, [ 47 ] [ 48 ] or tumor tissue. [ 49 ] Clinical uses of the technique are suggested by aptamers that bind tumor markers , [ 50 ] GFP -related fluorophores , [ 51 ] and a VEGF-binding aptamer trade-named Macugen has been approved by the FDA for treatment of macular degeneration . [ 44 ] [ 52 ] Additionally, SELEX has been utilized to obtain highly specific catalytic DNA or DNAzymes. Several metal-specific DNAzymes have been reported including the GR-5 DNAzyme (lead-specific), [ 53 ] the CA1-3 DNAzymes (copper-specific), [ 54 ] the 39E DNAzyme (uranyl-specific) [ 55 ] and the NaA43 DNAzyme (sodium-specific). [ 56 ]
These developed aptamers have seen diverse application in therapies for macular degeneration [ 52 ] and various research applications including biosensors , [ 20 ] fluorescent labeling of proteins [ 57 ] and cells, [ 58 ] and selective enzyme inhibition. [ 59 ] | https://en.wikipedia.org/wiki/Systematic_evolution_of_ligands_by_exponential_enrichment |
Systematic layout planning (SLP), also referred to as site layout planning, [ 1 ] is a tool used to arrange a workplace in a plant by locating areas with a high frequency of interaction, or strong logical relationships, close to each other. [ 2 ] The process permits the quickest material flow in processing the product at the lowest cost and least amount of handling. It is used in construction projects to optimize the location of temporary facilities (such as engineers' caravans, material storage, generators, etc.) during construction to minimize transportation, minimize cost, minimize travel time, and enhance safety.
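Although SLP itself is typically carried out with relationship charts and diagrams, the underlying objective can be sketched as a flow-distance score in which areas exchanging the most traffic should sit closest together. Everything in the snippet below (facility names, trip counts, coordinates) is invented for illustration.

```python
flows = {("office", "storage"): 5,      # trips per day between area pairs
         ("storage", "workshop"): 30,
         ("office", "workshop"): 2}

def layout_cost(positions):
    """Material-handling effort: sum over area pairs of
    (trips between the pair) x (rectilinear distance between them)."""
    cost = 0.0
    for (a, b), trips in flows.items():
        (xa, ya), (xb, yb) = positions[a], positions[b]
        cost += trips * (abs(xa - xb) + abs(ya - yb))
    return cost

# Keeping the high-traffic storage-workshop pair adjacent scores better.
print(layout_cost({"office": (0, 0), "storage": (1, 0), "workshop": (2, 0)}))  # 39.0
print(layout_cost({"storage": (0, 0), "office": (1, 0), "workshop": (2, 0)}))  # 67.0
```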
There are four levels of detail in plant layout design, | https://en.wikipedia.org/wiki/Systematic_layout_planning |
Systematics is the study of the diversification of living forms, both past and present, and the relationships among living things through time. Relationships are visualized as evolutionary trees (synonyms: phylogenetic trees , phylogenies). Phylogenies have two components: branching order (showing group relationships, graphically represented in cladograms ) and branch length (showing amount of evolution). Phylogenetic trees of species and higher taxa are used to study the evolution of traits (e.g., anatomical or molecular characteristics) and the distribution of organisms ( biogeography ). Systematics, in other words, is used to understand the evolutionary history of life on Earth.
The word systematics is derived from the Latin word of Ancient Greek origin systema , which means systematic arrangement of organisms. Carl Linnaeus used ' Systema Naturae ' as the title of his book.
In the study of biological systematics, researchers use the different branches to further understand the relationships between differing organisms. These branches are used to determine the applications and uses for modern day systematics. [ citation needed ]
Biological systematics classifies species by using three specific branches. Numerical systematics , or biometry , uses biological statistics to identify and classify animals. Biochemical systematics classifies and identifies animals based on the analysis of the material that makes up the living part of a cell—such as the nucleus , organelles , and cytoplasm . Experimental systematics identifies and classifies animals based on the evolutionary units that comprise a species, as well as their importance in evolution itself. Factors such as mutations, genetic divergence, and hybridization all are considered evolutionary units. [ 1 ]
With the specific branches, researchers are able to determine the applications and uses for modern-day systematics. These applications include:
John Lindley provided an early definition of systematics in 1830, although he wrote of "systematic botany" rather than using the term "systematics". [ 2 ]
In 1970 Michener et al. defined "systematic biology" and " taxonomy " (terms that are often confused and used interchangeably) in relationship to one another as follows: [ 3 ]
Systematic biology (hereafter called simply systematics) is the field that (a) provides scientific names for organisms, (b) describes them, (c) preserves collections of them, (d) provides classifications for the organisms, keys for their identification, and data on their distributions, (e) investigates their evolutionary histories, and (f) considers their environmental adaptations. This is a field with a long history that in recent years has experienced a notable renaissance, principally with respect to theoretical content. Part of the theoretical material has to do with evolutionary areas (topics e and f above), the rest relates especially to the problem of classification. Taxonomy is that part of Systematics concerned with topics (a) to (d) above.
The term "taxonomy" was coined by Augustin Pyramus de Candolle [ 4 ] while the term "systematic" was coined by Carl Linnaeus the father of taxonomy. [ citation needed ]
Taxonomy, systematic biology, systematics, biosystematics, scientific classification, biological classification, phylogenetics: At various times in history, all these words have had overlapping, related meanings. However, in modern usage, they can all be considered synonyms of each other.
For example, Webster's 9th New Collegiate Dictionary of 1987 treats "classification", "taxonomy", and "systematics" as synonyms. According to this work, the terms originated in 1790, c. 1828 , and in 1888 respectively. Some [ who? ] claim systematics alone deals specifically with relationships through time, and that it can be synonymous with phylogenetics , broadly dealing with the inferred hierarchy [ citation needed ] of organisms. This means it would be a subset of taxonomy as it is sometimes regarded, but the inverse is claimed by others. [ who? ]
Europeans tend to use the terms "systematics" and "biosystematics" for the study of biodiversity as a whole, whereas North Americans tend to use "taxonomy" more frequently. [ 5 ] However, taxonomy, and in particular alpha taxonomy , is more specifically the identification, description, and naming (i.e. nomenclature) of organisms, [ 6 ] while "classification" focuses on placing organisms within hierarchical groups that show their relationships to other organisms. All of these biological disciplines can deal with both extinct and extant organisms.
Systematics uses taxonomy as a primary tool in understanding, as nothing about an organism's relationships with other living things can be understood without it first being properly studied and described in sufficient detail to identify and classify it correctly. [ citation needed ] Scientific classifications are aids in recording and reporting information to other scientists and to laymen. The systematist , a scientist who specializes in systematics, must, therefore, be able to use existing classification systems, or at least know them well enough to skilfully justify not using them.
Phenetics was an attempt to determine the relationships of organisms through a measure of overall similarity, making no distinction between plesiomorphies (shared ancestral traits) and apomorphies (derived traits). From the late-20th century onwards, it was superseded by cladistics , which rejects plesiomorphies in attempting to resolve the phylogeny of Earth's various organisms through time. Today's systematists generally make extensive use of molecular biology and of computer programs to study organisms. [ citation needed ]
Taxonomic characters are the taxonomic attributes that can be used to provide the evidence from which relationships (the phylogeny ) between taxa are inferred. [ 7 ] Kinds of taxonomic characters include: [ 8 ]
| https://en.wikipedia.org/wiki/Systematics
Systemic design is an interdiscipline [ 1 ] that integrates systems thinking and design practices. It is a pluralistic field, [ 2 ] [ 3 ] with several dialects [ 4 ] including systems-oriented design . [ 5 ] Influences have included critical systems thinking and second-order cybernetics . In 2021, the Design Council (UK) began advocating for a systemic design approach and embedded it in a revision of their double diamond model. [ 6 ]
Systemic design is closely related to sustainability as it aims to create solutions that are not only designed to have a good environmental impact, but are also socially and economically beneficial. In fact, from a systemic design approach, the system to be designed, its context with its relationships and its environment receive synchronous attention. [ 7 ] Systemic design's discourse has been developed through Relating Systems Thinking and Design—a series of symposia held annually since 2012. [ 8 ]
Systems thinking in design has a long history with origins in the design methods movement during the 1960s and 1970s, such as the idea of wicked problems developed by Horst Rittel . [ 9 ]
Theories of complexity help with the management of an entire system, and the suggested design approaches help with planning its divergent elements. Complexity theories evolved from the observation that living systems continually draw upon external sources of energy and maintain a stable state of low entropy, building on the General Systems Theory of Karl Ludwig von Bertalanffy (1968). [ 10 ] Later rationales applied those theories to artificial systems as well: complexity models of living systems also address productive models, with their organizations and management, where the relationships between parts are more important than the parts themselves.
Treating productive organizations as complex adaptive systems allows for new management models that address economic, social and environmental benefits (Plsek and Wilson, 2001). [ 11 ] In that field, cluster theory (Porter, 1990) [ 12 ] evolved into more environmentally sensitive theories, like industrial ecology (Frosch and Gallopoulos, 1989) [ 13 ] and industrial symbiosis (Chertow, 2000). [ 14 ] Design thinking offers a way to creatively and strategically reconfigure a design concept in a situation with systemic integration (Buchanan, 1992). [ 15 ]
In 1994, Gunter Pauli and Heitor Gurgulino de Souza founded the research institute Zero Emission Research and Initiatives (ZERI), [ 16 ] starting from the idea that progress should embed respect for the environment and natural techniques that will allow production processes to be part of the ecosystem.
Strong interdisciplinary and transdisciplinary approaches are critical during the design phase (Fuller, 1981) [ 17 ] with the increasing involvement of different disciplines, including urban planning, public policy, business management and environmental sciences (Chertow et al., 2004). [ 18 ] As an interdiscipline, systemic design joins systems thinking and design methodology to support humanity-centred [ 19 ] and systems-oriented design [ 20 ] academe and practice (Bistagnino, 2011; [ 21 ] Sevaldson, 2011; [ 22 ] Nelson and Stolterman, 2012; [ 23 ] Jones, 2014; [ 24 ] Toso et al., 2012 [ 25 ] ).
Numerous design projects demonstrate systemic design in their approach, including diverse topics involving food networks, [ 26 ] industrial processes and water purification, revitalization of internal areas through art and tourism, [ 27 ] circular economy , [ 28 ] [ 29 ] exhibition and fairs, social inclusion, and marginalization.
Since 2014 several scholarly journals have acknowledged systemic design with special publications, and in 2022, the Systemic Design Association launched “ Contexts—The Journal of Systemic Design .” The proceedings repository, Relating Systems Thinking and Design, exceeded 1000 articles in 2023.
Since 2012, host organisations have held an annual symposium dedicated to systemic design, Relating Systems Thinking and Design (RSD). Proceedings are available via the searchable repository on RSDsymposium.org. [ 41 ]
Academic research groups with a focus on systemic design include:
Academic programmes in systemic design include: | https://en.wikipedia.org/wiki/Systemic_design |
Systems-oriented design ( SOD ) uses system thinking in order to capture the complexity of systems addressed in design practice . The main mission of SOD is to build the designers ' own interpretation and implementation of systems thinking. SOD aims at enabling systems thinking to fully benefit from design thinking and practice and design thinking and practice to fully benefit from systems thinking. SOD addresses design for human activity systems and can be applied to any kind of design problem ranging from product design and interaction design through architecture to decision-making processes and policy design.
SOD is a variation in the pluralistic field of Systemic Design . It is one of the most practice- and design-oriented versions of relating and merging systems thinking and design.
Design is getting more and more complex for several reasons, for example, due to globalisation , the need for sustainability , and the introduction of new technology and increased use of automation . Many of the challenges designers meet today can be considered wicked problems . [ 1 ] The characteristics of a wicked problem include that there is no definitive formulation of the problem and that the solutions are never true or false but rather better or worse. [ 1 ] A traditional problem-solving approach is not sufficient for such design problems. SOD is an approach that addresses the challenges the designer faces when working with complex systems and wicked problems, providing tools and techniques which make it easier for the designer to grasp the complexity of the problem at hand. With a systems-oriented approach towards design, the designer acknowledges that the starting point for the design process is constantly moving and that "every implemented solution is consequential. It leaves "traces" that cannot be undone." (see Rittel and Webber's 5th property of wicked problems [ 1 ] ).
Designers are well suited to work with complexity and wicked problems for several reasons:
SOD emphasises these abilities as central and seeks to further train the designer in systems thinking and systems practice as a skill and an art.
SOD was developed and defined over time by Birger Ragnvald Sevaldson and colleagues at the Oslo School of Architecture and Design (AHO). Though there were earlier traces, it started in 2006 with a studio course for master students called "The Challenge of Complexity", named after a conference in Finland in the early 1990s.
The initiative was purely design-driven, and it implied using large graphic maps as visual thinking tools and embracing very complex visualisations of systems. Around 2008, these were dubbed "Gigamaps" by Sevaldson. In 2012, Sevaldson organised a seminar called "Relating Systems Thinking and Design" (RSD). A group from the international design community was invited and presented at the seminar.
After the seminar, this group got together in the loft of the Savoy hotel and there founded the informal network that later was called Systemic Design Research Network.
RSD developed into an annual conference, with the first three conferences at AHO. In 2013, the emerging movement of systems thinking in design shifted from being called Systems Oriented Design to Systemic Design. Sevaldson initiated this change to, on the one hand, maintain the development of SOD into a designerly approach while, on the other hand, allowing the bigger field to grow pluralistically into different variations. Harold Nelson suggested the name Systemic Design.
This allowed SOD to develop in a more designerly way, where practice and praxeology [ 2 ] became ever more important. In parallel, SOD clarified its theoretical bases by relating to diverse historical systems theories but, most importantly, to Soft Systems [ 3 ] and Critical Systems Thinking. [ 4 ] The work of Gerald Midgley became especially important. [ 5 ] The crystallisation of SOD also developed through the publication of the book mentioned above in 2022.
Through the years, the collaboration with Andreas Wettre, a business consultant who became a full-time employee at AHO, has been crucial. He brought in organisational perspectives, drawing among others on Stacey. [ 6 ]
Systems-oriented design builds on systems theory and systems thinking to develop practices for addressing complexity in design. Many classical first- and second-wave systems theorists have been influential but won't be mentioned here. Soft systems methodology (SSM) was influential, acknowledging conflicting worldviews, people's purposeful actions, and a systems view on creativity. More importantly, however, SOD is inspired by critical systems thinking and approaches systems theories in an eclectic way, transforming the thoughts of the different theories to fit the design process. The design disciplines build on their own traditions and have a certain way of working with problems, often referred to as design thinking [ note 1 ] [ 7 ] [ 8 ] [ 9 ] or the design way . [ 10 ] Design thinking is a creative process based on the "building up" of ideas. This style of thinking is one of the advantages of the designer and is the reason why simply employing one of the existing systems approaches in design, like, for example, systems engineering , is not found sufficient by the advocates of SOD.
Compared with other systems approaches, SOD is less concerned with hierarchies and borders of systems, modelling and feedback loops , and more focused on whole fields of relations and patterns of interactions. SOD seeks richness rather than simplification of complex systems.
Methods and techniques from other disciplines are used to understand the complexity of the system, including for example, ethnographic studies , risk analysis , and scenario thinking . Methods and concepts unique to SOD include, for example, the Rich Design Space, [ 11 ] Gigamapping, [ 12 ] and Incubation Techniques.
Incubation is one of the 4 proposed stages of creativity : preparation, incubation, illumination, and verification. [ 13 ]
The concept of systems-oriented design was initially proposed by professor Birger Sevaldson at the Oslo School of Architecture and Design (AHO) . The SOD approach is currently under development through teaching and research projects, as well as through the work of design practitioners. AHO provides Master courses in Systems Oriented Design each term as part of their Industrial Design program. In these courses, design students are trained in using the tools and techniques of SOD in projects with outside partners. Research projects in systems-oriented design are carried out at the Centre for Design Research [ 14 ] at AHO in order to develop the concept, methods and tools further. In 2016 the project Systemic Approach to Architectural Performance [ 15 ] was announced as an institutional cooperation between the Faculty of Art and Architecture [ 16 ] at the Technical University of Liberec and the Oslo School of Architecture and Design. Its mission is to link the methodology of systems-oriented design with performance-oriented architecture [ 17 ] on the case study Marie Davidova's project Wood as a Primary Medium to Architectural Performance. | https://en.wikipedia.org/wiki/Systems-oriented_design |
The Systems Biology Graphical Notation (SBGN) is a standard graphical representation intended to foster the efficient storage, exchange and reuse of information about signaling pathways, metabolic networks, and gene regulatory networks amongst communities of biochemists, biologists, and theoreticians. The system was created over several years by a community of biochemists , modelers and computer scientists . [ 1 ]
SBGN is made up of three orthogonal languages for representing different views of biological systems: Process Descriptions , Entity Relationships and Activity Flows . Each language defines a comprehensive set of symbols with precise semantics, together with detailed syntactic rules regarding the construction and interpretation of maps. Using these three notations, a life scientist can represent in an unambiguous way networks of interactions (for example biochemical interactions). These notations make use of an idea and symbols similar to that used by electrical and other engineers and known as the block diagram . The simplicity of SBGN syntax and semantics makes SBGN maps suitable for use at the high school level. [ citation needed ]
Some software support for SBGN is already available, mostly for the Process Description language. [ 2 ] SBGN visualizations can be exchanged with the XML-based file format SBGN-ML. [ 3 ]
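As an illustration of what such an exchange file looks like, the sketch below writes a one-glyph SBGN-ML document using only Python's standard library. The tag names and the namespace version follow the libSBGN schema as commonly published, but should be verified against the current SBGN-ML specification; the glyph content itself is invented.

```python
import xml.etree.ElementTree as ET

NS = "http://sbgn.org/libsbgn/0.2"  # assumed schema version
ET.register_namespace("", NS)

sbgn = ET.Element(f"{{{NS}}}sbgn")
pd_map = ET.SubElement(sbgn, f"{{{NS}}}map", language="process description")

# One macromolecule glyph with a label and a bounding box.
glyph = ET.SubElement(pd_map, f"{{{NS}}}glyph",
                      {"class": "macromolecule", "id": "glyph1"})
ET.SubElement(glyph, f"{{{NS}}}label", text="MAPK")
ET.SubElement(glyph, f"{{{NS}}}bbox", x="10", y="10", w="80", h="40")

ET.ElementTree(sbgn).write("example.sbgn", xml_declaration=True,
                           encoding="utf-8")
```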
The SBGN Process Description (PD) language shows the temporal courses of biochemical interactions in a network. It can be used to show all the molecular interactions taking place in a network of biochemical entities, with the same entity appearing multiple times in the same diagram. [ 4 ]
The SBGN Entity Relationship (ER) language allows to see all the relationships in which a given entity participates, regardless of the temporal aspects. Relationships can be seen as rules describing the influences of entities nodes on other relationships. [ 5 ]
The SBGN Activity Flow (AF) language depicts the flow of information between biochemical entities in a network. It omits information about the state transitions of entities and is particularly convenient for representing the effects of perturbations, whether genetic or environmental in nature. [ 6 ]
Work on defining a set of symbols to describe interactions and relationships of molecules was pioneered by Kurt Kohn at the National Cancer Institute with his Molecular Interaction Maps (MIM). [ 7 ] The development of SBGN was initiated by Hiroaki Kitano , supported by a funding from the Japanese New Energy and Industrial Technology Development Organization . The meeting that initiated development of the Systems Biology Graphical Notation took place on February 11–12, 2006, at the National Institute of Advanced Industrial Science and Technology (AIST), in Tokyo, Japan.
The first specification of SBGN Process Description language – then called Process Diagrams – was released on August 23, 2008 (Level 1 Version 1). [ 8 ] Corrections of the document were released on September 1, 2009 (Level 1 Version 1.1), [ 9 ] October 3, 2010 (Level 1 Version 1.2) [ 10 ] and February 14, 2011 (Level 1 Version 1.3). [ 4 ]
The first specification of SBGN Entity relationship language was released on September 1, 2009 (Level 1 Version 1). [ 11 ] Corrections of the document were released on October 6, 2010 (Level 1 Version 1.1) [ 12 ] and April 14, 2011 (Level 1 Version 1.2). [ 5 ]
The first specification of SBGN Activity Flow language was released on September 1, 2009. [ 6 ]
SBGN editors work in developing coherent specification documents. Below is a list of former SBGN editors and dates active: [ 13 ] | https://en.wikipedia.org/wiki/Systems_Biology_Graphical_Notation |
Systems Biology Ireland (SBI) is a Science Foundation Ireland -funded centre for science, engineering and technology research. It is an initiative between University College Dublin (UCD) and University of Galway (UCG). [ 1 ] [ 2 ] It is based on the Belfield campus of UCD, and works in the areas of systems biology , systems medicine and personalised medicine . [ 3 ]
SBI designs new therapeutic approaches to cancer, its research enabling the development of technologies that can be used for early identification of responsive patient groups and accelerated discovery of new combination therapies. [ 4 ]
People associated with SBI include its director Prof. Walter Kolch and deputy director Prof. Boris Kholodenko. [ 5 ] | https://en.wikipedia.org/wiki/Systems_Biology_Ireland
The Systems Biology Ontology (SBO) is a set of controlled, relational vocabularies of terms commonly used in systems biology , and in particular in computational modeling.
The rise of systems biology, seeking to comprehend biological processes as a whole, highlighted the need to not only develop corresponding quantitative models but also to create standards allowing their exchange and integration. This concern drove the community to design common data formats, such as SBML and CellML . SBML is now largely accepted and used in the field. However, as important as the definition of a common syntax is, it is also necessary to make clear the semantics of models. SBO aims to provide a way to label model components with terms that make their intended meaning and use explicit across the large body of models commonly used in computational systems biology. [ 1 ] [ 2 ] The development of SBO was first discussed at the 9th SBML Forum Meeting in Heidelberg on October 14–15, 2004. During the forum, Pedro Mendes mentioned that modellers possessed a lot of knowledge that was necessary to understand the model and, more importantly, to simulate it, but this knowledge was not encoded in SBML. Nicolas Le Novère proposed to create a controlled vocabulary to store the content of Pedro Mendes' mind before he wandered out of the community. [ 3 ] The development of the ontology was announced more officially in a message from Le Novère to Michael Hucka and Andrew Finney on October 19.
SBO is currently made up of seven different vocabularies:
To curate and maintain SBO, a dedicated resource has been developed and the public interface of the SBO browser can be accessed at http://www.ebi.ac.uk/sbo .
A relational database management system ( MySQL ) at the back-end is accessed through a web interface based on Java Server Pages (JSP) and JavaBeans . Its content is encoded in UTF-8 , therefore supporting a large set of characters in the definitions of terms. Distributed curation is made possible by using a custom-tailored locking system allowing concurrent access. This system allows a continuous update of the ontology with immediate availability and suppresses merging problems.
Several export formats ( OBO flat file, SBO-XML and OWL ) are generated daily or on request and can be downloaded from the web interface.
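For instance, the OBO flat-file export can be read with a few lines of standard Python. The sketch below assumes only the standard OBO tags (id, name, def) and a local copy of the export saved as sbo.obo.

```python
def parse_obo_terms(path):
    """Yield a {tag: value} dict for each [Term] stanza in an OBO file."""
    term = None
    with open(path, encoding="utf-8") as fh:
        for raw in fh:
            line = raw.strip()
            if line.startswith("["):          # a new stanza begins
                if term:
                    yield term
                term = {} if line == "[Term]" else None
            elif term is not None and ":" in line:
                tag, _, value = line.partition(":")
                term.setdefault(tag.strip(), value.strip())
    if term:
        yield term

for term in parse_obo_terms("sbo.obo"):
    print(term.get("id"), "-", term.get("name"))
```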
To allow programmatic access to the resource, Web Services have been implemented based on Apache Axis for the communication layer and Castor for the validation. [ 4 ] The libraries, full documentation, samples and tutorial are available online .
The SourceForge project can be accessed at http://sourceforge.net/projects/sbo/ .
Since Level 2 Version 2, SBML has provided a mechanism to annotate model components with SBO terms, thereby increasing the semantics of the model beyond the sole topology of interaction and mathematical expression. Modelling tools such as SBMLsqueezer [ 5 ] interpret SBO terms to augment the mathematics in the SBML file. Simulation tools can check the consistency of a rate law, convert reactions from one modelling framework to another (e.g., continuous to discrete), or distinguish between identical mathematical expressions based on different assumptions (e.g., Michaelis–Menten vs. Briggs–Haldane). To add missing SBO terms to models, software such as SBOannotator [ 6 ] can be used. Other tools such as semanticSBML [ 7 ] can use the SBO annotation to integrate individual models into a larger one. The use of SBO is not restricted to the development of models. Resources providing quantitative experimental information such as SABIO Reaction Kinetics will be able to annotate the parameters (what do they mean exactly, how were they calculated) and determine relationships between them.
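A minimal sketch of such an annotation using the python-libsbml bindings is shown below. The calls follow the libSBML API as documented (setSBOTerm and getSBOTermID are defined on SBase objects), though exact signatures are worth confirming against the library's documentation; SBO:0000176, "biochemical reaction", is used as the example term.

```python
import libsbml

document = libsbml.SBMLDocument(3, 1)   # SBML Level 3, Version 1
model = document.createModel()

reaction = model.createReaction()
reaction.setId("R1")
reaction.setSBOTerm("SBO:0000176")      # SBO term for "biochemical reaction"

print(reaction.getSBOTermID())          # -> SBO:0000176
print(libsbml.writeSBMLToString(document)[:120])
```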
All the graphical symbols used in the SBGN languages are associated with an SBO term. This makes it possible, for instance, to help generate SBGN maps from SBML models.
The Systems Biology Pathway Exchange (SBPAX) allows SBO terms to be added to Biological Pathway Exchange (BioPAX) . This links BioPAX to information useful for modelling, especially by adding quantitative descriptions described by SBO.
SBO is built in collaboration by the Computational Neurobiology Group (Nicolas Le Novère, EMBL - EBI , United-Kingdom) and the SBML Team (Michael Hucka, Caltech , USA).
SBO has benefited from the funds of the European Molecular Biology Laboratory and the National Institute of General Medical Sciences . | https://en.wikipedia.org/wiki/Systems_Biology_Ontology |
The Systems Engineering Body of Knowledge ( SEBoK ), formally known as Guide to the Systems Engineering Body of Knowledge, is a wiki-based collection of key knowledge sources and references for systems engineering . [ 1 ] The SEBoK is a curated wiki meaning that the content is managed by an editorial board, and updated on a regular basis. This wiki is a collaboration of three organizations: 1) International Council on Systems Engineering (INCOSE), 2) IEEE Systems Council, and 3) Stevens Institute of Technology. The most recent version (v.2.11) was released on November 25, 2024. [ 2 ]
The Guide was developed over three years, from 2009 to 2012, through the contributions of 70 authors worldwide. During this period, three prototype versions were created. The first prototype (v.0.25) was a document that was released for review in September 2010. However, the final versions were all published online as agreed by the authors in January 2011. This switch to a wiki-based SEBoK began with v.0.50. [ 3 ]
The first version of the SEBoK for public use was published online in September 2012. The initial release was named 2012 product of the year by the International Council on Systems Engineering . [ 4 ] Since then, the guide had several revisions and minor updates leading to the 23rd release, as of Nov 2024. [ 2 ] Version 1.7, released on October 27, 2016, added a new Healthcare Systems Engineering knowledge area. [ 5 ]
According to the site, the guide has a total of 26 knowledge areas distributed among the different parts. However, the majority of these knowledge areas can be grouped to form nine general knowledge areas. The general and specific knowledge areas are: | https://en.wikipedia.org/wiki/Systems_Engineering_Body_of_Knowledge |
Systems Engineering and Technical Advisory (SETA) contractors are government contractors who are contracted to assist the United States Department of Defense (DoD) components, and acquisition programs. (In some areas of DoD, the acronym SETA refers to "Systems Engineering and Technical Assessment" contractors; also refers to "Systems Engineering and Technical Advisors.") SETA contractors provide analysis and engineering services in a consulting capacity, working closely with the government's own engineering staff members. SETA contractors provide the flexibility and quick availability of expertise without the expense and commitment of sustaining the staff long-term.
SETA is an industry term, which the DoD has used since at least 1995, for example in the Software Engineering Institute; [ 1 ] the Defense Acquisition Deskbook, "S"; An Acronym List for the Information Age (Armed Forces Communications and Electronics Association); and the DoD Guide to Integrated Product and Process Development. [ 2 ]
The government often needs to supplement its internal Systems Engineering and Technical Advisory capability in order to meet its frequently changing needs and demands. Through a formal Request for Information (RFI)/ Request for Proposal (RFP) process the government is able to contract with a commercial organization to provide certain services. SETA contractors work alongside government employees often within the same workspace. SETA contractors may participate in government contracting actions and may assist in managing other contracts. A SETA contractor cannot be the Contracting Officer's Technical Representative (COTR) or Assistant Contracting Officer Representative (ACOR), but they may function as the Technical Point of Contact (TPOC). Since SETA contractors may have access to procurement sensitive information there is a risk of conflict of interest (CoI) which is mitigated through Non-Disclosure Agreements (NDAs) and firewalls restricting communications within corporations.
The share of SETA support in DARPA's total R&D expenditures has been estimated at 7.4–9.9%. [ 3 ]
The policy related to SETA contractors can be found in the Federal Acquisition Regulation (FAR), Defense Federal Acquisition Regulation (DFAR) and DoD Instructions.
FAR Part 37 is the starting point for guidance for these types of contracts. Subpart 37.2 defines advisory and assistance services and provides that the use of such services is a legitimate way to improve the prospects for program or systems success. FAR Part 37.201(c) defines engineering and technical services used in support of a program office during the acquisition cycle. FAR 16.505(c) provides that the ordering period of an advisory and assistance services task order contract, including all options or modifications, may not exceed five years unless a longer period is specifically authorized in a law that is applicable to such a contract. DFARS Part 237 provides information for advisory and assistance contracts. FAR Subpart 9.5 addresses the potential for organizational and consultant conflicts of interest. [ 4 ]
Use of SETA contracts and NDAs allows for services involving systems and data and providing assistance and advice. This excludes Inherently Governmental Functions (IGF), as defined in statute by Public Law 105 - Federal Activities Inventory Reform Act (the "FAIR Act") of 1998 and in regulation by OMB Circular A-76. [ 5 ] | https://en.wikipedia.org/wiki/Systems_Engineering_and_Technical_Assistance
Systems Tool Kit (formerly Satellite Tool Kit ), often referred to by its initials STK , is a multi-physics software application from Analytical Graphics, Inc. (an Ansys company) that enables engineers and scientists to perform complex analyses of ground, sea, air, and space platforms, and to share results in one integrated environment. [ 1 ] At the core of STK is a geometry engine for determining the time-dynamic position and attitude of objects ("assets"), and the spatial relationships among the objects under consideration including their relationships or accesses given a number of complex, simultaneous constraining conditions. STK has been developed since 1989 as a commercial off the shelf software tool. Originally created [ 2 ] to solve problems involving Earth-orbiting satellites , it is now used in the aerospace and defense communities and for many other applications.
STK is used in government, commercial, and defense applications around the world. Clients of AGI are organizations such as NASA , ESA , CNES , DLR , Boeing , JAXA , ISRO , Lockheed Martin , Northrop Grumman , Airbus , The US DoD , and Civil Air Patrol . [ 2 ]
In 1989, the three founders of Analytical Graphics, Inc. — Paul Graziani, Scott Reynolds, and Jim Poland, left GE Aerospace to create Satellite Tool Kit (STK) as an alternative to bespoke, project-specific aerospace software. [ 3 ]
The original version of STK ran only on Sun Microsystems computers, but as PCs became more powerful, the code was converted to run on Windows .
STK was first adopted by the aerospace community [ when? ] for orbit analysis and access calculations (when a satellite can see a ground-station or image target), but as the software was expanded, more modules were added that included the ability to perform calculations for communications systems, radar , interplanetary missions and orbit collision avoidance.
The addition of 3D viewing capabilities led to the adoption of the STK by military users for real-time visualization of air, land and sea forces as well as the space domain. STK has also been used by news organizations to graphically depict current events to a wider audience, including the deorbit of Russia's Mir Space Station , the Space Shuttle Columbia disaster , the Iridium/Cosmos collision , the asteroid 2012 DA14 close approach and various North Korea missile tests.
With version 12.1 (released in 2020), the software's name changed from Satellite Tool Kit to Systems Tool Kit to reflect its applicability to land, sea, air, and space systems. [ 4 ]
In 2019, Dutch amateur skywatcher Marco Langbroek used STK to analyze a high-resolution photograph of an Iranian launch site accident tweeted by former US President Donald Trump . [ 5 ] It was "the first time in three and a half decades that an image [had] become public that [revealed] the sophistication of US spy satellites in orbit." [ 5 ] Langbroek and astronomer Cees Bassa, identified the specific classified spysat ( USA-224 , a KH-11 satellite with an objective mirror as large as the Hubble Space Telescope ) that had taken the photograph, and the time when it was taken on a particular satellite pass. [ 6 ] [ 5 ]
The STK interface is a standard GUI display with customizable toolbars and dockable maps and 3D graphic windows. All analysis can be done through mouse and keyboard interaction.
The STK Integration module provides a scripting interface named Connect that enables STK to act within a client/server environment (via TCP/IP ) and is language independent. Users also have the option of using STK programmatically via OLE automation .
Each analysis or design space within STK is called a scenario . Within each scenario any number of satellites, aircraft, targets, ships, communications systems or other objects can be created. Each scenario defines the default temporal limits to the child objects, as well as the base unit selection and properties. All of these properties can be overridden for each child object individually, as necessary. Only one scenario may exist at any one time, although data can be exported and reused in subsequent analyses.
For each object within a scenario, reports and graphics (both static and dynamic) may be created. Relative parameters between one object and another can also be reported, and the effect of real-world restrictions ( constraints ) enabled so that more accurate reporting is obtained. Through the use of the constellation and chains objects, multiple child objects may be grouped together and the multipath interactions between them investigated.
AGI also offers software development kits for embedding STK capabilities into third-party applications or creating new applications based on AGI technology.
STK is a modular product, in much the same way as MATLAB and Simulink, and allows users to add modules to the baseline package to enhance specific functions.
STK can be embedded within another application (as an ActiveX component) or controlled from an external application (through TCP/IP or Component Object Model (COM)). Both integration techniques can make use of the Connect scripting language to accomplish this task. There is also an object model for more "programmer oriented" integration methodologies. STK can be driven from a script that is run from the STK internal web browser in the free version of the tool. To control STK from an external source, or embed STK in another application requires the STK Integration module.
Since Connect is a messaging format, it has the advantage of being completely language independent. This allows applications and client tools to be created in the programming language of the user's or developer's choice. In practice, as long as it is possible to create a socket connection , send information through that socket, and receive information back the same way, STK can be controlled with Connect using that language.
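A minimal client in this style might look like the sketch below. The default Connect port (5001) and the command strings are assumptions drawn from typical Connect usage and should be checked against the Connect command reference for the installed STK version.

```python
import socket

def send_connect(sock, command):
    """Send one ASCII Connect command and return STK's raw reply."""
    sock.sendall((command + "\n").encode("ascii"))
    return sock.recv(4096).decode("ascii", errors="replace")

# Assumes STK is running locally with its Connect socket enabled.
with socket.create_connection(("localhost", 5001)) as sock:
    print(send_connect(sock, "New / Scenario DemoScenario"))
    print(send_connect(sock, "New / */Satellite DemoSat"))
```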
Applications have been developed in C , C++ , C# , Perl , Visual Basic , VBScript , Java , JavaScript and MATLAB . Examples can also be found in the STK help files or downloaded from the AGI website. | https://en.wikipedia.org/wiki/Systems_Tool_Kit |
A systems analyst , also known as business technology analyst , is an information technology (IT) professional who specializes in analyzing, designing and implementing information systems . Systems analysts assess the suitability of information systems in terms of their intended outcomes and liaise with end users , software vendors and programmers in order to achieve these outcomes. [ 1 ] A systems analyst is a person who uses analysis and design techniques to solve business problems using information technology. Systems analysts may serve as change agents who identify the organizational improvements needed, design systems to implement those changes, and train and motivate others to use the systems. [ 2 ]
As of 2015, the sectors employing the greatest numbers of computer systems analysts were state government, insurance , computer system design, professional and commercial equipment, and company and enterprise management. The number of jobs in this field is projected to grow from 487,000 as of 2009 to 650,000 by 2016. The U.S. Bureau of Labor Statistics (BLS) Occupational Outlook predicted 25% growth for computer systems analysts from 2012 to 2022, [ 3 ] but the BLS has gradually lowered its estimates and now projects only 10% growth for 2022 to 2032, saying: "Many of those openings are expected to result from the need to replace workers who transfer to different occupations or exit the labor force, such as to retire." [ 4 ]
This job ranked third best in a 2010 survey, [ 5 ] fifth best in the 2011 survey, 9th best in the 2012 survey and the 10th best in the 2013 survey. [ 6 ] | https://en.wikipedia.org/wiki/Systems_analyst |
Systems and Synthetic Biology is a peer-reviewed scientific journal covering systems and synthetic biology . It was established in 2007 and was published by Springer Science+Business Media . The editors-in-chief were Pawan K. Dhar ( University of Kerala ) and Ron Weiss ( Massachusetts Institute of Technology ). The journal's last volume was in 2015. [ 1 ]
The journal is abstracted and indexed in:
| https://en.wikipedia.org/wiki/Systems_and_Synthetic_Biology
The systems architect is an information and communications technology professional. Systems architects define the architecture of a computerized system (i.e., a system composed of software and hardware) in order to fulfill certain requirements . Such definitions include: a breakdown of the system into components, the component interactions and interfaces (including with the environment, especially the user), and the technologies and resources to be used in its design and implementation.
The systems architect's work should seek to avoid implementation issues and readily permit unanticipated extensions/modifications in future stages. Because of the extensive experience required for this, the systems architect is typically a very senior technologist with substantial, but general, knowledge of hardware, software, and similar (user) systems. Above all, the systems architect must be reasonably knowledgeable of the users' domain of experience. For example, the architect of an air traffic system needs to be more than superficially familiar with all of the tasks of an air traffic system, including those of all levels of users.
The title of systems architect connotes higher-level design responsibilities than a systems engineer , software engineer or programmer , though day-to-day activities may overlap.
Systems architects interface with multiple stakeholders in an organization in order to understand the various levels of requirements, the domain, the viable technologies, and anticipated development process. Their work includes determining multiple design and implementation alternatives, assessing such alternatives based on all identified constraints (such as cost, schedule, space, power, safety, usability, reliability, maintainability, availability, and other "ilities" ), and selecting the most suitable options for further design. The output of such work sets the core properties of the system and those that are hardest to change later.
In small systems the architecture is typically defined directly by the developers. However, in larger systems, a systems architect should be appointed to outline the overall system, and to interface between the users, sponsors, and other stakeholders on one side and the engineers on the other. Very large, highly complex systems may include multiple architects, in which case the architects work together to integrate their subsystems or aspects, and respond to a chief architect responsible for the entire system. In general, the role of the architect is to act as a mediator between the users and the engineers, reconciling the users' needs and requirements with what the engineers have determined to be doable within the given (engineering) constraints.
In systems design , the architects (and engineers) are responsible for:
Large systems architecture was developed as a way to handle systems too large for one person to conceive of, let alone design. Systems of this size are rapidly becoming the norm, so architectural approaches and architects are increasingly needed to solve the problems of large to very large systems. In general, increasingly large systems are reduced to 'human' proportions by a layering approach, where each layer is composed of a number of individually comprehensible sub-layers, each having its own principal engineer and/or architect. A complete layer at one level will be shown as a functional 'component' of a higher layer (and may disappear altogether at the highest layers).
Architects are expected to understand human needs and develop humanly functional and aesthetically-pleasing products. A good architect is also the principal keeper of the users' vision of the end product, and of the process of deriving requirements from and implementing that vision.
Architects do not follow exact procedures. They communicate with users/sponsors in a highly interactive, relatively informal way— together they extract the true requirements necessary for the designed (end) system. The architect must remain constantly in communication with the end users and with the (principal) systems engineers. Therefore, the architect must be intimately familiar with the users' environment and problem, and with the engineering environment(s) of likely solution spaces.
The user requirements specification should be a joint product of the users and architect: the users bring their needs and wish list, the architect brings knowledge of what is likely to prove doable within the cost, time and other constraints. The point at which the users' needs are translated into a set of high-level requirements is also the best time to write the first version of the acceptance test , which should, thereafter, be religiously kept up to date with the requirements. That way, the users will be absolutely clear about what they are getting. It is also a safeguard against untestable requirements, misunderstandings, and requirements creep.
The development of the first level of engineering requirements is not a purely analytical exercise and should also involve both the architect and engineer. If any compromises are to be made to meet constraints, the architect must ensure that the final product and overall look and feel do not stray very far from the users' intent. The engineer should focus on developing a design that optimizes the constraints but ensures a workable, reliable, extensible and robust product. The provision of needed services to the users is the true function of an engineered system. However, as systems become ever larger and more complex, and as their emphases move away from simple hardware and software components, the narrow application of traditional systems development principles has been found to be insufficient: the application of more general principles of systems, hardware, and software architecture to the design of (sub)systems is seen to be needed. Architecture may also be seen as a simplified model of the finished end product: its primary function is to define the parts and their relationships to each other so that the whole can be seen to be a consistent, complete, and correct representation of what the users had in mind, especially for the computer-human interface. It is also used to ensure that the parts fit together and relate in the desired way.
It is necessary to distinguish between the architecture of the users' world and the engineered systems architecture. The former represents and addresses problems and solutions in the user's world. It is principally captured in the computer-human-interfaces (CHI) of the engineered system. The engineered system represents the engineering solutions— how the engineer proposes to develop and/or select and combine the components of the technical infrastructure to support the CHI. In the absence of an experienced architect, there is an unfortunate tendency to confuse the two architectures. But the engineer thinks in terms of hardware and software and the technical solution space, whereas the users may be thinking in terms of solving a problem of getting people from point A to point B in a reasonable amount of time and with a reasonable expenditure of energy, or of getting needed information to customers and staff. A systems architect is expected to combine knowledge of both the architecture of the users' world and of (all potentially useful) engineering systems architectures . The former is a joint activity with the users; the latter is a joint activity with the engineers. The product is a set of high-level requirements reflecting the users' requirements which can be used by the engineers to develop systems design requirements.
Because requirements evolve over the course of a project, especially a long one, an architect is needed until the system is accepted by the user: the architect ensures that all changes and interpretations made during the course of development do not compromise the users' viewpoint.
Architects are generalists. They are not expected to be experts in any one technology but are expected to be knowledgeable of many technologies and able to judge their applicability to specific situations. They apply their knowledge to practical situations, evaluate the cost/benefits of various solutions using different technologies, for example, hardware versus software versus manual procedures, and assure that the system as a whole performs according to the users' expectations.
Many commercial-off-the-shelf or already developed hardware and software components may be selected independently according to constraints such as cost, response, throughput, etc. In some cases, the architect can then assemble the end system (almost) unaided. Or, s/he may still need the help of a hardware or software engineer to select components and to design and build any special-purpose function. The architects (or engineers) may also enlist the aid of other specialists in safety , security , communications , special-purpose hardware, graphics , human factors , test and evaluation , quality control , reliability , maintainability , availability , interface management, etc. An effective systems architectural team must have access to specialists in critical specialties as needed.
An architect planning a building works on the overall design, making sure it will be pleasing and useful to its inhabitants. While a single architect by himself may be enough to build a single-family house, many engineers may be needed, in addition, to solve the detailed problems that arise when a novel high-rise building is designed. If the job is large and complex enough, parts of the architecture may be designed as independent components. That is, if we are building a housing complex, we may have one architect for the complex, and one for each type of building, as part of an architectural team.
Large automation systems also require an architect and much engineering talent. If the engineered system is large and complex enough, the systems architect may defer to a hardware architect and/or a software architect for parts of the job, although they all may be members of a joint architectural team.
The architect should sub-allocate the system requirements to major components or subsystems that are within the scope of a single hardware or software engineer, or engineering manager and team. But the architect must never be viewed as an engineering supervisor. (If the item is sufficiently large and/or complex, the chief architect will sub-allocate portions to more specialized architects.) Ideally, each such component/subsystem is a sufficiently stand-alone object that it can be tested as a complete component, separate from the whole, using only a simple testbed to supply simulated inputs and record outputs. That is, it is not necessary to know how an air traffic control system works in order to design and build a data management subsystem for it. It is only necessary to know the constraints under which the subsystem will be expected to operate.
A good architect ensures that the system, however complex, is built upon relatively simple and "clean" concepts for each (sub)system or layer and is easily understandable by everyone, especially the users, without special training. The architect will use a minimum of heuristics to ensure that each partition is well defined and clean of kludges , work-arounds , short-cuts , or confusing detail and exceptions. As users' needs evolve (once the system is fielded and in use), it is much easier to evolve a simple concept than one laden with exceptions, special cases, and much "fine print."
Layering the architecture is important for keeping the architecture sufficiently simple at each layer so that it remains comprehensible to a single mind. As layers are ascended, whole systems at lower layers become simple components at the higher layers, and may disappear altogether at the highest layers.
The acceptance test is a principal responsibility of the systems architect. It is the chief means by which the program lead will prove to the users that the system is as originally planned and that all involved architects and engineers have met their objectives.
A building architect uses sketches, models, and drawings. An automation systems (or software or hardware) architect should use sketches, models, and prototypes to discuss different solutions and results with users, engineers, and other architects. An early, draft version of the users' manual is invaluable, especially in conjunction with a prototype. Nevertheless, it is important that a workable, well-written set of requirements, or specification , be created which is reasonably understandable to the customer, so that they can properly sign off on it (though the principal users' requirements should be captured in a preliminary users' manual for intelligibility). It must use precise and unambiguous language so that designers and other implementers are left in no doubt as to meanings or intentions. In particular, all requirements must be testable , and the initial draft of the test plan should be developed contemporaneously with the requirements. All stakeholders should sign off on the acceptance test descriptions, or equivalent, as the sole determinant of the satisfaction of the requirements, at the outset of the program.
The use of any form of the word "architect" is regulated by "title acts" in many states in the US, and a person must be licensed as a building architect to use it. [ 1 ]
In the UK, the Architects Registration Board excludes the usage of "architect" (when used in the context of software and IT) from its restricted usage. [ 2 ] | https://en.wikipedia.org/wiki/Systems_architect
A system architecture is the conceptual model that defines the structure , behavior , and views of a system . [ 1 ] An architecture description is a formal description and representation of a system, organized in a way that supports reasoning about the structures and behaviors of the system.
A system architecture can consist of system components and their sub-systems, developed to work together to implement the overall system. There have been efforts to formalize languages to describe system architecture; collectively these are called architecture description languages (ADLs). [ 2 ] [ 3 ] [ 4 ]
Various organizations can define systems architecture in different ways, including:
One can think of system architecture as a set of representations of an existing (or future) system. These representations initially describe a general, high-level functional organization, and are progressively refined to more detailed and concrete descriptions.
System architecture conveys the informational content of the elements comprising a system, the relationships among those elements, and the rules governing those relationships. The components and relationships that an architecture description captures may consist of hardware, software , documentation, facilities, manual procedures, or roles played by organizations or people.
A system architecture primarily concentrates on the internal interfaces among the system's components or subsystems , and on the interface(s) between the system and its external environment, especially the user . (In the specific case of computer systems, this latter, special, interface is known as the computer human interface , AKA human computer interface, or HCI ; formerly called the man-machine interface.)
One can contrast a system architecture with system architecture engineering (SAE) - the method and discipline for effectively implementing the architecture of a system: [ 13 ]
Systems architecture depends heavily on practices and techniques which were developed over thousands of years in many other fields, perhaps the most important being civil architecture.
With the increasing complexity of digital systems , modern systems architecture has evolved to incorporate advanced principles such as modularization , microservices, and artificial intelligence-driven optimizations. Cloud computing , edge computing, and distributed ledger technologies (DLTs) have also influenced architectural decisions, enabling more scalable, secure, and fault-tolerant designs.
One of the most significant shifts in recent years has been the adoption of Software-Defined Architectures (SDA) , which decouple hardware from software , allowing systems to be more flexible and adaptable to changing requirements. [ 14 ] This trend is particularly evident in network architectures, where Software-Defined Networking (SDN) [ citation needed ] and Network Function Virtualization (NFV) enable more dynamic management of network resources. [ 15 ]
In addition, AI-enhanced system architectures have gained traction, leveraging machine learning for predictive maintenance , anomaly detection , and automated system optimization . The rise of cyber-physical systems (CPS) and digital twins has further extended system architecture principles beyond traditional computing, integrating real-world data into virtual models for better decision-making. [ 16 ]
With the rise of edge computing , system architectures now focus on decentralization and real-time processing , reducing dependency on centralized data centers and improving latency-sensitive applications such as autonomous vehicles , robotics , and IoT networks . [ 4 ]
These advancements continue to redefine how systems are designed, leading to more resilient, scalable, and intelligent architectures suited for the digital age.
Several types of system architectures exist, each catering to different domains and applications. While all system architectures share fundamental principles of structure, behavior, and interaction, [ 17 ] they vary in design based on their intended purpose, and have been identified as follows: [ 18 ] | https://en.wikipedia.org/wiki/Systems_architecture
Systems biology is the computational and mathematical analysis and modeling of complex biological systems . It is a biology -based interdisciplinary field of study that focuses on complex interactions within biological systems, using a holistic approach ( holism instead of the more traditional reductionism ) to biological research. [ 1 ] This multifaceted research domain necessitates the collaborative efforts of chemists, biologists, mathematicians, physicists, and engineers to decipher the biology of intricate living systems by merging various quantitative molecular measurements with carefully constructed mathematical models. It represents a comprehensive method for comprehending the complex relationships within biological systems. In contrast to conventional biological studies that typically center on isolated elements, systems biology seeks to combine different biological data to create models that illustrate and elucidate the dynamic interactions within a system. This methodology is essential for understanding the complex networks of genes, proteins, and metabolites that influence cellular activities and the traits of organisms. [ 2 ] [ 3 ] One of the aims of systems biology is to model and discover emergent properties of cells, tissues and organisms functioning as a system whose theoretical description is only possible using techniques of systems biology. [ 1 ] [ 4 ] By exploring how function emerges from dynamic interactions, systems biology bridges the gaps that exist between molecules and physiological processes.
As a paradigm , systems biology is usually defined in antithesis to the so-called reductionist paradigm ( biological organisation ), although it is consistent with the scientific method . The distinction between the two paradigms is referred to in these quotations: "the reductionist approach has successfully identified most of the components and many of the interactions but, unfortunately, offers no convincing concepts or methods to understand how system properties emerge ... the pluralism of causes and effects in biological networks is better addressed by observing, through quantitative measures, multiple components simultaneously and by rigorous data integration with mathematical models." (Sauer et al. ) [ 5 ] "Systems biology ... is about putting together rather than taking apart, integration rather than reduction. It requires that we develop ways of thinking about integration that are as rigorous as our reductionist programmes, but different. ... It means changing our philosophy, in the full sense of the term." ( Denis Noble ) [ 6 ]
Systems biology can also be viewed as a series of operational protocols used for performing research, namely a cycle composed of theory; analytic or computational modelling to propose specific testable hypotheses about a biological system; experimental validation; and then use of the newly acquired quantitative description of cells or cell processes to refine the computational model or theory. [ 7 ] Since the objective is a model of the interactions in a system, the experimental techniques that most suit systems biology are those that are system-wide and attempt to be as complete as possible. Therefore, transcriptomics , metabolomics , proteomics and high-throughput techniques are used to collect quantitative data for the construction and validation of models. [ 8 ]
A comprehensive systems biology approach necessitates: (i) a thorough characterization of an organism concerning its molecular components, the interactions among these molecules, and how these interactions contribute to cellular functions; (ii) a detailed spatio-temporal molecular characterization of a cell (for example, component dynamics, compartmentalization, and vesicle transport); and (iii) an extensive systems analysis of the cell's 'molecular response' to both external and internal perturbations. Furthermore, the data from (i) and (ii) should be synthesized into mathematical models to test knowledge by generating predictions (hypotheses), uncovering new biological mechanisms, assessing the system's behavior derived from (iii), and ultimately formulating rational strategies for controlling and manipulating cells. To tackle these challenges, systems biology must incorporate methods and approaches from various disciplines that have not traditionally interfaced with one another. [ 9 ] The emergence of multi-omics technologies has transformed systems biology by providing extensive datasets that cover different biological layers, including genomics, transcriptomics, proteomics, and metabolomics. These technologies enable the large-scale measurement of biomolecules, leading to a more profound comprehension of biological processes and interactions. [ 10 ] Increasingly, methods such as network analysis, machine learning, and pathway enrichment are utilized to integrate and interpret multi-omics data, thereby improving our understanding of biological functions and disease mechanisms. [ 11 ]
Holism vs. Reductionism
It is challenging to trace the origins and beginnings of systems biology. A comprehensive perspective on the human body was central to the medical practices of Greek, Roman, and East Asian traditions, where physicians and thinkers like Hippocrates believed that health and illness were linked to the equilibrium or disruption of bodily fluids known as humors. This holistic perspective persisted in the Western world throughout the 19th and 20th centuries, with prominent physiologists viewing the body as controlled by various systems, including the nervous system, the gastrointestinal system, and the cardiovascular system. In the latter half of the 20th century, however, this way of thinking was largely supplanted by reductionism: [ 12 ] [ 13 ] To grasp how the body functions properly, one needed to comprehend the role of each component, from tissues and cells to the complete set of intracellular molecular building blocks. [ 14 ]
In the 17th century, the triumphs of physics and the advancement of mechanical clockwork prompted a reductionist viewpoint in biology, interpreting organisms as intricate machines made up of simpler elements. [ 15 ]
Jan Smuts (1870–1950), naturalist/philosopher and twice Prime Minister of South Africa, coined the commonly used term holism. Whole systems such as cells, tissues, organisms, and populations were proposed to have unique (emergent) properties. The behavior of the whole could not be reassembled from the properties of the individual components, and new technologies were necessary to define and understand the behavior of systems. [ 15 ]
Even though reductionism and holism are often contrasted with one another, they can be synthesized. One must understand how organisms are built (reductionism), while it is just as important to understand why they are so arranged (systems; holism). Each provides useful insights and answers different questions. However, the study of biological systems requires knowledge about control and design paradigms, as well as principles of structural stability, resilience, and robustness that are not directly inferred from mechanistic information. More profound insight will be gained by employing computer modeling to overcome the complexity in biological systems. [ 15 ]
Nevertheless, this perspective was consistently balanced by thinkers who underscored the significance of organization and emergent traits in living systems. This reductionist perspective has achieved remarkable success, and our understanding of biological processes has expanded with incredible speed and intensity. However, alongside these extraordinary advancements, science gradually came to understand that possessing complete information about molecular components alone would not suffice to elucidate the workings of life: the individual components rarely illustrate the function of a complex system. It is now commonly recognized that we need approaches for reconstructing integrated systems from their constituent parts and processes if we are to comprehend biological phenomena and manipulate them in a thoughtful, focused way. [ 16 ]
Origin of systems biology as a field
In 1968, the term "systems biology" was first introduced at a conference. [ 18 ] Those within the discipline soon recognized—and this understanding gradually became known to the wider public—that computational approaches were necessary to fully articulate the concepts and potential of systems biology. Specifically, these techniques needed to view biological phenomena as complex, multi-layered, adaptive, and dynamic systems. They had to account for transformations and intricate nonlinearities, thereby allowing for the smooth integration of smaller models ("modules") into larger, well-organized assemblies of models within complex settings. It became clear that mathematics and computation were vital for these methods. [ 19 ] [ 20 ] [ 21 ] [ 22 ] An acceleration of systems understanding came with the publication of the first ground-breaking text compiling molecular, physiological, and anatomical individuality in animals, [ 23 ] which has been described as a revolution. [ 24 ]
Initially, the wider scientific community was reluctant to accept the integration of computational methods and control theory in the exploration of living systems, believing that "biology was too complex to apply mathematics." However, as the new millennium neared, this viewpoint underwent a significant and lasting transformation. [ 14 ] More scientists started working on the integration of mathematical concepts to understand and solve biological problems. Systems biology has now been widely applied in several fields, including agriculture and medicine .
Top-down systems biology identifies molecular interaction networks by analyzing the correlated behaviors observed in large-scale 'omics' studies. With the advent of 'omics', this top-down strategy has become a leading approach. It begins with an overarching perspective of the system's behavior – examining everything at once – by gathering genome-wide experimental data and seeks to unveil and understand biological mechanisms at a more granular level – specifically, the individual components and their interactions. In this framework of 'top-down' systems biology, the primary goal is to uncover novel molecular mechanisms through a cyclical process that initiates with experimental data, transitions into data analysis and integration to identify correlations among molecule concentrations, and concludes with the development of hypotheses regarding the co- and inter-regulation of molecular groups. These hypotheses then generate new predictions of correlations, which can be explored in subsequent experiments or through additional biochemical investigations. [ 25 ] The notable advantages of top-down systems biology lie in its potential to provide comprehensive (i.e., genome-wide) insights and its focus on the metabolome, fluxome, transcriptome, and/or proteome. Top-down methods prioritize overall system states as influencing factors in models and the computational (or optimality) principles that govern the dynamics of the global system. For instance, while the dynamics of neural motor control emerge from the interactions of millions of neurons, one can also characterize the neural motor system as a sort of feedback control system, which directs a 'plant' (the body) and guides movement by minimizing 'cost functions' (e.g., achieving trajectories with minimal jerk). [ 26 ]
Bottom-up systems biology infers the functional characteristics that may arise from a subsystem characterized with a high degree of mechanistic detail using molecular techniques. This approach begins with the foundational elements by developing the interactive behavior (rate equation) of each component process (e.g., enzymatic processes) within a manageable portion of the system. It examines the mechanisms through which functional properties arise in the interactions of known components. Subsequently, these formulations are combined to understand the behavior of the system. The primary goal of this method is to integrate the pathway models into a comprehensive model representing the entire system - the top or whole. As research and understanding advance, these models are often expanded by incorporating additional processes with high mechanistic detail. [ 26 ]
The bottom-up approach facilitates the integration and translation of drug-specific in vitro findings to the in vivo human context. This encompasses data collected during the early phases of drug development, such as safety evaluations. When assessing cardiac safety, a purely bottom-up modeling and simulation method entails reconstructing the processes that determine exposure, which includes the plasma (or heart tissue) concentration-time profiles and their electrophysiological implications, ideally incorporating hemodynamic effects and changes in contractility. Achieving this necessitates various models, ranging from single-cell to advanced three-dimensional (3D) multiphase models. Information from multiple in vitro systems that serve as stand-ins for the in vivo absorption, distribution, metabolism, and excretion (ADME) processes enables predictions of drug exposure, while in vitro data on drug-ion channel interactions support the translation of exposure to body surface potentials and the calculation of important electrophysiological endpoints. The separation of data related to the drug, system, and trial design, which is characteristic of the bottom-up approach, allows for predictions of exposure-response relationships considering both inter- and intra-individual variability, making it a valuable tool for evaluating drug effects at a population level. Numerous successful instances of applying physiologically based pharmacokinetic (PBPK) modeling in drug discovery and development have been documented in the literature. [ 27 ]
Under the interpretation of systems biology as the analysis of large data sets with interdisciplinary tools, a typical application is metabolomics , which is the complete set of all the metabolic products, metabolites , in the system at the organism, cell, or tissue level. [ 28 ]
Data types that may be collected in such databases include: phenomics , organismal variation in phenotype as it changes during its life span; genomics , organismal deoxyribonucleic acid (DNA) sequence, including intra-organismal cell-specific variation (e.g., telomere length variation); epigenomics / epigenetics , organismal and corresponding cell-specific transcriptomic regulating factors not empirically coded in the genomic sequence (e.g., DNA methylation , histone acetylation and deacetylation ); transcriptomics , organismal, tissue or whole-cell gene expression measurements by DNA microarrays or serial analysis of gene expression ; interferomics , organismal, tissue, or cell-level transcript correcting factors (e.g., RNA interference ); proteomics , organismal, tissue, or cell-level measurements of proteins and peptides via two-dimensional gel electrophoresis , mass spectrometry or multi-dimensional protein identification techniques (advanced HPLC systems coupled with mass spectrometry ), with sub-disciplines including phosphoproteomics , glycoproteomics and other methods to detect chemically modified proteins; glycomics , organismal, tissue, or cell-level measurements of carbohydrates ; and lipidomics , organismal, tissue, or cell-level measurements of lipids . [ citation needed ]
The molecular interactions within the cell are also studied; this is called interactomics . [ 29 ] A discipline in this field of study is protein–protein interactions , although interactomics includes the interactions of other molecules as well. [ 30 ] Related fields include neuroelectrodynamics , in which a computer's or a brain's computing function is studied as a dynamic system along with its (bio)physical mechanisms, [ 31 ] and fluxomics , the measurement of the rates of metabolic reactions in a biological system (cell, tissue, or organism). [ 28 ]
There are two main approaches to a systems biology problem: top-down and bottom-up. The top-down approach takes as much of the system into account as possible and relies largely on experimental results. The RNA-Seq technique is an example of an experimental top-down approach. Conversely, the bottom-up approach is used to create detailed models while also incorporating experimental data. An example of the bottom-up approach is the use of circuit models to describe a simple gene network. [ 32 ]
Various technologies are utilized to capture dynamic changes in mRNA, proteins, and post-translational modifications. Related disciplines include mechanobiology , the study of forces and physical properties at all scales and their interplay with other regulatory mechanisms; [ 33 ] biosemiotics , the analysis of the system of sign relations of an organism or other biosystems; and physiomics , the systematic study of the physiome in biology.
Cancer systems biology is an example of the systems biology approach, which can be distinguished by the specific object of study ( tumorigenesis and treatment of cancer ). It works with the specific data (patient samples, high-throughput data with particular attention to characterizing the cancer genome in patient tumour samples) and tools (immortalized cancer cell lines , mouse models of tumorigenesis, xenograft models, high-throughput sequencing methods, siRNA-based gene knockdown high-throughput screenings , computational modeling of the consequences of somatic mutations and genome instability ). [ 34 ] The long-term objective of the systems biology of cancer is the ability to better diagnose cancer, classify it and better predict the outcome of a suggested treatment, which is the basis for personalized cancer medicine and the virtual cancer patient in the more distant future. Significant efforts in computational systems biology of cancer have been made in creating realistic multi-scale in silico models of various tumours. [ 35 ]
The systems biology approach often involves the development of mechanistic models, such as the reconstruction of dynamic systems from the quantitative properties of their elementary building blocks. [ 36 ] [ 37 ] [ 38 ] [ 39 ] For instance, a cellular network can be modelled mathematically using methods coming from chemical kinetics [ 40 ] and control theory . Due to the large number of parameters, variables and constraints in cellular networks, numerical and computational techniques are often used (e.g., flux balance analysis ). [ 38 ] [ 40 ]
Other aspects of computer science, informatics , and statistics are also used in systems biology. These include new forms of computational models, such as the use of process calculi to model biological processes (notable approaches include stochastic π-calculus , BioAmbients, Beta Binders, BioPEPA, and Brane calculus) and constraint -based modeling; integration of information from the literature, using techniques of information extraction and text mining ; [ 41 ] development of online databases and repositories for sharing data and models; approaches to database integration and software interoperability via loose coupling of software, websites and databases, or commercial suites; and network-based approaches for analyzing high-dimensional genomic data sets. For example, weighted correlation network analysis is often used for identifying clusters (referred to as modules), modeling the relationship between clusters, calculating fuzzy measures of cluster (module) membership, identifying intramodular hubs, and for studying cluster preservation in other data sets; pathway-based methods for omics data analysis, e.g. approaches to identify and score pathways with differential activity of their gene, protein, or metabolite members. [ 42 ] Much of the analysis of genomic data sets also includes identifying correlations. Additionally, as much of the information comes from different fields, the development of syntactically and semantically sound ways of representing biological models is needed. [ 43 ]
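The core step of weighted correlation network analysis can be sketched in a few lines. The sketch below, which assumes a toy expression matrix and a soft-thresholding power of 6 (a common but not universal choice), is only an illustration of the idea; real analyses typically use the dedicated WGCNA R package.

```python
import numpy as np

# Toy expression matrix: rows = samples, columns = genes (illustrative values).
rng = np.random.default_rng(0)
expr = rng.normal(size=(20, 6))

# Pairwise Pearson correlation between genes.
corr = np.corrcoef(expr, rowvar=False)

# Soft thresholding: raise |correlation| to an assumed power beta so that weak
# correlations are suppressed smoothly rather than cut off at a hard threshold.
beta = 6
adjacency = np.abs(corr) ** beta
np.fill_diagonal(adjacency, 0.0)

# Intramodular connectivity: the sum of a gene's weights to all other genes.
# Genes with the highest connectivity are candidate "hub" genes.
connectivity = adjacency.sum(axis=0)
print("candidate hub gene index:", int(np.argmax(connectivity)))
```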
A model serves as a conceptual depiction of objects or processes, highlighting certain characteristics of these items or activities. A model captures only certain facets of reality; however, when created correctly, this limited scope is adequate because the primary goal of modeling is to address specific inquiries. [ 44 ] The saying, "essentially, all models are wrong, but some are useful," attributed to the statistician George Box, is a suitable principle for constructing models. [ 45 ]
Researchers begin by choosing a biological pathway and diagramming all of the protein, gene, and/or metabolic pathways. After determining all of the interactions, mass action kinetics or enzyme kinetic rate laws are used to describe the speed of the reactions in the system. Using mass conservation, the differential equations for the biological system can be constructed. Experiments or parameter fitting can be done to determine the parameter values to use in the differential equations . [ 68 ] These parameter values will be the various kinetic constants required to fully describe the model. This model determines the behavior of species in biological systems and brings new insight into the specific activities of systems. Sometimes it is not possible to gather all reaction rates of a system. Unknown reaction rates are determined by simulating the model of known parameters and target behavior, which provides possible parameter values. [ 69 ] [ 70 ]
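As a concrete illustration of this workflow, the minimal sketch below integrates a hypothetical two-step mass-action pathway A → B → C with SciPy; the species names and the rate constants k1 and k2 are assumptions made for illustration, not values from any particular study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Mass-action kinetics for the toy pathway A -k1-> B -k2-> C.
# k1 and k2 are assumed kinetic constants (units: 1/time).
k1, k2 = 0.5, 0.2

def rhs(t, y):
    A, B, C = y
    dA = -k1 * A          # A is consumed by the first reaction
    dB = k1 * A - k2 * B  # B is produced from A and consumed to form C
    dC = k2 * B           # C accumulates; note dA + dB + dC = 0 (mass conservation)
    return [dA, dB, dC]

sol = solve_ivp(rhs, (0.0, 30.0), [1.0, 0.0, 0.0])
print("final concentrations [A, B, C]:", sol.y[:, -1])
```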
The use of constraint-based reconstruction and analysis (COBRA) methods has become popular among systems biologists to simulate and predict the metabolic phenotypes, using genome-scale models. One of the methods is the flux balance analysis (FBA) approach, by which one can study the biochemical networks and analyze the flow of metabolites through a particular metabolic network, by optimizing the objective function of interest (e.g. maximizing biomass production to predict growth). [ 27 ]
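In practice systems biologists usually rely on dedicated packages such as COBRApy, but the core of FBA is simply a linear program: maximize an objective flux subject to steady-state mass balance S·v = 0 and flux bounds. The toy three-reaction network below is an invented example, sketched with SciPy's general-purpose linear programming routine rather than any genome-scale model.

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix S (rows = metabolites, columns = reactions).
# R0: uptake -> M0, R1: M0 -> M1, R2: M1 -> biomass (drain).
S = np.array([
    [1, -1,  0],   # metabolite M0
    [0,  1, -1],   # metabolite M1
])

# Steady state requires S v = 0; the uptake flux is capped at 10 units.
bounds = [(0, 10), (0, 1000), (0, 1000)]

# Maximize the "biomass" flux v2; linprog minimizes, so negate the objective.
c = np.array([0, 0, -1])
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal biomass flux:", res.x[2])  # expected: 10, limited by uptake
```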
Systems biology, an interdisciplinary field that combines biology, data analysis, and mathematical modeling, has revolutionized various sectors, including medicine, agriculture, and environmental science. By integrating omics data (genomics, proteomics, metabolomics, etc.), systems biology provides a holistic understanding of complex biological systems, enabling advancements in drug discovery, crop improvement, and environmental impact assessment, in both industrial and academic research. Systems biology is used in agriculture to identify the genetic and metabolic components of complex characteristics through trait dissection. [ 91 ] It aids in the comprehension of plant-pathogen interactions in disease resistance. [ 92 ] It is utilized in nutritional quality to enhance nutritional content through metabolic engineering. [ 93 ]
Approaches to cancer systems biology have made it possible to effectively combine experimental data with computer algorithms and, in some cases, to apply actionable targeted medicines for the treatment of cancer. In order to apply innovative cancer systems biology techniques and boost their effectiveness for customizing new, individualized cancer treatment modalities, comprehensive multi-omics data acquired through the sequencing of tumor samples and experimental model systems will be crucial. [ 94 ]
Cancer systems biology has the potential to provide insights into intratumor heterogeneity and identify therapeutic options. In particular, enhanced cancer systems biology methods that incorporate not only multi-omics data from tumors, but also extensive experimental models derived from patients can assist clinicians in their decision-making processes, ultimately aiming to address treatment failures in cancer. [ 94 ]
Before the 1990s, phenotypic drug discovery formed the foundation of most research in drug discovery, utilizing cellular and animal disease models to find drugs without focusing on a specific molecular target. However, following the completion of the human genome project, target-based drug discovery has become the predominant approach in contemporary pharmaceutical research for various reasons. Gene knockout and transgenic models enable researchers to investigate and gain insights into the function of targets and the mechanisms by which drugs operate on a molecular level. Target-based assays lend themselves better to high-throughput screening, which simplifies the process of identifying second-generation drugs—those that improve upon first-in-class drugs in aspects such as potency, selectivity, and half-life, especially when combined with structure-based drug design. To do this, researchers utilize the three-dimensional structure of target proteins and computational models of interactions between small molecules and those targets to aid in the identification of superior compounds. [ 95 ]
Cell systems biology represents a phenotypic drug discovery method that integrates the complexity of human disease biology with combinatorial design to develop assays. [ 96 ] BioMAP® systems, founded on the principles of cell systems biology, consist of assays based on primary human cells that are designed to replicate intricate human disease and tissue biology in a feasible in vitro environment. Primary human cell types and co-cultures are activated using combinations of pathway activators to create cell signaling networks that align more closely with human disease. These systems are analyzed by assessing the levels of both secreted proteins and cell surface mediators. The distinct variations in protein readouts resulting from drug effects are recorded in a database that enables users to search for functional similarities (or biological 'read across'). In this method, inhibitors or activators targeting specific pathways are discovered to consistently affect the levels of multiple endpoints, often exhibiting a uniquely defined pattern, so that the resulting signatures can be linked to particular mechanisms of action. [ 95 ] [ 97 ] [ 98 ]
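The "read across" step described above can be sketched minimally as a nearest-signature lookup: a query compound's pattern of endpoint changes is compared by Pearson correlation against reference signatures with known mechanisms. All endpoint values and mechanism labels below are invented for illustration; real signature databases of this kind are far larger.

```python
import numpy as np

# Reference signatures: change in each measured endpoint for hypothetical
# mechanisms of action (all values invented for illustration).
reference = {
    "NF-kB pathway inhibitor": np.array([-0.8, -0.5,  0.1,  0.0, -0.3]),
    "mTOR pathway inhibitor":  np.array([-0.2,  0.4, -0.9, -0.6,  0.1]),
    "glucocorticoid-like":     np.array([-0.6, -0.7, -0.2,  0.3, -0.5]),
}

# Signature of the query compound across the same endpoints.
query = np.array([-0.7, -0.6, 0.0, 0.1, -0.4])

# Rank reference mechanisms by Pearson correlation with the query signature.
scores = {name: float(np.corrcoef(query, sig)[0, 1]) for name, sig in reference.items()}
for name, r in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: r = {r:.2f}")
```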
The multi-omics technologies in systems biology can also be used in aspects of food quality and safety. High-throughput omics techniques, including genomics, proteomics, and metabolomics, offer valuable insights into the molecular composition of food products, facilitating the identification of critical elements that affect food quality and safety. For example, integrating omics data can enhance the understanding of the metabolic pathways and associated functional gene patterns that contribute to both the nutritional value and safety of food crops. This comprehensive approach guarantees the creation of food products that are both nutritious and safe, capable of satisfying the increasing global demand. [ 99 ] [ 100 ]
Environmental systems biology
Genomics examines all genes as an evolving system over time, aiming to understand their interactions and effects on biological pathways, networks, and physiology in a broader context compared to genetics. [ 101 ] As a result, genomics holds significant potential for discovering clusters of genes associated with complex disorders, aiding in the comprehension and management of diseases induced by environmental factors. [ 102 ]
When exploring the interactions between the environment and the genome as contributors to complex diseases, it is clear that the genome itself cannot be altered for the time being. However, once these interactions are recognized, it is feasible to minimize exposure or adjust lifestyle factors related to the environmental aspect of the disease. [ 103 ] [ 104 ] Gene-environment interactions can occur through direct associations with active metabolites at certain locations within the genome, potentially leading to mutations that could cause human diseases. Indirect interactions with the human genome can take place through intracellular receptors that function as ligand-activated transcription factors, which modulate gene expression and maintain cellular balance, or with an environmental factor that may produce detrimental effects. [ 105 ] This type of environmental-gene interaction could be more straightforward to investigate than direct interactions since there are numerous markers of this kind of interaction that are readily measurable before the disease manifests. Examples of this include the expression of cytochrome P450 genes following exposure to environmental substances, such as the polycyclic aromatic hydrocarbon benzo[a]pyrene, which binds to the Ah receptor. [ 106 ] [ 107 ] [ 108 ]
One of the main challenges in systems biology is the connection between experimental descriptions, observations, data, models, and the assumptions that stem from them. In essence, systems biology must be understood within an information management framework that significantly encompasses experimental life sciences. Models are created using various languages or representation schemes, each suitable for conveying and reasoning about distinct sets of characteristics. There is no single universal language for systems biology that can adequately cover the diverse phenomena we aim to investigate. However, this intricate scenario overlooks two important aspects. Models can be developed in multiple versions over time and by different research teams. Conflicts can occur, and observations may be disputed. Various researchers might produce models in different versions and configurations. The unpredictable elements suggest that systems biology is not likely to yield a definitive collection of established models. Instead, we can expect a rich ecosystem of models to develop within a structure that fosters discussion and cooperation among participants. Challenges also exist in verifying the constraints and creating modeling frameworks with robust compositional strategies. This may create a need to handle models that may conflict with one another, whether between schemes or across different scales. In the end, the goal could involve the creation of personalized models that reflect differences in physiology, as opposed to universal models of biological processes. [ 109 ]
Other challenges include the massive amount of data created by high-throughput omics technologies which presents considerable challenges in terms of computation and storage. Each analysis in omics can result in data files ranging from terabytes to petabytes, which requires strong computational systems and ample storage solutions to manage and process these datasets effectively. [ 110 ] The computational requirements are made more difficult by the necessity for advanced algorithms that can integrate and analyze diverse, high-dimensional data. Approaches like deep learning and network-based methods have displayed potential in tackling these issues, but they also demand significant computational power. [ 111 ]
Utilizing AI in Systems Biology enables scientists to uncover novel insights into the intricate relationships present within biological systems, such as those among genes, proteins, and cells. A significant focus within Systems Biology is the application of AI for the analysis of expansive and complex datasets, including multi-omics data produced by high-throughput methods like next-generation sequencing and proteomics. Approaches powered by AI can be employed to detect patterns and correlations within these datasets and to anticipate the behavior of biological systems under varying conditions. [ 112 ]
For instance, artificial intelligence can identify genes that are expressed differently across various cancer types or detect small molecules linked to particular disease states. [ 113 ] A key difficulty in analyzing multi-omics data is the integration of information from multiple sources. AI can create integrative models that consider the intricate interactions between different types of molecular data. These models may be utilized to uncover new biomarkers or therapeutic targets for diseases, as well as to enhance our understanding of fundamental biological processes. By significantly speeding up our comprehension of complex biological systems, AI has the potential to lead to new treatments and therapies for a range of diseases. [ 112 ]
Structural systems biology is a multidisciplinary field that merges systems biology with structural biology to investigate biological systems at the molecular scale. This domain strives for a thorough understanding of how biological molecules interact and function within cells, tissues, and organisms. The integration of AI in structural systems biology has become increasingly vital for examining extensive and complex datasets and modeling the behavior of biological systems. AI facilitates the analysis of protein–protein interaction networks within structural systems biology. These networks can be explored using graph theory and various mathematical methods, uncovering key characteristics such as hubs and modules. [ 114 ] AI can also assist in the discovery of new drugs or therapies by predicting the effect of a drug on a particular biological component or pathway. [ 115 ] | https://en.wikipedia.org/wiki/Systems_biology |
Systems biomedicine , also called systems biomedical science , [ 1 ] is the application of systems biology to the understanding and modulation of developmental and pathological processes in humans, and in animal and cellular models. Whereas systems biology aims at modeling exhaustive networks of interactions [ 2 ] (with the long-term goal of, for example, creating a comprehensive computational model of the cell), mainly at intra-cellular level, systems biomedicine emphasizes the multilevel, hierarchical nature of the models ( molecule , organelle , cell , tissue , organ , individual/ genotype , environmental factor , population , ecosystem ) by discovering and selecting the key factors at each level and integrating them into models that reveal the global, emergent behavior of the biological process under consideration.
Such an approach will be favorable when the execution of all the experiments necessary to establish exhaustive models is limited by time and expense (e.g., in animal models) or basic ethics (e.g., human experimentation).
In 1992, a paper on systems biomedicine by Kamada T. was published (November–December), and an article on systems medicine and pharmacology by Zeng B.J. was published in the same period (April). [ 3 ] In 2009, the first collective book on systems biomedicine was edited by Edison T. Liu and Douglas A. Lauffenburger. [ 4 ]
In October 2008, one of the first research groups uniquely devoted to systems biomedicine was established at the European Institute of Oncology . [ 5 ] One of the first research centers specialized in systems biomedicine was founded by Rudi Balling . The Luxembourg Centre for Systems Biomedicine is an interdisciplinary center of the University of Luxembourg . The first centre devoted to spatial issues in systems biomedicine was recently established [ 6 ] at Oregon Health and Science University .
The first peer-reviewed journal on this topic, Systems Biomedicine, was recently established by Landes Bioscience . | https://en.wikipedia.org/wiki/Systems_biomedicine |
Systems control , in a communications system , is the control and implementation of a set of functions that:
| https://en.wikipedia.org/wiki/Systems_control
The basic study of system design is the understanding of component parts and their subsequent interaction with one another. [ 1 ]
Systems design has appeared in a variety of fields, including sustainability, [ 2 ] computer/software architecture, [ 3 ] and sociology. [ 4 ]
If the broader topic of product development "blends the perspective of marketing, design, and manufacturing into a single approach to product development," [ 5 ] then design is the act of taking the marketing information and creating the design of the product to be manufactured.
Thus in product development, systems design involves the process of defining and developing systems, such as interfaces and data , for an electronic control system to satisfy specified requirements . Systems design could be seen as the application of systems theory to product development . There is some overlap with the disciplines of systems analysis , systems architecture and systems engineering . [ 6 ] [ 7 ]
The physical design relates to the actual input and output processes of the system. This is explained in terms of how data is input into a system, how it is verified/authenticated, how it is processed, and how it is displayed.
In physical design, the following requirements about the system are decided.
Put another way, the physical portion of system design can generally be broken down into three sub-tasks:
Designing the overall structure of a system focuses on creating a scalable, reliable, and efficient system. For example, services like Google, Twitter, Facebook, Amazon, and Netflix exemplify large-scale distributed systems. Here are key considerations:
Machine learning systems design focuses on building scalable, reliable, and efficient systems that integrate machine learning (ML) models to solve real-world problems. ML systems require careful consideration of data pipelines, model training, and deployment infrastructure. ML systems are often used in applications such as recommendation engines , fraud detection , and natural language processing .
Key components to consider when designing ML systems include:
Designing an ML system involves balancing trade-offs between accuracy, latency, cost, and maintainability, while ensuring system scalability and reliability. The discipline overlaps with MLOps , a set of practices that unifies machine learning development and operations to ensure smooth deployment and lifecycle management of ML systems. | https://en.wikipedia.org/wiki/Systems_design |
Systems ecology is an interdisciplinary field of ecology , a subset of Earth system science , that takes a holistic approach to the study of ecological systems, especially ecosystems . [ 1 ] [ 2 ] [ 3 ] Systems ecology can be seen as an application of general systems theory to ecology. Central to the systems ecology approach is the idea that an ecosystem is a complex system exhibiting emergent properties . Systems ecology focuses on interactions and transactions within and between biological and ecological systems, and is especially concerned with the way the functioning of ecosystems can be influenced by human interventions. It uses and extends concepts from thermodynamics and develops other macroscopic descriptions of complex systems.
Systems ecology seeks a holistic view of the interactions and transactions within and between biological and ecological systems. Systems ecologists realise that the function of any ecosystem can be influenced by human economics in fundamental ways. They have therefore taken an additional transdisciplinary step by including economics in the consideration of ecological-economic systems. In the words of R.L. Kitching : [ 4 ]
As a mode of scientific enquiry, a central feature of Systems Ecology is the general application of the principles of energetics to all systems at any scale. Perhaps the most notable proponent of this view was Howard T. Odum - sometimes considered the father of ecosystems ecology. In this approach the principles of energetics constitute ecosystem principles . Reasoning by formal analogy from one system to another enables the Systems Ecologist to see principles functioning in an analogous manner across system-scale boundaries. H.T. Odum commonly used the Energy Systems Language as a tool for making systems diagrams and flow charts.
The fourth of these principles, the principle of maximum power efficiency , takes central place in the analysis and synthesis of ecological systems. The fourth principle suggests that the most evolutionarily advantageous system function occurs when the environmental load matches the internal resistance of the system. The further the environmental load is from matching the internal resistance, the further the system is away from its sustainable steady state. Therefore, the systems ecologist engages in a task of resistance and impedance matching in ecological engineering , just as the electronic engineer would do.
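The electrical analogy invoked here is the classical maximum power transfer result: for a source of voltage V with internal resistance R_int driving a load R_L, delivered power peaks exactly when the load matches the internal resistance. The brief derivation below is standard circuit theory, given only to make the analogy explicit, not a formula from any ecological model.

```latex
P(R_L) = \frac{V^2 R_L}{(R_{\mathrm{int}} + R_L)^2},
\qquad
\frac{dP}{dR_L} = V^2 \,\frac{R_{\mathrm{int}} - R_L}{(R_{\mathrm{int}} + R_L)^3} = 0
\;\Longrightarrow\;
R_L = R_{\mathrm{int}}.
```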
Earth systems engineering and management (ESEM) is a discipline used to analyze, design, engineer and manage complex environmental systems . It entails a wide range of subject areas including anthropology, engineering, environmental science , ethics and philosophy. At its core, ESEM looks to "rationally design and manage coupled human-natural systems in a highly integrated and ethical fashion."
Ecological economics is a transdisciplinary field of academic research that addresses the dynamic and spatial interdependence between human economies and natural ecosystems . Ecological economics brings together and connects different disciplines, within the natural and social sciences but especially between these broad areas. As the name suggests, the field is made up of researchers with a background in economics and ecology . An important motivation for the emergence of ecological economics has been criticism on the assumptions and approaches of traditional (mainstream) environmental and resource economics .
Ecological energetics is the quantitative study of the flow of energy through ecological systems. It aims to uncover the principles which describe the propensity of such energy flows through the trophic, or 'energy availing' levels of ecological networks. In systems ecology the principles of ecosystem energy flows or "ecosystem laws" (i.e. principles of ecological energetics) are considered formally analogous to the principles of energetics.
Ecological humanities aims to bridge the divides between the sciences and the humanities, and between Western , Eastern and Indigenous ways of knowing nature. Like ecocentric political theory, the ecological humanities are characterised by a connectivity ontology and a commitment to two fundamental axioms relating to the need to submit to ecological laws and to see humanity as part of a larger living system.
Ecosystem ecology is the integrated study of biotic and abiotic components of ecosystems and their interactions within an ecosystem framework. This science examines how ecosystems work and relates this to their components such as chemicals , bedrock , soil , plants , and animals . Ecosystem ecology examines physical and biological structure and examines how these ecosystem characteristics interact.
The relationship between systems ecology and ecosystem ecology is complex. Much of systems ecology can be considered a subset of ecosystem ecology. Ecosystem ecology also utilizes methods that have little to do with the holistic approach of systems ecology. However, systems ecology more actively considers external influences such as economics that usually fall outside the bounds of ecosystem ecology. Whereas ecosystem ecology can be defined as the scientific study of ecosystems, systems ecology is more of a particular approach to the study of ecological systems and phenomena that interact with these systems.
Industrial ecology is the study of shifting industrial processes from linear (open-loop) systems, in which resource and capital investments move through the system to become waste, to closed-loop systems in which wastes become inputs for new processes. | https://en.wikipedia.org/wiki/Systems_ecology
Systems engineering is a field focused on the design, integration, and management of complex systems over their life cycles. It is commonly applied in industries like aerospace, defense, and transportation. Systems engineering may also refer to: | https://en.wikipedia.org/wiki/Systems_engineering_(disambiguation) |
Systems immunology is a research field under systems biology that uses mathematical approaches and computational methods to examine the interactions within cellular and molecular networks of the immune system . [ 1 ] The immune system has been thoroughly analyzed with regard to its components and function by using a " reductionist " approach, but its overall function cannot be easily predicted by studying the characteristics of its isolated components, because they strongly rely on the interactions among these numerous constituents. Systems immunology focuses on in silico experiments rather than in vivo work.
Recent studies in experimental and clinical immunology have led to the development of mathematical models that describe the dynamics of both the innate and adaptive immune system . [ 2 ] Most of the mathematical models were used to examine processes in silico that cannot be performed in vivo . These processes include: the activation of T cells , cancer-immune interactions , migration and death of various immune cells (e.g. T cells , B cells and neutrophils ) and how the immune system will respond to a certain vaccine or drug without carrying out a clinical trial . [ 3 ]
The modelling techniques used in immunology take either a quantitative or a qualitative approach, and both have advantages and disadvantages. Quantitative models predict certain kinetic parameters and the behavior of the system at a certain time point or concentration point. The disadvantage is that they can only be applied to a small number of reactions, and prior knowledge of some kinetic parameters is needed. On the other hand, qualitative models can take into account more reactions, but in return they provide fewer details about the kinetics of the system. What the approaches have in common is that both lose simplicity and become useless when the number of components drastically increases. [ 4 ]
Ordinary differential equations (ODEs) are used to describe the dynamics of biological systems . ODEs are used on a microscopic , mesoscopic and macroscopic scale to examine continuous variables . The equations represent the time evolution of observed variables such as concentrations of proteins , transcription factors or numbers of cell types. They are usually used for modelling immunological synapses , microbial recognition and cell migration . Over the last 10 years, these models have been used to study the sensitivity of the TCR to agonist ligands and the roles of the CD4 and CD8 co-receptors . The kinetic rates in these equations are represented by the binding and dissociation rates of the interacting species. These models are able to present the concentration and steady state of each interacting molecule in the network . ODE models are defined by linear and non-linear equations, where the non-linear ones are used more often because they are easier to simulate on a computer ( in silico ) and to analyse . The limitation of this model is that, for every network, the kinetics of each molecule must be known for the model to be applied. [ 5 ]
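A minimal sketch of the binding/dissociation formulation described above: a single receptor–ligand pair with assumed on/off rates, integrated to steady state and checked against the analytic equilibrium occupancy L/(L + K_d). All rates and concentrations are illustrative assumptions, not measured values.

```python
from scipy.integrate import solve_ivp

# Assumed rates: k_on (1/(concentration*time)) and k_off (1/time).
k_on, k_off = 1.0, 0.1
L = 0.05        # free ligand concentration, held constant for simplicity
R_total = 1.0   # total receptor (free + bound)

def rhs(t, y):
    C = y[0]                  # receptor-ligand complex
    R_free = R_total - C
    return [k_on * L * R_free - k_off * C]

sol = solve_ivp(rhs, (0.0, 200.0), [0.0])
Kd = k_off / k_on
print("simulated occupancy:", sol.y[0, -1] / R_total)
print("analytic occupancy :", L / (L + Kd))  # steady state of the same ODE
```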
An ODE model was used to examine how antigens bind to the B cell receptor . The model was very complex, comprising 1122 equations and six signalling proteins , and was built with the software tool BioNetGen. [ 6 ] The model's outcome agreed with the in vivo experiment. [ 7 ]
The Epstein-Barr virus (EBV) was mathematically modeled with 12 equations to investigate three hypotheses that might explain the higher occurrence of mononucleosis in younger people. After numerical simulations were run, only the first two hypotheses were supported by the model. [ 8 ]
Partial differential equation (PDE) models are an extension of the ODE model that describes the evolution of each variable in both time and space . PDEs are used on a microscopic level to model continuous variables in the pathogen sensing and recognition pathway. They are also applied in physiological modeling [ 9 ] to describe how proteins interact and where their movement is directed within an immunological synapse . The derivatives are partial because they are calculated with respect to time as well as with respect to space . Sometimes a physiological variable, such as cell age in cell division, is used instead of a spatial variable. Because PDE models take the spatial distribution of cells into account, they are computationally more demanding than ODE models. Spatial dynamics are an important aspect of cell signalling , as they describe the motion of cells within a three-dimensional compartment: T cells move around in a three-dimensional lymph node , whereas TCRs are located on the surface of the cell membrane and therefore move within a two-dimensional compartment. [ 10 ] The spatial distribution of proteins is especially important upon T cell stimulation, when an immunological synapse forms; accordingly, this type of model was used in a study where the T cell was activated by a weak agonist peptide . [ 11 ]
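To illustrate the extra spatial dimension, here is a sketch of the simplest PDE of this kind, one-dimensional diffusion of a protein concentration, integrated with an explicit finite-difference scheme; the diffusion coefficient, grid and periodic boundary are all illustrative assumptions:

```python
# Minimal sketch: explicit finite-difference integration of the 1-D diffusion
# equation du/dt = D * d2u/dx2, e.g. for a protein concentration profile.
import numpy as np

D, dx, dt = 1.0, 0.1, 0.004        # assumed diffusion coefficient, grid, step
assert D * dt / dx**2 <= 0.5       # stability condition for the explicit scheme

u = np.zeros(100)
u[45:55] = 1.0                     # initial localized concentration peak

for _ in range(500):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2   # discrete Laplacian
    u = u + dt * D * lap           # forward-Euler step, periodic boundary

print("peak has spread; max concentration:", u.max())
```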
Particle-based stochastic models are built from the dynamics of an ODE model. What distinguishes this type of model from the others is that it treats the components of the model as discrete variables , not continuous ones. The particles are examined on a microscopic level in immune-specific transduction pathways and on a mesoscopic level in interactions between immune cells and cancer . The dynamics of the model are determined by a Markov process, which in this case expresses the probability of each possible state of the system over time in the form of differential equations . These equations are difficult to solve analytically, so computer simulations are performed as kinetic Monte Carlo schemes. The simulation is commonly carried out with the Gillespie algorithm , which uses reaction constants derived from chemical kinetic rate constants to decide which reaction occurs next. Stochastic simulations are more computationally demanding, so the size and scope of such models are limited.
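A sketch of a Gillespie-style simulation for the simplest two-state system, a molecule switching between inactive and active forms; the rate constants and molecule counts are illustrative assumptions, not parameters from the Ras study discussed below:

```python
# Minimal sketch of the Gillespie algorithm for I <-> A state switching.
# Rates and molecule counts are illustrative assumptions.
import math
import random

k_act, k_deact = 0.1, 0.05      # assumed activation / deactivation constants
inactive, active = 100, 0
t, t_end = 0.0, 100.0

while t < t_end:
    a1 = k_act * inactive        # propensity of I -> A
    a2 = k_deact * active        # propensity of A -> I
    a0 = a1 + a2
    if a0 == 0.0:
        break
    t += -math.log(1.0 - random.random()) / a0   # exponential waiting time
    if random.random() * a0 < a1:                # pick which reaction fires
        inactive, active = inactive - 1, active + 1
    else:
        inactive, active = inactive + 1, active - 1

print(f"t = {t:.1f}: active = {active}, inactive = {inactive}")
```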
A stochastic simulation was used to show that the Ras protein , a crucial signalling molecule in T cells , can have an active and an inactive form. It provided insight into a population of lymphocytes that, upon stimulation, had active and inactive subpopulations. [ 12 ]
Co-receptors have an important role in the earliest stages of T cell activation, and a stochastic simulation was used to explain these interactions as well as to model cells migrating in a lymph node . [ 13 ]
This model was used to examine T cell proliferation in the lymphoid system . [ 14 ]
Agent-based modeling (ABM) is a type of modelling in which the components of the system under observation are treated as discrete agents, each representing an individual molecule or cell . The agents can interact with one another and with the environment. ABM has the potential to observe events on a multiscale level and is becoming more popular in other disciplines. It has been used to model the interactions between CD8+ T cells and beta cells in type 1 diabetes [ 15 ] and the rolling and activation of leukocytes . [ 16 ]
Logic models are used to model the life cycles of cells , the immune synapse , pathogen recognition and viral entry on microscopic and mesoscopic levels. Unlike ODE models, logic models do not require details about the kinetics or concentrations of the interacting species. Each biochemical species is represented as a node in the network and can have a finite number of discrete states, usually two, for example: ON/OFF, high/low, active/inactive. Logic models with only two states are usually considered Boolean models. When a molecule is in the OFF state, it means that the molecule is not present at a high enough level to make a change in the system, not that it has zero concentration ; when it is in the ON state, it has reached a high enough level to initiate a reaction. This method was first introduced by Kauffman. The limitation of this kind of model is that it can only provide qualitative approximations of the system and cannot perfectly model concurrent events. [ 17 ]
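As a sketch of how such a Boolean model updates, consider a made-up three-node signalling chain (ligand activates receptor, receptor activates kinase, kinase drives a response, with a phosphatase as inhibitor); the wiring is illustrative, not a published immune network:

```python
# Made-up three-node Boolean network (not a published immune model):
# ligand -> receptor -> kinase -> response, inhibited by a phosphatase.
rules = {
    "receptor": lambda s: s["ligand"],
    "kinase":   lambda s: s["receptor"] and not s["phosphatase"],
    "response": lambda s: s["kinase"],
}

# ligand and phosphatase are fixed inputs; regulated nodes start OFF
state = {"ligand": True, "phosphatase": False,
         "receptor": False, "kinase": False, "response": False}

for step in range(4):
    # synchronous update: every regulated node reads the previous state,
    # because the comprehension is evaluated before state.update runs
    state.update({node: rule(state) for node, rule in rules.items()})
    print(step, state)
```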
This method has been used to explore specific pathways in the immune system, such as affinity maturation and hypermutation in the humoral immune system [ 18 ] and tolerance to pathologic rheumatoid factors. [ 19 ] Simulation tools that support this model are DDlab, [ 20 ] Cell-Devs [ 21 ] and IMMSIM-C . IMMSIM-C is used more often than the others because it does not require knowledge of computer programming. The platform is available as a public web application and is used in undergraduate immunology courses at various universities (Princeton, Genoa, etc.). [ 22 ]
For modelling with statecharts , only Rhapsody has been used so far in systems immunology. It can translate a statechart into executable Java and C++ code.
This method was also used to build a model of influenza virus infection . Some of the results did not accord with earlier research papers: the Boolean network showed that the number of activated macrophages increased in both young and old mice, while other studies suggest a decrease. [ 23 ]
SBML (Systems Biology Markup Language) originally covered only models based on ordinary differential equations , but it was recently extended so that Boolean models can also be represented. Almost all modeling tools are compatible with SBML . There are a few more software packages for Boolean modeling: BoolNet, [ 24 ] GINsim [ 25 ] and Cell Collective. [ 26 ]
To model a system using differential equations , a computer tool has to perform various tasks, such as model construction , calibration , verification , analysis , simulation and visualization . No single software tool satisfies all of these criteria, so multiple tools need to be used. [ 27 ]
GINsim [ 28 ] is a computer tool that generates and simulates genetic networks based on discrete variables . From regulatory graphs and logical parameters, GINsim [ 29 ] calculates the temporal evolution of the system, which is returned as a State Transition Graph (STG) in which states are represented by nodes and transitions by arrows. It was used to examine how T cells respond upon activation of the TCR and TLR5 pathways, observed both separately and in combination. First, molecular maps and logic models for the TCR and TLR5 pathways were built and then merged. The molecular maps were produced in CellDesigner [ 30 ] based on data from the literature and various databases, such as KEGG [ 31 ] and Reactome. [ 32 ] The logical models were generated by GINsim, [ 33 ] where each component has a value of either 0 or 1 (or additional values when modified). Logical rules are then applied to each component, called logical nodes in this network . After merging, the final model consisted of 128 nodes. The results of the modelling were in accordance with the experimental ones, demonstrating that TLR5 is a costimulatory receptor for CD4+ T cells . [ 34 ]
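The State Transition Graph idea can be sketched for a toy two-node Boolean network by enumerating every state and its synchronous successor; the wiring below is a made-up illustration, not the TCR/TLR5 model:

```python
# Toy two-node Boolean network (made-up wiring): A activates B, B represses A.
# Enumerate every state and its synchronous successor -- a State Transition
# Graph like the one GINsim returns, here printed as an edge list.
from itertools import product

def successor(a, b):
    return (int(not b), int(a))   # A' = NOT B, B' = A

stg = {state: successor(*state) for state in product((0, 1), repeat=2)}
for state, nxt in sorted(stg.items()):
    print(f"{state} -> {nxt}")    # e.g. (0, 0) -> (1, 0), forming a cycle
```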
BoolNet [ 35 ] is an R package that contains tools for the reconstruction, analysis and visualization of Boolean networks. [ 36 ]
The Cell Collective [ 37 ] is a scientific platform that enables scientists to build, analyse and simulate biological models without formulating mathematical equations or writing code . It includes a built-in Knowledge Base component that extends the knowledge of individual entities ( proteins , genes , cells , etc.) into dynamical models . The data are qualitative but take into account the dynamical relationships between the interacting species. Models are simulated in real time and everything is done on the web. [ 38 ]
BioNetGen (BNG) is an open-source software package used for rule-based modeling of complex systems such as gene regulation , cell signaling and metabolism . The software uses graphs to represent molecules and their functional domains, and rules to describe the interactions between them. In immunology, it has been used to model the intracellular signalling pathways of the TLR-4 cascade. [ 39 ]
DSAIRM (Dynamical Systems Approach to Immune Response Modeling) is an R package designed for studying infection and immune-response dynamics without prior coding knowledge. [ 40 ]
Other useful applications and learning environments are: Gepasi, [ 41 ] [ 42 ] Copasi, [ 43 ] BioUML , [ 44 ] Simbiology (MATLAB) [ 45 ] and Bio-SPICE. [ 46 ]
The first conference on Synthetic and Systems Immunology was hosted in Ascona by CSF and ETH Zurich. [ 47 ] It took place in early May 2019 and involved over fifty researchers from different scientific fields. The best-presentation award went to Dr. Govinda Sharma, who invented a platform for screening TCR epitopes.
Cold Spring Harbor Laboratory (CSHL), [ 48 ] in New York, hosted a meeting in March 2019 whose focus was to exchange ideas among experimental, computational and mathematical biologists who study the immune system in depth. The topics for the meeting were: modelling and regulatory networks, the future of synthetic and systems biology, and immunoreceptors. [ 49 ] | https://en.wikipedia.org/wiki/Systems_immunology |
A systems integrator (or system integrator ) is a person or company that specializes in bringing together component subsystems into a whole and ensuring that those subsystems function together, [ 1 ] a practice known as system integration . They also solve problems of automation . [ 2 ] Systems integrators may work in many fields but the term is generally used in the information technology (IT) field such as computer networking , the defense industry , the mass media , enterprise application integration , business process management or manual computer programming . [ 3 ] Data quality issues are an important part of the work of systems integrators. [ 4 ]
A system integration engineer needs a broad range of skills and is likely to be defined by a breadth of knowledge rather than a depth of knowledge. These skills are likely to include software , systems and enterprise architecture, software and hardware engineering, interface protocols , and general problem solving skills. It is likely that the problems to be solved have not been solved before except in the broadest sense. They are likely to include new and challenging problems with an input from a broad range of engineers where the system integration engineer "pulls it all together." [ 5 ]
Systems integrators generally have to be good at matching clients' needs with existing products. An inductive reasoning aptitude is useful for quickly understanding how to operate a system or a GUI . A systems integrator tends to benefit from being a generalist who knows a lot about a large number of products. Systems integration includes a substantial amount of diagnostic and troubleshooting work. The ability to research existing products and software components is also helpful. Creating these information systems may include designing or building customized prototypes or concepts.
In the defense industry, the job of 'Systems Integration' engineer is growing in importance as defense systems become more 'connected'. As well as integrating new systems, the task of integrating current systems is attracting a lot of research and effort. It is only in recent years that systems have started to be deployed that can interconnect with each other; most systems were designed as 'stovepipe' designs with no thought to future connectivity.
The current problem is how to harness all the information available, from the various information generators (or sensors) into one complete picture.
As well as the design of the actual interfaces, much effort is being put into presenting the information in a useful manner. The level of information needed by the different levels of the military structure, and the relevance of that information (information can become outdated in seconds), are so variable that it may be necessary to have more than one system connected.
Another problem is how information is networked. The Internet may seem to be an obvious solution, but it is vulnerable to denial of service and physical destruction of the key 'hubs'. One answer is to use a dedicated military communication system, but the bandwidth needed would be astronomical in such a system. [ citation needed ]
Army Warrant Officer (United States) military occupational specialty (MOS) 140A – Command and Control Systems Technician is an example of a systems integrator in the defense industry.
140A Warrant Officers assigned to Brigade Combat Teams (BCT) integrate systems with multiple operating systems (OS) and hardware configurations, including UNIX , Linux and Microsoft Windows . Others can fill a similar role at the Division level and higher.
In entertainment and architectural installations, systems integrators function as a designer/engineer, bringing together a wide array of components from various manufacturers to accomplish the goal of creating a unified, functioning system that meets the needs of the client. Systems integrators are usually involved in the selection of instruments and control components from among various OEMs to determine the specific mix of output, function, interconnection, program storage, controls, and user interfaces required for specific projects. The integrator is generally responsible for generating the control riser, collaborating with the lighting consultant/ AV consultant on the function programming, and will commission the system once installed. Often (but not always) the system integrator will also be the vendor for projects they are commissioning, and will collaborate with the lighting designer on the artistic design. As lighting and A/V systems increase in their level of sophistication, and the number of manufacturers for components of these systems increases, so does the demand for systems integrators.
Common data protocols involved in entertainment and architectural systems are Digital Multiplexing (or DMX512-A) , Remote Device Management (or RDM) , Art-Net , ACN or sACN (Streaming Architecture for Control Networks), Analog, and various proprietary control software from a variety of manufacturers. Systems Integrators design many distributed nodes in traditional star or ring topologies, or customize system layout for specific installations. The network will have a hierarchy of main and sub-control stations with varying degrees of access. The network can be designed such that the controls for this system can be on an established timeline, or controlled in real time by astronomical clocks, audio/motion/IR sensors, or various means of user interface (buttons, touch-pads, consoles). This system might utilize a primary controller that can access the entire system, and satellite control interfaces linked via a network backbone that would determine functionality based on access codes. For example, a casino might use a networked system that interfaces with architectural lighting, stage lighting, special effects (such as fog machines or fountains), and media content routed to a media server. The primary controller would have access to all devices on the network, while individual control stations could have varying levels of functionality. A ballroom might have a multi-button panel that would adjust lighting only in that ballroom, while a cabaret space with a stage would require an access code that would give employees access to the stage lighting, while the IT manager could use the same panel to access the main controls for the networks.
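The varying degrees of access described above can be sketched as a simple access-level table; the station names and zones below are hypothetical illustrations, not any vendor's API:

```python
# Purely illustrative sketch (not any vendor's API) of the access-hierarchy
# idea: each control station may command only a subset of system zones.
SYSTEM_ZONES = {"architectural", "stage", "effects", "media"}

ACCESS_LEVELS = {
    "primary_controller": SYSTEM_ZONES,               # full access to all devices
    "ballroom_panel":     {"architectural"},          # local lighting only
    "cabaret_staff":      {"architectural", "stage"}, # unlocked by access code
    "it_manager":         SYSTEM_ZONES,               # main network controls
}

def can_control(station: str, zone: str) -> bool:
    """Return True if the given station may command the given zone."""
    return zone in ACCESS_LEVELS.get(station, set())

print(can_control("ballroom_panel", "stage"))   # False
print(can_control("it_manager", "effects"))     # True
```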
System Integrators in the automation industry typically provide the product and application experience in implementing complex automation solutions. Often, System Integrators are aligned with automation vendors, joining their various System Integrator programs for access to development products, resources and technical support. System integrators are tightly linked to their accounts and often are viewed as the engineering departments for small manufacturers, handling their automation system installation, commissioning, long term maintenance and security. [ 6 ]
The Systems Software Integrator (SSI) certification is a national program through the National Institute for Certification in Engineering Technologies (NICET). This program focuses on the link between software-driven systems and traditional engineering. This is meant to be a more advanced certification, requiring prior experience in various segments in the software industry or relevant experience in traditional systems. The driver for this program is the need to protect infrastructure and industry from threats through cyberphysical integration best practices.
These threats can come in the form of ineffective operations or from cyberattacks from bad actors. Halliburton, a major oilfield services company, suffered a cyberattack in 2024 that disrupted its internal systems. The breach potentially compromised core systems like Active Directory, forcing the company to shut down certain operations to prevent further damage. Failure to fully integrate and test software in the physical system led to this failure. [ 7 ]
Other aspects that drove the development of the SSI certification include the growing number of cybersecurity requirements, such as the Software Bill of Materials (SBOM), that project engineers and project managers are responsible for once software is integrated into their system. [ 8 ]
The certification exam covers four main areas: quality assurance, program management, systems integration, and risk mitigation. The specific technical areas in the exam include: requirement specifications and development; version control; verification and validation (V&V); document and data management; care and custody control; cybersecurity postures; risk management; software supply chain; and functional safety. [ 9 ] | https://en.wikipedia.org/wiki/Systems_integrator |
Systems management is the enterprise-wide administration of distributed systems , including (and commonly in practice) computer systems . [ citation needed ] Systems management is strongly influenced by network management initiatives in telecommunications . Application performance management (APM) technologies are now a subset of systems management. Maximum productivity can be achieved more efficiently through event correlation, system automation and predictive analysis, all of which are now part of APM. [ 1 ]
Centralized management has a time and effort trade-off that is related to the size of the company, the expertise of the IT staff, and the amount of technology being used:
Systems management may involve one or more of the following tasks:
Functional groups are provided according to International Telecommunication Union Telecommunication Standardization Sector (ITU-T) Common management information protocol (X.700) standard. This framework is also known as Fault, Configuration, Accounting, Performance, Security (FCAPS).
However, this standard should not be treated as comprehensive; there are obvious omissions. Some are recently emerging sectors, some are implied, and some are simply not listed. The primary ones are:
Performance management functions can also be split into end-to-end performance measuring and infrastructure component measuring functions. Another recently emerging sector is operational intelligence (OI) which focuses on real-time monitoring of business events that relate to business processes, not unlike business activity monitoring (BAM).
Schools that offer or have offered degrees in the field of systems management include the University of Southern California , the University of Denver , Capitol Technology University , and Florida Institute of Technology . | https://en.wikipedia.org/wiki/Systems_management |
The systems modeling language ( SysML ) [ 1 ] is a general-purpose modeling language for systems engineering applications. It supports the specification, analysis , design , verification and validation of a broad range of systems and systems-of-systems .
SysML was originally developed by an open source specification project, and includes an open source license for distribution and use. [ 2 ] SysML is defined as an extension of a subset of the Unified Modeling Language (UML) using UML's profile mechanism . The language's extensions were designed to support systems engineering activities.
SysML offers several systems engineering specific improvements over UML , which has been developed as a software modeling language. These improvements include the following:
SysML reuses seven of UML 2's fourteen types of diagrams , [ 4 ] and adds two diagrams (requirement and parametric diagrams) for a total of nine diagram types. SysML also supports allocation tables, a tabular format that can be dynamically derived from SysML allocation relationships. A table comparing SysML and UML 2 diagrams is available in the SysML FAQ.
Consider modeling an automotive system: with SysML one can use Requirement diagrams to efficiently capture functional, performance, and interface requirements, whereas with UML one is subject to the limitations of use case diagrams to define high-level functional requirements. Likewise, with SysML one can use Parametric diagrams to precisely define performance and quantitative constraints like maximum acceleration , minimum curb weight , and total air conditioning capacity. UML provides no straightforward mechanism to capture this sort of essential performance and quantitative information.
Concerning the rest of the automotive system, enhanced activity diagrams and state machine diagrams can be used to specify the embedded software control logic and information flows for the on-board automotive computers. Other SysML structural and behavioral diagrams can be used to model factories that build the automobiles, as well as the interfaces between the organizations that work in the factories.
The SysML initiative originated in a January 2001 decision by the International Council on Systems Engineering (INCOSE) Model Driven Systems Design workgroup to customize the UML for systems engineering applications. Following this decision, INCOSE and the Object Management Group (OMG), which maintains the UML specification, jointly chartered the OMG Systems Engineering Domain Special Interest Group (SE DSIG) in July 2001. The SE DSIG, with support from INCOSE and the ISO AP 233 workgroup, developed the requirements for the modeling language, which were subsequently issued by the OMG as part of the UML for Systems Engineering Request for Proposal (UML for SE RFP; OMG document ad/03-03-41) in March 2003. [ 5 ]
In 2003 David Oliver and Sanford Friedenthal of INCOSE requested that Cris Kobryn , who successfully led the UML 1 and UML 2 language design teams, lead their joint effort to respond to the UML for SE RFP. [ 6 ] As Chair of the SysML Partners, Kobryn coined the language name "SysML" (short for "Systems Modeling Language"), designed the original SysML logo, and organized the SysML Language Design team as an open source specification project. [ 7 ] Friedenthal served as Deputy Chair, and helped organize the original SysML Partners team.
In January 2005, the SysML Partners published the SysML v0.9 draft specification. Later, in August 2005, Friedenthal and several other original SysML Partners left to establish a competing SysML Submission Team (SST). [ 6 ] The SysML Partners released the SysML v1.0 Alpha specification in November 2005.
After a series of competing SysML specification proposals, a SysML Merge Team was proposed to the OMG in April 2006. [ 8 ] This proposal was voted upon and adopted by the OMG in July 2006 as OMG SysML, to differentiate it from the original open source specification from which it was derived. Because OMG SysML is derived from open source SysML, it also includes an open source license for distribution and use.
The OMG SysML v. 1.0 specification was issued by the OMG as an Available Specification in September 2007. [ 9 ] The current version of OMG SysML is v1.6, which was issued by the OMG in December 2019. [ 10 ] In addition, SysML was published by the International Organization for Standardization (ISO) in 2017 as a full International Standard (IS), ISO/IEC 19514:2017 (Information technology -- Object management group systems modeling language). [ 11 ]
The OMG has been working on the next generation of SysML and issued a Request for Proposals (RFP) for version 2 on December 8, 2017, following its open standardization process. [ 12 ] [ 13 ] The resulting specification, which will incorporate language enhancements from experience applying the language, will include a UML profile, a metamodel , and a mapping between the profile and metamodel. [ 12 ] A second RFP for a SysML v2 Application Programming Interface (API) and Services RFP was issued in June 2018. Its aim is to enhance the interoperability of model-based systems engineering tools.
SysML includes nine diagram types, some of which are taken from UML .
There are several modeling tool vendors offering SysML support. Lists of tool vendors who support SysML or OMG SysML can be found on the SysML Forum [ 14 ] or SysML [ 15 ] websites, respectively.
As an OMG UML 2.0 profile , SysML models are designed to be exchanged using the XML Metadata Interchange (XMI) standard. In addition, architectural alignment work is underway to support the ISO 10303 (also known as STEP, the Standard for the Exchange of Product model data) AP-233 standard for exchanging and sharing information between systems engineering software applications and tools. | https://en.wikipedia.org/wiki/Systems_modeling_language |
Systems of Logic Based on Ordinals was the PhD dissertation of the mathematician Alan Turing . [ 1 ] [ 2 ]
Turing's thesis is not about a new type of formal logic , nor was he interested in so-called "ranked logic" systems derived from ordinal or relative numbering, in which comparisons can be made between truth-states on the basis of relative veracity. Instead, Turing investigated the possibility of resolving the Gödelian incompleteness condition using Cantor 's method of infinites.
The thesis is an exploration of formal mathematical systems after Gödel's theorem . Gödel showed that for any formal system S powerful enough to represent arithmetic, there is a sentence G that is true but that the system is unable to prove. G could be added to the system as an additional axiom in place of a proof. However, this would create a new system S' with its own unprovable true sentence G' , and so on. Turing's thesis examines what happens if this process is simply iterated repeatedly, generating an infinite set of new axioms to add to the original theory, and even goes one step further, using transfinite recursion to go "past infinity" and yielding a set of new theories G α , one for each ordinal number α .
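Schematically, in standard modern notation rather than Turing's original, the transfinite iteration can be written as:

```latex
% Transfinite iteration of Goedel extensions (schematic presentation):
% at successor stages, adjoin the Goedel sentence of the previous theory;
% at limit stages, take the union of everything built so far.
\begin{align*}
  S_0          &= S, \\
  S_{\alpha+1} &= S_\alpha \cup \{\, G_{S_\alpha} \,\}, \\
  S_\lambda    &= \bigcup_{\alpha < \lambda} S_\alpha
                  \quad \text{for limit ordinals } \lambda .
\end{align*}
```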
The thesis was completed at Princeton under Alonzo Church and was a classic work in mathematics that introduced the concept of ordinal logic . [ 3 ]
Martin Davis states that although Turing's use of a computing oracle is not a major focus of the dissertation, it has proven to be highly influential in theoretical computer science , e.g. in the polynomial-time hierarchy . [ 4 ] | https://en.wikipedia.org/wiki/Systems_of_Logic_Based_on_Ordinals |