Stripe 82 is a 300 deg² equatorial field of sky that was imaged multiple times by the Sloan Digital Sky Survey from 2000 to 2008. [1] It approximately covers the region with right ascension from 20:00h to 4:00h and declination from −1.26° to +1.26°. Stripe 82 has also been observed using many other telescopes and instruments, a list of which is given below.
https://en.wikipedia.org/wiki/Stripe_82
Stripping is a physical separation process where one or more components are removed from a liquid stream by a vapor stream. [ 1 ] In industrial applications the liquid and vapor streams can have co-current or countercurrent flows. Stripping is usually carried out in either a packed or trayed column. [ 2 ] Stripping works on the basis of mass transfer . The idea is to make the conditions favorable for the component, A, in the liquid phase to transfer to the vapor phase. This involves a gas–liquid interface that A must cross. The total amount of A that has moved across this boundary can be defined as the flux of A, N A . Stripping is mainly conducted in trayed towers ( plate columns ) and packed columns , and less often in spray towers , bubble columns , and centrifugal contactors . [ 2 ] Trayed towers consist of a vertical column with liquid flowing in the top and out the bottom. The vapor phase enters in the bottom of the column and exits out of the top. Inside of the column are trays or plates. These trays force the liquid to flow back and forth horizontally while the vapor bubbles up through holes in the trays. The purpose of these trays is to increase the amount of contact area between the liquid and vapor phases. Packed columns are similar to trayed columns in that the liquid and vapor flows enter and exit in the same manner. The difference is that in packed towers there are no trays. Instead, packing is used to increase the contact area between the liquid and vapor phases. There are many different types of packing used and each one has advantages and disadvantages. The variables and design considerations for strippers are many. Among them are the entering conditions, the degree of recovery of the solute needed, the choice of the stripping agent and its flow, the operating conditions, the number of stages, the heat effects, and the type and size of the equipment. [ 2 ] The degree of recovery is often determined by environmental regulations, such as for volatile organic compounds like chloroform . Frequently, steam , air, inert gases , and hydrocarbon gases are used as stripping agents. This is based on solubility , stability, degree of corrosiveness , cost, and availability. As stripping agents are gases, operation at nearly the highest temperature and lowest pressure that will maintain the components and not vaporize the liquid feed stream is desired. This allows for the minimization of flow. As with all other variables, minimizing cost while achieving efficient separation is the ultimate goal. [ 2 ] The size of the equipment, and particularly the height and diameter, is important in determining the possibility of flow channeling that would reduce the contact area between the liquid and vapor streams. If flow channeling is suspected to be occurring, a redistribution plate is often necessary to, as the name indicates, redistribute the liquid flow evenly to reestablish a higher contact area. As mentioned previously, strippers can be trayed or packed. Packed columns, and particularly when random packing is used, are usually favored for smaller columns with a diameter less than 2 feet and a packed height of not more than 20 feet. Packed columns can also be advantageous for corrosive fluids, high foaming fluids, when fluid velocity is high, and when particularly low pressure drop is desired. Trayed strippers are advantageous because of ease of design and scale up. Structured packing can be used similar to trays despite possibly being the same material as dumped (random) packing. 
Using structured packing is a common method to increase the capacity for separation or to replace damaged trays. [ 2 ] Trayed strippers can have sieve, valve, or bubble cap trays while packed strippers can have either structured packing or random packing. [ 2 ] Trays and packing are used to increase the contact area over which mass transfer can occur as mass transfer theory dictates. Packing can have varying material, surface area, flow area, and associated pressure drop. Older generation packing include ceramic Raschig rings and Berl saddles . More common packing materials are metal and plastic Pall rings , metal and plastic Zbigniew Białecki rings , [ 3 ] and ceramic Intalox saddles . Each packing material of this newer generation improves the surface area, the flow area, and/or the associated pressure drop across the packing. Also important, is the ability of the packing material to not stack on top of itself. If such stacking occurs, it drastically reduces the surface area of the material. Lattice design work has been increasing of late that will further improve these characteristics. [ 2 ] During operation, monitoring the pressure drop across the column can help to determine the performance of the stripper. A changed pressure drop over a significant range of time can be an indication that the packing may need to be replaced or cleaned. Stripping is commonly used in industrial applications to remove harmful contaminants from waste streams. One example would be the removal of TBT and PAH contaminants from harbor soils. [ 4 ] The soils are dredged from the bottom of contaminated harbors, mixed with water to make a slurry and then stripped with steam. The cleaned soil and contaminant rich steam mixture are then separated. This process is able to decontaminate soils almost completely. Steam is also frequently used as a stripping agent for water treatment . Volatile organic compounds are partially soluble in water and because of environmental considerations and regulations, must be removed from groundwater , surface water , and wastewater . [ 5 ] These compounds can be present because of industrial, agricultural, and commercial activity.
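To make the staged-separation ideas above concrete, here is a minimal sketch (not from the article) of how the stripping factor and the number of equilibrium stages set the degree of solute recovery, using the standard Kremser relation and assuming ideal countercurrent stages, a constant equilibrium ratio, and a solute-free stripping gas; the function and variable names are illustrative.

```python
def fraction_remaining(K, V, L, stages):
    """Kremser estimate of the solute fraction left in the liquid leaving an
    ideal countercurrent stripper fed with solute-free stripping gas.
    K: equilibrium ratio (y* = K x), V: gas molar flow, L: liquid molar flow,
    stages: number of equilibrium stages."""
    S = K * V / L  # stripping factor
    if abs(S - 1.0) < 1e-12:
        return 1.0 / (stages + 1)          # limiting case S = 1
    return (S - 1.0) / (S ** (stages + 1) - 1.0)

# Example: a stripping factor of 1.4 with 8 equilibrium stages removes ~98%
# of the solute, illustrating why recovery targets drive the stage count and
# the stripping-agent flow.
print(1 - fraction_remaining(K=2.8, V=1.0, L=2.0, stages=8))
```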
https://en.wikipedia.org/wiki/Stripping_(chemistry)
In chemistry , a stripping reaction is a chemical process, studied in a molecular beam , in which the reaction products are scattered forward with respect to the moving centre of mass of the system. [1]
https://en.wikipedia.org/wiki/Stripping_reaction_(chemistry)
In nuclear physics , a stripping reaction is a nuclear reaction in which part of the incident nucleus combines with the target nucleus, and the remainder proceeds with most of its original momentum in almost its original direction. This reaction was first described by Stuart Thomas Butler in 1950. [1] [2] A simple one-step stripping reaction can be represented as A + (b + x) → (A + x) + b, where A represents the target core, b represents the projectile core, and x is the transferred mass, which may represent any number of particles. Deuteron stripping reactions (the Oppenheimer–Phillips process ), in which the incident nucleus is a deuteron and only a proton emerges from the target nucleus, have been used extensively to study nuclear reactions and nuclear structure.
https://en.wikipedia.org/wiki/Stripping_reaction_(physics)
A stroboscope, also known as a strobe , is an instrument used to make a cyclically moving object appear to be slow-moving, or stationary. It consists of either a rotating disk with slots or holes or a lamp such as a flashtube which produces brief repetitive flashes of light. Usually, the rate of the stroboscope is adjustable to different frequencies. When a rotating or vibrating object is observed with the stroboscope at its vibration frequency (or a submultiple of it), it appears stationary. Thus stroboscopes are also used to measure frequency. The principle is used for the study of rotating , reciprocating , oscillating or vibrating objects. Machine parts and vibrating string are common examples. A stroboscope used to set the ignition timing of internal combustion engines is called a timing light . In its simplest mechanical form, a stroboscope can be a rotating cylinder (or bowl with a raised edge) with evenly spaced holes or slots placed in the line of sight between the observer and the moving object. The observer looks through the holes/slots on the near and far side at the same time, with the slots/holes moving in opposite directions. When the holes/slots are aligned on opposite sides, the object is visible to the observer. Alternately, a single moving hole or slot can be used with a fixed/stationary hole or slot. The stationary hole or slot limits the light to a single viewing path and reduces glare from light passing through other parts of the moving hole/slot. Viewing through a single line of holes/slots does not work, since the holes/slots appear to just sweep across the object without a strobe effect. The rotational speed is adjusted so that it becomes synchronised with the movement of the observed system, which seems to slow and stop. The illusion is caused by temporal aliasing , commonly known as the stroboscopic effect . In electronic versions, the perforated disc is replaced by a lamp capable of emitting brief and rapid flashes of light. Typically a gas-discharge or solid-state lamp is used, because they are capable of emitting light nearly instantly when power is applied, and extinguishing just as fast when the power is removed. By comparison, incandescent lamps have a brief warm-up when energized, followed by a cool-down period when power is removed. These delays result in smearing and blurring of detail of objects partially illuminated during the warm-up and cool-down periods. For most applications, incandescent lamps are too slow for clear stroboscopic effects. Yet when operated from an AC source they are mostly fast enough to cause audible hum (at double mains frequency) on optical audio playback such as on film projection. The frequency of the flash is adjusted so that it is an equal to, or a unit fraction of the object's cyclic speed, at which point the object is seen to be either stationary or moving slowly backward or forward, depending on the flash frequency. Neon lamps or light-emitting diodes are commonly used for low-intensity strobe applications. Neon lamps were more common before the development of solid-state electronics, but are being replaced by LEDs in most low-intensity strobe applications. Xenon flash lamps are used for medium- and high-intensity strobe applications. Sufficiently rapid or bright flashing may require active cooling such as forced-air or water cooling to prevent the xenon flash lamp from melting. 
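The slowed or frozen motion described above is simply temporal aliasing, which can be illustrated with a short sketch (not from the article); the function name and the choice of units are arbitrary.

```python
def apparent_rate(object_hz, flash_hz):
    """Apparent rotation rate (revolutions per second) of an object spinning
    at object_hz when lit by a strobe flashing flash_hz times per second.
    Only the fractional advance between flashes is perceived (temporal
    aliasing), wrapped to the nearest half revolution."""
    advance = object_hz / flash_hz            # revolutions per flash interval
    fractional = (advance + 0.5) % 1.0 - 0.5  # wrap into [-0.5, 0.5)
    return fractional * flash_hz

print(apparent_rate(50.0, 50.0))  #  0.0  -> appears stationary
print(apparent_rate(50.0, 25.0))  #  0.0  -> flash at a unit fraction, also frozen
print(apparent_rate(50.0, 49.0))  # ~+1.0 -> slow forward drift
print(apparent_rate(50.0, 51.0))  # ~-1.0 -> slow backward drift
```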
Joseph Plateau of Belgium is generally credited with the invention of the stroboscope in 1832, when he used a disc with radial slits which he turned while viewing images on a separate rotating wheel. Plateau's device became known as the " Phenakistoscope ". There was an almost simultaneous and independent invention of the device by the Austrian Simon Ritter von Stampfer , which he named the "Stroboscope", and it is his term which is used today. The etymology is from the Greek words στρόβος - strobos , meaning "whirlpool" and σκοπεῖν - skopein , meaning "to look at". As well as having important applications for scientific research, the earliest inventions received immediate popular success as methods for producing moving pictures , and the principle was used for numerous toys. Other early pioneers employed rotating mirrors, or vibrating mirrors known as mirror galvanometers . In 1917, French engineer Etienne Oehmichen patented the first electric stroboscope, [ 1 ] building at the same time a camera capable of shooting 1,000 frames per second. Harold Eugene Edgerton ("Doc" Edgerton) employed a flashing lamp to study machine parts in motion. [ 2 ] General Radio Corporation then went on to produce this device in the form of their "Strobotac", an early example of a commercially successful stroboscope. [ 3 ] Edgerton later used very short flashes of light as a means of producing still photographs of fast-moving objects, such as bullets in flight. Stroboscopes play an important role in the study of stresses on machinery in motion, and in many other forms of research. Bright stroboscopes are able to overpower ambient lighting and make stop-motion effects apparent without the need for dark ambient operating conditions. They are also used as measuring instruments for determining cyclic speed. As a timing light they are used to set the ignition timing of internal combustion engines . In medicine, stroboscopes are used to view the vocal cords for the diagnosis of conditions that have produced dysphonia (hoarseness). The patient hums or speaks into a microphone which in turn activates the stroboscope at either the same or a slightly different frequency. The light source and a camera are positioned by endoscopy . Another application of the stroboscope can be seen on many gramophone turntables. The edge of the platter has marks at specific intervals so that when viewed under fluorescent lighting powered at mains frequency , provided the platter is rotating at the correct speed, the marks appear to be stationary. This will not work well under incandescent lighting , as incandescent bulbs do not significantly strobe. For this reason, some turntables have a neon bulb or LED next to the platter. The LED must be driven by a half wave rectifier from the mains transformer, or by an oscillator. Flashing lamp strobes have also been adapted as a lighting effect for discotheques and night clubs where they give the impression of dancing in slow motion. The strobe rate of these devices is typically not very precise or very fast, because entertainment applications do not usually require a high degree of performance. Rapid flashing of the stroboscopic light can give the illusion that white light is tinged with color, known as Fechner color . Within certain ranges, the apparent color can be controlled by the frequency of the flash. Effective stimuli frequencies go from 3 Hz upwards, with optimal frequencies of about 4–6 Hz. The colours are an illusion generated in the mind of the observer and not a real color. 
The Benham's top demonstrates the effect. [ 4 ] [ 5 ] [ 6 ] [ 7 ]
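As a small worked example (not from the article) of the turntable strobe markings mentioned above: a gas-discharge lamp or mains-driven LED flashes twice per mains cycle, so the number of equally spaced marks that appear frozen at the correct speed follows directly from the flash rate and the platter speed. The numbers below assume 50 Hz mains and a 33⅓ rpm platter.

```python
def strobe_marks(rpm, mains_hz, flashes_per_cycle=2):
    """Number of equally spaced platter marks that appear stationary when the
    platter advances exactly one mark spacing between flashes."""
    flashes_per_minute = mains_hz * flashes_per_cycle * 60
    return flashes_per_minute / rpm  # must be an integer for an exact lock

print(strobe_marks(rpm=100 / 3, mains_hz=50))  # 180 marks for 33 1/3 rpm at 50 Hz
```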
https://en.wikipedia.org/wiki/Stroboscope
Stroke ratio , today universally defined as bore/stroke ratio , is a term to describe the ratio between cylinder bore diameter and piston stroke length in a reciprocating piston engine . This can be used for either an internal combustion engine , where the fuel is burned within the cylinders of the engine, or external combustion engine , such as a steam engine , where the combustion of the fuel takes place outside the working cylinders of the engine. The contemporary convention for describing the stroke ratio of a piston engine ‘s cylinders is its bore/stroke ratio. [ 1 ] Stroke/bore ratio is an obsolete expression dating to the early era of reciprocating engine development. [ 2 ] The diameter of the cylinder bore is divided by the length of the piston stroke to give the ratio. The following terms describe the naming conventions for the configurations of the various bore/stroke ratio: A square engine has equal bore and stroke dimensions, giving a bore/stroke value of exactly 1:1. 1953 – Ferrari 250 Europa had Lampredi V12 with 68.0 mm × 68.0 mm (2.7 in × 2.7 in) bore and stroke. 1967 – FIAT 125, 124Sport engine 125A000, 125B000, 125BC000, 1608 ccm, DOHC, 80.0 mm × 80.0 mm (3.15 in × 3.15 in) bore and stroke. 1970 – Ford 400 had a 101.6 mm × 101.6 mm (4.00 in × 4.00 in) bore and stroke. 1973 – Kawasaki Z1 and KZ(Z)900 had a 66.0 mm × 66.0 mm (2.60 in × 2.60 in) bore and stroke. [ 3 ] 1982 - Honda Nighthawk 250 and Honda CMX250C Rebel have a 53.0 mm × 53.0 mm (2.09 in × 2.09 in) bore and stroke. [ 4 ] 1983 – Mazda FE 2.0L inline four-cylinder engine with a 86.0 mm × 86.0 mm (3.4 in × 3.4 in) bore and stroke. 1987 – The Opel/Vauxhall 2.0 L GM Family II engines are square at 86.0 mm × 86.0 mm (3.39 in × 3.39 in) bore and stroke; example as C20XE C20NE C20LET X20A X20XEV X20XER Z20LET Z20LEH Z20LER A20NHT A20NFT. 1989 – Nissan's SR20DE is a square engine, with an 86.0 mm × 86.0 mm (3.39 in × 3.39 in) bore and stroke. 1990–2010 Saab B234/B235 is a square engine, with a 90.0 mm × 90.0 mm (3.54 in × 3.54 in) bore and stroke. 1991 – Ford's 4.6 V8 OHC engine has a 90.2 mm × 90.0 mm (3.552 in × 3.543 in) bore and stroke. 1995 – The BMW M52 engine with a displacement of 2793 cubic centimeters is an example of a perfect square engine with an 84.0 mm × 84.0 mm (3.31 in × 3.31 in) bore and stroke. 1996 – Jaguar's AJ-V8 engine in 4.0-litre form has an 86.0 mm bore and stroke. 2000 – Mercedes-Benz 4.0-litre (3996 cc; 243.9 cu in) OM628 V8 diesel engine is an example of a square engine – with an 86.0 mm × 86.0 mm (3.39 in × 3.39 in) bore and stroke. An engine is described as oversquare or short-stroke if its cylinders have a greater bore diameter than its stroke length, giving a bore/stroke ratio greater than 1:1. An oversquare engine allows for more and larger valves in the head of the cylinder, higher possible rpm by lowering maximum piston speed, and lower crank stress due to the lower peak piston acceleration for the same engine (rotational) speed. Because these characteristics favor higher engine speeds, oversquare engines are often tuned to develop peak torque at a relatively high speed. Due to the increased piston and head surface area, the heat loss increases as the bore/stroke ratio is increased. Thus an excessively high ratio can lead to a decreased thermal efficiency compared to other engine geometries. The large size/width of the combustion chamber at ignition can cause increased inhomogeneity in the air/fuel mixture during combustion, resulting in higher emissions. 
The reduced stroke length allows for a shorter cylinder and sometimes a shorter connecting rod, generally making oversquare engines less tall but wider than undersquare engines of similar engine displacement . Oversquare engines (a.k.a. "short stroke engines") are very common, as they allow higher rpm (and thus more power), without excessive piston speed. Examples include both Chevrolet and Ford small-block V8s; the GMC 478 V6 has a bore/stroke ratio of 1.33. The 1.6 litre version of the BMW N45 gasoline engine has a bore/stroke ratio of 1.167. Horizontally opposed, also known as "Boxer" or "flat", engines typically feature oversquare designs since any increase in stroke length would result in twice the increase in overall engine width. This is particularly so in Subaru ’s front-engine layout, where the steering angle of the front wheels is constrained by the width of the engine. The Subaru EJ181 engine develops peak torque at speeds as low as 3200 rpm. Nissan's RB, VQ, VK, VH and VR38DETT engines are all oversquare. Additionally, SR16VE engine found in Nissan Pulsar VZ-R and VZ-R N1 is an oversquare engine with 86 millimetres (3.39 in) bore and 68.7 millimetres (2.70 in) stroke, giving it 175–200 horsepower (130–150 kW) but relatively small torque of 119–134 pound-feet (161–182 N⋅m; 16.5–18.5 kg⋅m) Extreme oversquare engines are found in Formula One racing cars, where strict rules limit displacement, thereby necessitating that power be achieved through high engine speeds. Stroke ratios approaching 2.5:1 are allowed, [ a ] enabling engine speeds of 18,000 rpm while remaining reliable for multiple races. [ 5 ] The Ducati Panigale motorcycle engine is extremely oversquare with a bore/stroke ratio of 1.84:1. It was given the name "SuperQuadro" by Ducati , roughly translated as "super-square" from Italian. [ 6 ] The side-valve Belgian D-Motor LF26 aero-engine has a bore/stroke ratio of 1.4:1. [ 7 ] Early Mercedes-Benz M116 engines had a 92 millimetres (3.62 in) bore and a 65.6 millimetres (2.58 in) stroke for a 3.5 litre V8. [ 8 ] An engine is described as undersquare or long-stroke if its cylinders have a smaller bore (width, diameter) than its stroke (length of piston travel) - giving a ratio value of less than 1:1. At a given engine speed, a longer stroke increases engine friction and increases stress on the crankshaft due to the higher peak piston acceleration. The smaller bore also reduces the area available for valves in the cylinder head, requiring them to be smaller or fewer in number. Undersquare engines often exhibit peak torque at lower rpm than an oversquare engine due to their smaller valves and high piston speed limiting their potential to rev higher. Undersquare engines have become more common lately, as manufacturers push for more and more efficient engines and higher fuel economy. [ clarify ] [ 9 ] Many inline engines, particularly those mounted transversely in front-wheel-drive cars, utilize an undersquare design. The smaller bore allows for a shorter engine that increases room available for the front wheels to steer. Examples of this include many Volkswagen , Nissan , Honda , and Mazda engines. The 1KR-FE -engine used in the Toyota Aygo , Citroën C1 and Peugeot 107 amongst others is an example of a modern long-stroke engine widely used in FF layout cars. This engine has a bore and stroke of 71 mm × 84 mm (2.8 in × 3.3 in) stroke giving it a bore/stroke ratio of 0.845:1. 
Some rear-wheel-drive cars that borrow engines from front-wheel-drive cars (such as the Mazda MX-5 ) use an undersquare design. BMW's acclaimed S54B32 M54 engine was undersquare with a bore and stroke of 87 mm × 91 mm (3.4 in × 3.6 in)), offering a world record torque-per-litre figure (114 N⋅m/L, 1.38 lb⋅ft/cu in) for normally-aspirated production engines at the time; this record stood until Ferrari unveiled the 458 Italia . Many British automobile companies used undersquare designs until the 1950s, largely because of a motor tax system that taxed cars by their cylinder bore . This includes the BMC A-Series engine , and many Nissan derivatives. The Trojan Car used an undersquare, split piston , two stroke , two-cylinder inline engine; this was partly for this tax advantage and partly because its proportions allowed flexing V-shaped connecting rods for the two pistons of each U-shaped cylinder, which was cheaper and simpler than two connecting rods joined with an additional bearing. Their French and German competitors at the time also used undersquare designs even in absence of the tax reasoning, e. g. Renault Billancourt engine and Opel straight-6 engine . The 225 cu in (3.7 litre ) Chrysler Slant-6 engine is undersquare, with a bore and stroke of 86 mm × 105 mm (3.4 in × 4.1 in) stroke (bore/stroke ratio = 0.819:1). The Ford 5.4L Modular Engine features a bore and stroke of 90.1 mm × 105.8 mm (3.55 in × 4.17 in), which makes a bore/stroke ratio of 0.852:1. Since the stroke is significantly longer than the bore, the SOHC 16V (2-valve per cylinder) version of this engine is able to generate a peak torque of 350 lb·ft as low as 2501 rpm. The Willys Jeep L134 and F134 engines were undersquare, with a bore and stroke of 79.4 mm × 111.1 mm (3.13 in × 4.37 in) stroke (bore/stroke ratio = 0.714:1). The Dodge Power Wagon used a straight-six Chrysler Flathead engine of 230 cu in (3.8 L) with a bore and stroke of 83 mm × 117 mm (3.3 in × 4.6 in), yielding a substantially undersquare bore/stroke ratio of 0.709:1. The 4-litre Barra Inline 6 and Intech engines from the Australian Ford Falcon , uses a bore and stroke of 92.21 mm × 99.31 mm (3.63 in × 3.91 in) stroke, which equates to a 0.929:1 bore-stroke ratio. The 292 Chevrolet I6 is also undersquare, with a bore and stroke of 98.4 mm × 104.8 mm (3.875 in × 4.125 in) in (bore/stroke ratio = 0.939:1). Mitsubishi's 4G63T engine found primarily in many generations of Mitsubishi Lancer Evolution is an undersquare engine with a bore and stroke of 85 mm × 88 mm (3.3 in × 3.5 in). The Jaguar XK6 engine , used in all 6-cylinder Jaguars from 1949 to 1987 was undersquare. For example, the 4.2 litre engine had a bore and stroke of 92.08 mm × 106 mm (3.63 in × 4.17 in), providing a bore/stroke ratio of 0.869:1. Virtually all piston engines used in military aircraft were long-stroke engines. The PW R-2800, Wright R-3350, Pratt & Whitney R-4360 Wasp Major , Rolls-Royce Merlin (1650), Allison V-1710, and Hispano-Suiza 12Y-Z are only a few of more than a hundred examples. All diesel-powered ships have massively undersquare marine engines, usually using crossheads . A Wärtsilä two-stroke marine diesel engine has a bore and stroke of 960 mm × 2,500 mm (37.8 in × 98.4 in), (bore/stroke ratio = 0.384:1). While most modern motorcycle engines are square or oversquare, some are undersquare. 
The Kawasaki Z1300 's straight-six engine was made undersquare to minimise engine width. More recently, a new straight-twin engine for the Honda NC700 series used an undersquare design to achieve better combustion efficiency in order to reduce fuel consumption. [10] [11]
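The bore/stroke figures quoted throughout this article follow from two simple formulas, shown in the sketch below (not from the article); the engine figures used are the BMW M52 and Chrysler Slant-6 values listed above.

```python
import math

def bore_stroke_ratio(bore_mm, stroke_mm):
    """Bore/stroke ratio: >1 oversquare, =1 square, <1 undersquare."""
    return bore_mm / stroke_mm

def displacement_cc(bore_mm, stroke_mm, cylinders):
    """Swept volume in cc: (pi/4) * bore^2 * stroke per cylinder."""
    return math.pi / 4 * (bore_mm / 10) ** 2 * (stroke_mm / 10) * cylinders

# BMW M52: square 84.0 mm x 84.0 mm, six cylinders -> ratio 1.0, ~2793 cc
print(bore_stroke_ratio(84.0, 84.0), round(displacement_cc(84.0, 84.0, 6)))

# Chrysler Slant-6: 86 mm bore, 105 mm stroke -> ratio ~0.819 (undersquare)
print(round(bore_stroke_ratio(86.0, 105.0), 3))
```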
https://en.wikipedia.org/wiki/Stroke_ratio
Stromagen is a product that is made of stem cells taken from a patient's bone marrow and grown in the laboratory. After a patient's bone marrow is destroyed by treatment with whole body irradiation or chemotherapy , these cells are injected back into the patient to help rebuild bone marrow. Stromagen has been studied in the prevention of graft-versus-host disease during stem cell transplant in patients receiving treatment for cancer . Stromagen is used in cellular therapy; it is also called autologous expanded mesenchymal stem cells OTI-010. [1] [2] Peripheral stem cell transplantation may allow doctors to give higher doses of chemotherapy and kill more tumor cells. It is not yet known whether Stromagen improves the success of stem cell transplantation in women with breast cancer. [3]
https://en.wikipedia.org/wiki/Stromagen
The strong CP problem is a question in particle physics , which brings up the following quandary: why does quantum chromodynamics (QCD) seem to preserve CP-symmetry ? In particle physics, CP stands for the combination of C-symmetry (charge conjugation symmetry) and P-symmetry (parity symmetry). According to the current mathematical formulation of quantum chromodynamics, a violation of CP-symmetry in strong interactions could occur. However, no violation of CP-symmetry has ever been seen in any experiment involving only the strong interaction. As there is no known reason in QCD for it to necessarily be conserved, this is a " fine tuning " problem known as the strong CP problem . The strong CP problem is sometimes regarded as an unsolved problem in physics , and has been referred to as "the most underrated puzzle in all of physics." [1] [2] There are several proposed solutions to the strong CP problem. The most well-known is Peccei–Quinn theory , [3] involving new pseudoscalar particles called axions . CP-symmetry states that physics should be unchanged if particles were swapped with their antiparticles and then left-handed and right-handed particles were also interchanged. This corresponds to performing a charge conjugation transformation and then a parity transformation. The symmetry is known to be broken in the Standard Model through weak interactions , but it is also expected to be broken through strong interactions, which govern quantum chromodynamics (QCD), something that has not yet been observed. To illustrate how the CP violation can come about in QCD, consider a Yang–Mills theory with a single massive quark . [4] The most general mass term possible for the quark is a complex mass written as $m e^{i\theta' \gamma_5}$ for some arbitrary phase $\theta'$. In that case the Lagrangian describing the theory consists of four terms: $\mathcal{L} = -\tfrac{1}{4}F_{\mu\nu}F^{\mu\nu} + \tfrac{\theta g^2}{32\pi^2}F_{\mu\nu}\tilde{F}^{\mu\nu} + \bar{\psi}\, i\gamma^\mu D_\mu \psi - m\,\bar{\psi}\, e^{i\theta' \gamma_5}\psi$. The first and third terms are the CP-symmetric kinetic terms of the gauge and quark fields. The fourth term is the quark mass term, which is CP violating for non-zero phases $\theta' \neq 0$, while the second term is the so-called θ-term or "vacuum angle", which also violates CP-symmetry. Quark fields can always be redefined by performing a chiral transformation by some angle $\alpha$ as $\psi \rightarrow e^{-i\alpha\gamma_5/2}\psi$, which changes the complex mass phase by $\theta' \rightarrow \theta' - \alpha$ while leaving the kinetic terms unchanged. The transformation also changes the θ-term as $\theta \rightarrow \theta + \alpha$ due to a change in the path integral measure, an effect closely connected to the chiral anomaly . The theory would be CP invariant if one could eliminate both sources of CP violation through such a field redefinition. But this cannot be done unless $\theta = -\theta'$. This is because even under such field redefinitions, the combination $\theta' + \theta \rightarrow (\theta' - \alpha) + (\theta + \alpha) = \theta' + \theta$ remains unchanged. For example, the CP violation due to the mass term can be eliminated by picking $\alpha = \theta'$, but then all the CP violation goes to the θ-term, which is now proportional to $\bar{\theta} = \theta + \theta'$. If instead the θ-term is eliminated through a chiral transformation, then there will be a CP violating complex mass with a phase $\bar{\theta}$.
Practically, it is usually useful to put all the CP violation into the θ-term and thus only deal with real masses. In the Standard Model, where one deals with six quarks whose masses are described by the Yukawa matrices $Y_u$ and $Y_d$, the physical CP violating angle is $\bar{\theta} = \theta - \arg \det(Y_u Y_d)$. Since the θ-term has no contributions to perturbation theory, all effects from strong CP violation are entirely non-perturbative. Notably, it gives rise to a neutron electric dipole moment. [5] Current experiments give an upper bound on the dipole moment of $d_N < 10^{-26}\ e\cdot$cm, [6] which requires $\bar{\theta} < 10^{-10}$. The angle $\bar{\theta}$ can take any value between zero and $2\pi$, so its taking on such a particularly small value is a fine-tuning problem called the strong CP problem. The strong CP problem is solved automatically if one of the quarks is massless. [7] In that case one can perform a set of chiral transformations on all the massive quark fields to get rid of their complex mass phases and then perform another chiral transformation on the massless quark field to eliminate the residual θ-term without also introducing a complex mass term for that field. This then gets rid of all CP violating terms in the theory. The problem with this solution is that all quarks are known to be massive from experimental matching with lattice calculations . Even if one of the quarks were essentially massless, this would in itself just be another fine-tuning problem, since there is nothing requiring a quark mass to take on such a small value. The most popular solution to the problem is the Peccei–Quinn mechanism. [8] This introduces a new global anomalous symmetry which is then spontaneously broken at low energies, giving rise to a pseudo-Goldstone boson called an axion. The axion ground state dynamically forces the theory to be CP-symmetric by setting $\bar{\theta} = 0$. Axions are also considered viable candidates for dark matter , and axion-like particles are also predicted by string theory . Other, less popular proposed solutions exist, such as Nelson–Barr models. [9] [10] These set $\bar{\theta} = 0$ at some high energy scale where CP-symmetry is exact, but the symmetry is then spontaneously broken. The Nelson–Barr mechanism is a way of explaining why $\bar{\theta}$ remains small at low energies while the CP breaking phase in the CKM matrix is large.
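As a rough numerical illustration (not from the article) of why the dipole-moment bound translates into $\bar{\theta} < 10^{-10}$: assuming the order-of-magnitude estimate $d_N \sim 10^{-16}\,\bar{\theta}\ e\cdot$cm that is commonly quoted in the literature (the exact coefficient is model dependent), dividing the experimental bound by that coefficient reproduces the quoted limit.

```python
# Order-of-magnitude estimate only; the coefficient relating d_N to theta_bar
# is model dependent and assumed here to be ~1e-16 e.cm.
dn_upper_bound_ecm = 1e-26   # experimental bound on the neutron EDM (e.cm)
assumed_coefficient = 1e-16  # assumed d_N / theta_bar (e.cm)

theta_bar_limit = dn_upper_bound_ecm / assumed_coefficient
print(f"theta_bar < {theta_bar_limit:.0e}")  # ~1e-10, as stated above
```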
https://en.wikipedia.org/wiki/Strong_CP_problem
In order theory , a subset A of a partially ordered set P is a strong downwards antichain if it is an antichain in which no two distinct elements have a common lower bound in P , that is, $\forall x, y \in A \; \big(x \neq y \implies \neg\exists z \in P \; (z \leq x \wedge z \leq y)\big)$. In the case where P is ordered by inclusion, and closed under subsets, but does not contain the empty set, this is simply a family of pairwise disjoint sets. A strong upwards antichain B is a subset of P in which no two distinct elements have a common upper bound in P . Authors will often omit the "upwards" and "downwards" qualifier and merely refer to strong antichains. Unfortunately, there is no common convention as to which version is called a strong antichain. In the context of forcing , authors will sometimes also omit the "strong" qualifier and merely refer to antichains. To resolve ambiguities in this case, the weaker type of antichain is called a weak antichain . If ( P , ≤) is a partial order and there exist distinct x , y ∈ P such that { x , y } is a strong antichain, then ( P , ≤) cannot be a lattice (or even a meet semilattice ), since by definition, every two elements in a lattice (or meet semilattice) must have a common lower bound. Thus lattices have only trivial strong antichains (i.e., strong antichains of cardinality at most 1).
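A brute-force check of the defining condition can make it concrete. The sketch below (not from the article) treats the order as an arbitrary predicate; the example poset (divisors of 30 under divisibility) is illustrative.

```python
from itertools import combinations

def is_strong_downward_antichain(A, P, leq):
    """True if no two distinct elements of A share a lower bound in P,
    where leq(a, b) means a <= b in the partial order."""
    for x, y in combinations(A, 2):
        if any(leq(z, x) and leq(z, y) for z in P):
            return False
    return True

# Divisors of 30 ordered by divisibility: {2, 3} is an ordinary (weak)
# antichain, but not a strong one, because 1 is a common lower bound.
P = [1, 2, 3, 5, 6, 10, 15, 30]
divides = lambda a, b: b % a == 0
print(is_strong_downward_antichain({2, 3}, P, divides))  # False
```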
https://en.wikipedia.org/wiki/Strong_antichain
In chemistry , a strong electrolyte is a solute that completely, or almost completely, ionizes or dissociates in a solution. These ions are good conductors of electric current in the solution. Originally, a "strong electrolyte" was defined as a chemical compound that, when in aqueous solution , is a good conductor of electricity. With a greater understanding of the properties of ions in solution, this definition was replaced by the present one. A concentrated solution of a strong electrolyte has a lower vapor pressure than that of pure water at the same temperature. Strong acids , strong bases and soluble ionic salts that are not weak acids or weak bases are strong electrolytes. For strong electrolytes, a single reaction arrow shows that the reaction occurs completely in one direction, in contrast to the dissociation of weak electrolytes, which both ionize and re-bond in significant quantities: [1] $\text{Strong electrolyte}_{(aq)} \longrightarrow \text{Cation}^{+}_{(aq)} + \text{Anion}^{-}_{(aq)}$. Strong electrolytes conduct electricity only in aqueous solution, in molten salts, or in ionic liquids. Strong electrolytes break apart into ions completely. The strength of an electrolyte does not affect the open-circuit voltage produced by a galvanic cell , but when electric current flows, stronger electrolytes result in smaller voltage losses through the electrolyte and therefore higher cell voltage.
https://en.wikipedia.org/wiki/Strong_electrolyte
Strong gravity is a non-mainstream theoretical approach to particle confinement in which a gravity-like interaction operates at the particle scale in addition to gravity at the cosmological scale. In the 1960s, it was taken up as an alternative to the then young QCD theory by several theorists, including Abdus Salam , who showed that the particle-level gravity approach can produce confinement and asymptotic freedom while, unlike QCD, not requiring a force behavior that differs from an inverse-square law . [1] Sivaram published a review of this bimetric theory approach. [2] Although this approach has not so far led to a recognizably successful unification of strong and other forces, the modern approach of string theory is characterized by a close association between gauge forces and spacetime geometry. In some cases, string theory recognizes an important duality between gravity-like and QCD-like theories, most notably the AdS/QCD correspondence. The concept of strong gravity follows from applying the gravitational potential energy to the heat term in the equation of the first law of thermodynamics ($E = Q + W$), where the total energy is the mass-energy and the work is the kinetic energy: $mc^2 = kT + E_K$ becomes $mc^2 = \frac{G m_s m}{r} + E_K$.
https://en.wikipedia.org/wiki/Strong_gravity
In nuclear physics and particle physics , the strong interaction , also called the strong force or strong nuclear force , is one of the four known fundamental interactions . It confines quarks into protons , neutrons , and other hadron particles, and also binds neutrons and protons to create atomic nuclei, where it is called the nuclear force . Most of the mass of a proton or neutron is the result of the strong interaction energy; the individual quarks provide only about 1% of the mass of a proton. At the range of 10⁻¹⁵ m (1 femtometer , slightly more than the radius of a nucleon ), the strong force is approximately 100 times as strong as electromagnetism , 10⁶ times as strong as the weak interaction , and 10³⁸ times as strong as gravitation . [1] In the context of atomic nuclei, the force binds protons and neutrons together to form a nucleus and is called the nuclear force (or residual strong force ). [2] Because the force is mediated by massive, short-lived mesons on this scale, the residual strong interaction obeys a distance-dependent behavior between nucleons that is quite different from when it is acting to bind quarks within hadrons. There are also differences in the binding energies of the nuclear force with regard to nuclear fusion versus nuclear fission . Nuclear fusion accounts for most energy production in the Sun and other stars . Nuclear fission allows for decay of radioactive elements and isotopes , although it is often mediated by the weak interaction. Artificially, the energy associated with the nuclear force is partially released in nuclear power and nuclear weapons , both in uranium- or plutonium -based fission weapons and in fusion weapons like the hydrogen bomb . [3] [4] Before 1971, physicists were uncertain as to how the atomic nucleus was bound together. It was known that the nucleus was composed of protons and neutrons and that protons possessed positive electric charge , while neutrons were electrically neutral. By the understanding of physics at that time, positive charges would repel one another and the positively charged protons should cause the nucleus to fly apart. However, this was never observed. New physics was needed to explain this phenomenon. A stronger attractive force was postulated to explain how the atomic nucleus was bound despite the protons' mutual electromagnetic repulsion . This hypothesized force was called the strong force , which was believed to be a fundamental force that acted on the protons and neutrons that make up the nucleus. In 1964, Murray Gell-Mann , and separately George Zweig , proposed that baryons , which include protons and neutrons, and mesons were composed of elementary particles. Zweig called the elementary particles "aces" while Gell-Mann called them "quarks"; the theory came to be called the quark model . [5] The strong attraction between nucleons was the side-effect of a more fundamental force that bound the quarks together into protons and neutrons. The theory of quantum chromodynamics explains that quarks carry what is called a color charge , although it has no relation to visible color. [6] Quarks with unlike color charge attract one another as a result of the strong interaction, and the particle that mediates this was called the gluon . The strong interaction is observable at two ranges, and mediated by different force carriers in each one.
On a scale less than about 0.8 fm (roughly the radius of a nucleon), the force is carried by gluons and holds quarks together to form protons, neutrons, and other hadrons. On a larger scale, up to about 3 fm, the force is carried by mesons and binds nucleons ( protons and neutrons ) together to form the nucleus of an atom . [2] In the former context, it is often known as the color force , and is so strong that if hadrons are struck by high-energy particles, they produce jets of massive particles instead of emitting their constituents (quarks and gluons) as freely moving particles. This property of the strong force is called color confinement . The word strong is used since the strong interaction is the "strongest" of the four fundamental forces. At a distance of 10⁻¹⁵ m, its strength is around 100 times that of the electromagnetic force , some 10⁶ times as great as that of the weak force, and about 10³⁸ times that of gravitation . The strong force is described by quantum chromodynamics (QCD), a part of the Standard Model of particle physics. Mathematically, QCD is a non-abelian gauge theory based on a local (gauge) symmetry group called SU(3) . The force carrier particle of the strong interaction is the gluon, a massless gauge boson . Gluons are thought to interact with quarks and other gluons by way of a type of charge called color charge . Color charge is analogous to electromagnetic charge, but it comes in three types (±red, ±green, and ±blue) rather than one, which results in different rules of behavior. These rules are described by quantum chromodynamics (QCD), the theory of quark–gluon interactions. Unlike the photon in electromagnetism, which is neutral, the gluon carries a color charge. Quarks and gluons are the only fundamental particles that carry non-vanishing color charge, and hence they participate in strong interactions only with each other. The strong force is the expression of the gluon interaction with other quark and gluon particles. All quarks and gluons in QCD interact with each other through the strong force. The strength of interaction is parameterized by the strong coupling constant . This strength is modified by the gauge color charge of the particle, a group-theoretical property. The strong force acts between quarks. Unlike all other forces (electromagnetic, weak, and gravitational), the strong force does not diminish in strength with increasing distance between pairs of quarks. After a limiting distance (about the size of a hadron ) has been reached, it remains at a strength of about 10,000 N , no matter how much farther the distance between the quarks. [7]: 164 As the separation between the quarks grows, the energy added to the pair creates new pairs of matching quarks between the original two; hence it is impossible to isolate quarks. The explanation is that the amount of work done against a force of 10,000 N is enough to create particle–antiparticle pairs within a very short distance. The energy added to the system by pulling two quarks apart would create a pair of new quarks that will pair up with the original ones. In QCD, this phenomenon is called color confinement ; as a result, only hadrons, not individual free quarks, can be observed. The failure of all experiments that have searched for free quarks is considered to be evidence of this phenomenon. The elementary quark and gluon particles involved in a high energy collision are not directly observable. The interaction produces jets of newly created hadrons that are observable.
Those hadrons are created, as a manifestation of mass–energy equivalence, when sufficient energy is deposited into a quark–quark bond, as when a quark in one proton is struck by a very fast quark of another impacting proton during a particle accelerator experiment. However, quark–gluon plasmas have been observed. [ 8 ] While color confinement implies that the strong force acts without distance-diminishment between pairs of quarks in compact collections of bound quarks (hadrons), at distances approaching or greater than the radius of a proton, a residual force (described below) remains. It manifests as a force between the "colorless" hadrons, and is known as the nuclear force or residual strong force (and historically as the strong nuclear force ). The nuclear force acts between hadrons, known as mesons and baryons . This "residual strong force", acting indirectly, transmits gluons that form part of the virtual π and ρ mesons , which, in turn, transmit the force between nucleons that holds the nucleus (beyond hydrogen-1 nucleus) together. [ 9 ] The residual strong force is thus a minor residuum of the strong force that binds quarks together into protons and neutrons. This same force is much weaker between neutrons and protons, because it is mostly neutralized within them, in the same way that electromagnetic forces between neutral atoms ( van der Waals forces ) are much weaker than the electromagnetic forces that hold electrons in association with the nucleus, forming the atoms. [ 7 ] Unlike the strong force, the residual strong force diminishes with distance, and does so rapidly. The decrease is approximately as a negative exponential power of distance, though there is no simple expression known for this; see Yukawa potential . The rapid decrease with distance of the attractive residual force and the less rapid decrease of the repulsive electromagnetic force acting between protons within a nucleus, causes the instability of larger atomic nuclei, such as all those with atomic numbers larger than 82 (the element lead). Although the nuclear force is weaker than the strong interaction itself, it is still highly energetic: transitions produce gamma rays . The mass of a nucleus is significantly different from the summed masses of the individual nucleons. This mass defect is due to the potential energy associated with the nuclear force. Differences between mass defects power nuclear fusion and nuclear fission . The so-called Grand Unified Theories (GUT) aim to describe the strong interaction and the electroweak interaction as aspects of a single force, similarly to how the electromagnetic and weak interactions were unified by the Glashow–Weinberg–Salam model into electroweak interaction . The strong interaction has a property called asymptotic freedom , wherein the strength of the strong force diminishes at higher energies (or temperatures). The theorized energy where its strength becomes equal to the electroweak interaction is the grand unification energy . However, no Grand Unified Theory has yet been successfully formulated to describe this process, and Grand Unification remains an unsolved problem in physics . If GUT is correct, after the Big Bang and during the electroweak epoch of the universe, the electroweak force separated from the strong force. Accordingly, a grand unification epoch is hypothesized to have existed prior to this.
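As a back-of-the-envelope check (not from the article) of the strength comparison quoted above, one can compare the Coulomb repulsion between two protons roughly 1 fm apart with the ~10,000 N quark-confining force cited earlier; the constants are standard and the comparison is order-of-magnitude only.

```python
# Coulomb repulsion between two protons ~1 femtometre apart, compared with the
# ~10,000 N strong-force "string tension" quoted above (order of magnitude only).
COULOMB_CONSTANT = 8.988e9      # N.m^2/C^2
ELEMENTARY_CHARGE = 1.602e-19   # C
SEPARATION = 1e-15              # m (1 fm)

coulomb_force = COULOMB_CONSTANT * ELEMENTARY_CHARGE**2 / SEPARATION**2
print(f"Coulomb force: {coulomb_force:.0f} N")        # roughly 230 N
print(f"strong/EM ratio: {1e4 / coulomb_force:.0f}")  # a few tens, i.e. within an
                                                      # order of magnitude of the
                                                      # ~100x figure quoted above
```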
https://en.wikipedia.org/wiki/Strong_interaction
The strong programme or strong sociology is a variety of the sociology of scientific knowledge (SSK) particularly associated with David Bloor , [1] Barry Barnes , Harry Collins , Donald A. MacKenzie , [2] and John Henry. The strong programme's influence on science and technology studies is credited as being unparalleled ( Latour 1999). The largely Edinburgh -based school of thought aims to illustrate how the existence of a scientific community , bound together by allegiance to a shared paradigm , is a prerequisite for normal scientific activity. The strong programme is a reaction against "weak" sociologies of science, which restricted the application of sociology to "failed" or "false" theories, such as phrenology . Failed theories would be explained by citing the researchers' biases , such as covert political or economic interests. Sociology would be only marginally relevant to successful theories, which succeeded because they had revealed a fact of nature. The strong programme proposed that both "true" and "false" scientific theories should be treated the same way. Both are caused by social factors or conditions, such as cultural context and self-interest . All human knowledge, as something that exists in the human cognition, must contain some social components in its formation process. As formulated by David Bloor, [3] the strong programme has four indispensable components: causality (it examines the conditions that bring about beliefs or states of knowledge), impartiality (it examines true and false, rational and irrational, successful and failed beliefs alike), symmetry (the same types of causes explain both true and false beliefs), and reflexivity (its patterns of explanation must be applicable to sociology itself). Because the strong programme originated at the Science Studies Unit, University of Edinburgh , it is sometimes termed the Edinburgh School . However, there is also a Bath School associated with Harry Collins that makes similar proposals. In contrast to the Edinburgh School, which emphasizes historical approaches, the Bath School emphasizes microsocial studies of laboratories and experiments. [4] The Bath School, however, does depart from the strong programme on some fundamental issues. In the social construction of technology (SCOT) approach developed by Collins' student Trevor Pinch , as well as by the Dutch sociologist Wiebe Bijker , the strong programme was extended to technology. There are SSK-influenced scholars working in science and technology studies programs throughout the world. [5] In order to study scientific knowledge from a sociological point of view, the strong programme has adhered to a form of radical relativism . In other words, it argues that – in the social study of institutionalised beliefs about " truth " – it would be unwise to use "truth" as an explanatory resource. To do so would (according to the relativist view) include the answer as part of the question (Barnes 1992), and propound a " whiggish " approach towards the study of history – a narrative of human history as an inevitable march towards truth and enlightenment. Physicists Alan Sokal and Jean Bricmont wrote a scathing 1997 critique, Fashionable Nonsense , of the strong programme and its reliance on social constructionism . [6] Sokal in particular has criticised radical relativism as part of the science wars , on the basis that such an understanding will lead inevitably towards solipsism and postmodernism . [citation needed] Markus Seidel attacks the main arguments – underdetermination and norm-circularity – provided by strong programme proponents for their relativism. [7] It has also been argued that the strong programme has incited climate denial. [8]
https://en.wikipedia.org/wiki/Strong_programme
Strong reciprocity is an area of research in behavioral economics , evolutionary psychology , and evolutionary anthropology on the predisposition to cooperate even when there is no apparent benefit in doing so. This topic is particularly interesting to those studying the evolution of cooperation , as these behaviors seem to be in contradiction with predictions made by many models of cooperation. [1] In response, current work on strong reciprocity is focused on developing evolutionary models which can account for this behavior. [2] [3] Critics of strong reciprocity argue that it is an artifact of lab experiments and does not reflect cooperative behavior in the real world. [4] A variety of studies from experimental economics provide evidence for strong reciprocity, either by demonstrating people's willingness to cooperate with others, or by demonstrating their willingness to take costs on themselves to punish those who do not. One experimental game used to measure levels of cooperation is the dictator game . In the standard form of the dictator game, there are two anonymous unrelated participants. One participant is assigned the role of the allocator and the other the role of the recipient. The allocator is assigned some amount of money, which they can divide in any way they choose. If a participant is trying to maximize their payoff, the rational solution (the Nash equilibrium ) is for the allocator to assign nothing to the recipient. In a 2011 meta-study of 616 dictator game studies, Engel found an average allocation of 28.3%, with 36% of participants giving nothing, 17% choosing the equal split, and 5.44% giving the recipient everything. [5] The trust game , an extension of the dictator game, provides additional evidence for strong reciprocity. The trust game extends the dictator game by multiplying the amount given by the allocator to the recipient by some value greater than one, and then allowing the recipient to give some amount back to the allocator. Once again, if participants are trying to maximize their payoff, the recipient should give nothing back to the allocator, and the allocator should therefore assign nothing to the recipient. A 2009 meta-analysis of 84 trust game studies revealed that the allocator gave an average of 51% and that the receiver returned an average of 37%. [6] A third experiment commonly used to demonstrate strong reciprocity preferences is the public goods game . In a public goods game, some number of participants are placed in a group. Each participant is given some amount of money. They are then allowed to contribute any of their allocation to a common pool. The common pool is then multiplied by some amount greater than one and evenly redistributed to each participant, regardless of how much they contributed. In this game, for anyone trying to maximize their payoff, the rational Nash equilibrium strategy is to not contribute anything. However, in a 2001 study, Fischbacher observed average contributions of 33.5%. [7] The second component of strong reciprocity is that people are willing to punish those who fail to cooperate, even when punishment is costly. There are two types of punishment: second-party and third-party punishment. In second-party punishment, the person who was hurt by the other party's failure to cooperate has the opportunity to punish the non-cooperator. In third-party punishment, an uninvolved third party has the opportunity to punish the non-cooperator.
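A minimal sketch (not from the article) of the linear public goods game described above shows why contributing nothing is the Nash equilibrium even though the group does best when everyone contributes; the endowment, multiplier, and group size are arbitrary illustrative values.

```python
def public_goods_payoffs(endowment, contributions, multiplier):
    """Linear public goods game: contributions are pooled, multiplied, and
    shared equally among all players regardless of who contributed."""
    n = len(contributions)
    share = multiplier * sum(contributions) / n
    return [endowment - c + share for c in contributions]

# Four players with 20 units each and a doubled pool (multiplier/n = 0.5 < 1):
# the free rider ends up ahead of the full contributors, which is why zero
# contribution is the self-interested equilibrium, yet experiments report
# average contributions of roughly a third of the endowment.
print(public_goods_payoffs(20, [20, 20, 20, 0], 2.0))  # [30.0, 30.0, 30.0, 50.0]
```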
A common game used to measure willingness to engage in second-party punishment is the ultimatum game . This game is very similar to the previously described dictator game, in which the allocator divides a sum of money between himself and a recipient. In the ultimatum game, the recipient has the choice to either accept the offer or reject it, in which case both players receive nothing. If recipients are payoff maximizers, it is the Nash equilibrium for them to accept any offer, and it is therefore in the allocator's interest to offer as close to zero as possible. [8] However, the experimental results show that the allocator usually offers over 40%, and is rejected by the recipient 16% of the time. Recipients are more likely to reject low offers than high offers. [9] Another example of second-party punishment is the public goods game as described earlier, but with a second stage added in which participants can pay to punish other participants. In this game, a payoff maximizer's rational strategy in Nash equilibrium is to not punish and to not contribute. However, experimental results show that participants are willing to pay to punish those who deviate from the average level of contribution – so much so that it becomes disadvantageous to give a lower amount, which allows for sustained cooperation. [10] [11] Modifications of the dictator game and prisoner's dilemma provide support for the willingness to engage in costly third-party punishment. The modified dictator game is exactly the same as the traditional dictator game but with a third party observing. After the allocator makes their decision, the third party has the opportunity to pay to punish the allocator. A payoff-maximizing third party would choose not to punish, and a similarly rational allocator would choose to keep the entire sum for himself. However, experimental results show that a majority of third parties punish allocations of less than 50%. [12] In the prisoner's dilemma with third-party punishment, two of the participants play a prisoner's dilemma, in which each must choose to either cooperate or defect. The game is set up such that regardless of what the other player does, it is rational for an income maximizer to always choose to defect, even though both players cooperating yields a higher payoff than both players defecting. A third player observes this exchange, then can pay to punish either player. An income-maximizing third party's rational response would be to not punish, and income-maximizing players would choose to defect. A 2004 study demonstrates that a near majority of participants (46%) are willing to pay to punish if one participant defects. If both parties defect, 21% are still willing to punish. [12] Other researchers have investigated to what extent these behavioral economic lab experiments on social preferences can be generalized to behavior in the field. In a 2011 study, Fehr and Leibbrandt examined the relationship between contributions in public goods games and participation in public goods in a community of shrimpers in Brazil. These shrimpers cut a hole in the bottom of their fishing buckets in order to allow immature shrimp to escape, thereby investing in the public good of the shared shrimp population. The size of the hole can be seen as the degree to which participants cooperate, as larger holes allow more shrimp to escape.
Controlling for a number of other possible influences, Fehr and Leibbrandt demonstrated a positive relationship between hole size and contributions in the public goods game experiment. [ 13 ] Rustagi and colleagues were able to demonstrate a similar effect with 49 groups of Bale Oromo herders in Ethiopia, who were participating in forest management. Results from public goods game experiments revealed that more than one third of the participating herders were conditional cooperators, meaning they cooperate with other cooperators. Rustagi et al. demonstrated that groups with larger proportions of conditional cooperators planted a larger number of trees. [ 14 ] In addition to experimental results, ethnography collected by anthropologists describes strong reciprocity observed in the field. Records of the Turkana , an acephalous African pastoral group, demonstrate strong reciprocity behavior. If someone acts cowardly in combat or commits some other free-riding behavior, the group confers and decides whether a violation has occurred. If it decides that a violation has occurred, corporal punishment is administered by the age cohort of the violator. Importantly, the age cohort taking the risks is not necessarily the one that was harmed, making this costly third party punishment. [ 15 ] The Walibri of Australia also exhibit costly third party punishment. The local community determines whether an act of homicide, adultery, theft, etc. was an offense. The community then appoints someone to carry out the punishment, and others to protect that person against retaliation. [ 16 ] Data from the Aranda foragers of the Central Desert in Australia suggest this punishment can be very costly, as it carries with it the risk of retaliation from the family members of the punished, which can be as severe as homicide. [ 17 ] A number of evolutionary models have been proposed in order to account for the existence of strong reciprocity. This section briefly touches on an important small subset of such models. The first model of strong reciprocity was proposed by Herbert Gintis in 2000; it contained a number of simplifying assumptions addressed in later models. [ 2 ] In 2004, Samuel Bowles and Gintis presented a follow-up model in which they incorporated cognitive, linguistic, and other capacities unique to humans in order to demonstrate how these might be harnessed to strengthen the power of social norms in large scale public goods games . [ 3 ] In a 2001 model, Joe Henrich and Robert Boyd also built on Gintis' model by incorporating conformist transmission of cultural information, demonstrating that this can also stabilize cooperative group norms. [ 18 ] Boyd, Gintis, Bowles, and Peter Richerson 's 2003 model of the evolution of third party punishment demonstrates how, even though the logic underlying altruistic giving and altruistic punishment may be the same, the evolutionary dynamics are not. This model is the first to employ cultural group selection in order to select for better performing groups, while using norms to stabilize behavior within groups. [ 19 ] Though punishment in many of the previously proposed models was both costly and uncoordinated, a 2010 model by Boyd, Gintis and Bowles presents a mechanism for coordinated costly punishment. In this quorum-sensing model, each agent chooses whether or not they are willing to engage in punishment. If a sufficient number of agents are willing to engage in punishment, then the group acts collectively to administer it. [ 20 ] 
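Extending the earlier sketch, the following illustrates how a second-stage punishment option of the kind described above can make free riding unprofitable. The punishment technology used here (each point costs the punisher 1 unit and removes 3 from the target) mirrors a common experimental design, but the specific numbers are illustrative assumptions, not parameters taken from any of the cited studies or models.

```python
def payoff_with_punishment(own_contribution, others_contributions,
                           points_received, points_assigned,
                           endowment=20, multiplier=1.6,
                           cost_per_point=1, fine_per_point=3):
    """Linear public goods payoff followed by a costly punishment stage."""
    group = [own_contribution] + list(others_contributions)
    share = multiplier * sum(group) / len(group)
    first_stage = endowment - own_contribution + share
    return first_stage - fine_per_point * points_received - cost_per_point * points_assigned

others = [20, 20, 20]
# Payoff to a player who defects while the other three contribute fully and
# each assigns 2 punishment points to the defector:
defect = payoff_with_punishment(0, others, points_received=6, points_assigned=0)
# Payoff to the same player if they instead contribute fully (and are not punished):
cooperate = payoff_with_punishment(20, others, points_received=0, points_assigned=0)
# Payoff to one of the punishing cooperators, who pays for the 2 points assigned:
punisher = payoff_with_punishment(20, [20, 20, 0], points_received=0, points_assigned=2)
print(defect, cooperate, punisher)   # 26.0 32.0 22.0 -- defection no longer pays
```

Note that punishing is itself costly here (the punishing cooperator ends up with 22 rather than 24), so a strict payoff maximizer would still never punish; the willingness to pay that cost is exactly the second component of strong reciprocity described above.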
An important aspect of this model is that strong reciprocity is self-regarding when rare in the population, but may be altruistic when common within a group. Significant cross-cultural variation has been observed in strong reciprocity behavior. In 2001, ultimatum game experiments were run in 15 small-scale societies across the world. The results of the experiments showed dramatic variation, with some groups' mean offers as low as 26% and others as high as 58%. The pattern of responder behavior was also notable, with participants in some cultures rejecting offers above 50%. Henrich and colleagues determined that the best predictors of offers were the size of the group (small groups giving less) and market integration (the more involved with markets, the more participants gave). [ 21 ] This study was then repeated with a different set of 15 small-scale societies and with better measures of market integration, finding a similar pattern of results. [ 22 ] These results are consistent with the culture-gene coevolution hypothesis. [ 22 ] A later paper by the same researchers identified religion as a third major contributor: people who participate in a world religion were more likely to exhibit strong reciprocity behavior. [ 23 ] A particularly prominent criticism of strong reciprocity theory is that it does not correspond to behavior found in the actual environment. In particular, the existence of third party punishment in the field is called into question. [ 4 ] Some have responded to this criticism by pointing out that if effective, third party punishment will rarely be used, and will therefore be difficult to observe. [ 24 ] [ 25 ] Others have suggested that there is evidence of costly third party punishment in the field. [ 26 ] Critics have responded to these claims by arguing that it is unfair for proponents to treat both the observation of costly third party punishment and its absence as evidence of its existence. They also question whether the ethnographic evidence presented is costly third party punishment, and call for additional analysis of the costs and benefits of the punishment. [ 27 ] Other research has shown that different types of strong reciprocity do not predict other types of strong reciprocity within individuals. [ 28 ] The existence of strong reciprocity implies that systems developed based purely on material self-interest may be missing important motivators in the marketplace. This section gives two examples of possible implications. One area of application is in the design of incentive schemes. For example, standard contract theory has difficulty dealing with the degree of incompleteness in contracts and the lack of use of performance measures, even when they are cheap to implement. Strong reciprocity and models based on it suggest that this can be explained by people's willingness to act fairly, even when it is against their material self-interest. Experimental results suggest that this is indeed the case, with participants preferring less complete contracts, and workers willing to contribute a fair amount beyond what would be in their own self-interest. [ 29 ] Another application of strong reciprocity is in allocating property rights and ownership structure. Joint ownership of property can be very similar to the public goods game , where owners can independently contribute to the common pool, which then returns on the investment and is evenly distributed to all parties. 
This ownership structure is subject to the tragedy of the commons : if all parties are purely self-interested, no one will invest. Alternatively, property could be allocated in an owner-employee relationship, in which an employee is hired by the owner and paid a specific wage for a specific level of investment. Experimental studies show that participants generally prefer joint ownership, and do better under joint ownership than under the owner-employee arrangement. [ 30 ]
https://en.wikipedia.org/wiki/Strong_reciprocity
In mathematics , a strong topology is a topology which is stronger than some other "default" topology. This term is used to describe different topologies depending on the context, and it may refer to several distinct notions. A topology τ is stronger than a topology σ (i.e., is a finer topology ) if τ contains all the open sets of σ. In algebraic geometry , it usually means the topology of an algebraic variety as a complex manifold or as a subspace of complex projective space , as opposed to the Zariski topology (which is rarely even a Hausdorff space ).
https://en.wikipedia.org/wiki/Strong_topology
A strongback is a beam or girder which acts as a secondary support member to an existing structure. A strongback in a staircase is usually ordinary two-by dimensional lumber attached to the staircase stringers to stiffen the assembly. In shipbuilding , a strongback, also known as a waler, is oriented lengthwise along a ship and braces across several frames to keep the frames square and plumb. [ 1 ] In formwork , strongbacks (typically vertical) reinforce the typically horizontal walers to provide additional support against hydrostatic pressure during concrete pours. [ 2 ] Some rockets, such as the Antares , the Falcon 9 and the Falcon Heavy , use a strongback to restrain the rocket prior to launch. [ 3 ] This structure tilts several degrees away from the rocket to clear the launch, either at the moment of launch or a few minutes before. This architecture -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Strongback_(girder)
Heisenberg's uncertainty relation is one of the fundamental results in quantum mechanics. [ 1 ] Later Robertson proved the uncertainty relation for two general non-commuting observables , [ 2 ] which was strengthened by Schrödinger . [ 3 ] However, the conventional uncertainty relation like the Robertson-Schrödinger relation cannot give a non-trivial bound for the product of variances of two incompatible observables because the lower bound in the uncertainty inequalities can be null and hence trivial even for observables that are incompatible on the state of the system. The Heisenberg–Robertson–Schrödinger uncertainty relation was proved at the dawn of quantum formalism and is ever-present in the teaching and research on quantum mechanics. After about 85 years of existence of the uncertainty relation this problem was solved recently by Lorenzo Maccone and Arun K. Pati . The standard uncertainty relations are expressed in terms of the product of variances of the measurement results of the observables A {\displaystyle A} and B {\displaystyle B} , and the product can be null even when one of the two variances is different from zero. However, the stronger uncertainty relations due to Maccone and Pati provide different uncertainty relations, based on the sum of variances that are guaranteed to be nontrivial whenever the observables are incompatible on the state of the quantum system. [ 4 ] (Earlier works on uncertainty relations formulated as the sum of variances include, e.g., He et al., [ 5 ] and Ref. [ 6 ] due to Huang.) The Heisenberg–Robertson or Schrödinger uncertainty relations do not fully capture the incompatibility of observables in a given quantum state. The stronger uncertainty relations give non-trivial bounds on the sum of the variances for two incompatible observables. For two non-commuting observables A {\displaystyle A} and B {\displaystyle B} the first stronger uncertainty relation is given by where Δ A 2 = ⟨ Ψ | A 2 | Ψ ⟩ − ⟨ Ψ | A | Ψ ⟩ 2 {\displaystyle \Delta A^{2}=\langle \Psi |A^{2}|\Psi \rangle -\langle \Psi |A|\Psi \rangle ^{2}} , Δ B 2 = ⟨ Ψ | B 2 | Ψ ⟩ − ⟨ Ψ | B | Ψ ⟩ 2 {\displaystyle \Delta B^{2}=\langle \Psi |B^{2}|\Psi \rangle -\langle \Psi |B|\Psi \rangle ^{2}} , | Ψ ¯ ⟩ {\displaystyle |{\bar {\Psi }}\rangle } is a vector that is orthogonal to the state of the system, i.e., ⟨ Ψ | Ψ ¯ ⟩ = 0 {\displaystyle \langle \Psi |{\bar {\Psi }}\rangle =0} and one should choose the sign of ± i ⟨ Ψ | [ A , B ] | Ψ ⟩ {\displaystyle \pm i\langle \Psi |[A,B]|\Psi \rangle } so that this is a positive number. The other non-trivial stronger uncertainty relation is given by where | Ψ ¯ A + B ⟩ {\displaystyle |{\bar {\Psi }}_{A+B}\rangle } is a unit vector orthogonal to | Ψ ⟩ {\displaystyle |\Psi \rangle } . The form of | Ψ ¯ A + B ⟩ {\displaystyle |{\bar {\Psi }}_{A+B}\rangle } implies that the right-hand side of the new uncertainty relation is nonzero unless | Ψ ⟩ {\displaystyle |\Psi \rangle } is an eigenstate of ( A + B ) {\displaystyle (A+B)} . One can prove [ clarification needed ] an improved version of the Heisenberg–Robertson uncertainty relation which reads as The Heisenberg–Robertson uncertainty relation follows from the above uncertainty relation. [ clarification needed ] In quantum theory, one should distinguish between the uncertainty relation and the uncertainty principle. The former refers solely to the preparation of the system which induces a spread in the measurement outcomes, and does not refer to the disturbance induced by the measurement. 
The uncertainty principle captures the measurement disturbance by the apparatus and the impossibility of joint measurements of incompatible observables. The Maccone–Pati uncertainty relations refer to preparation uncertainty relations. These relations set strong limitations for the nonexistence of common eigenstates for incompatible observables. The Maccone–Pati uncertainty relations have been experimentally tested for qutrit systems. [ 7 ] The new uncertainty relations not only capture the incompatibility of observables but also of quantities that are physically measurable (as variances can be measured in the experiment).
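As a concrete check, the first Maccone–Pati relation can be written (following Maccone and Pati) as ΔA² + ΔB² ≥ ±i⟨Ψ|[A,B]|Ψ⟩ + |⟨Ψ|(A ± iB)|Ψ̄⟩|², with the sign chosen as described above and |Ψ̄⟩ any unit vector orthogonal to |Ψ⟩. The minimal numerical sketch below evaluates both sides for a single qubit, taking A and B to be Pauli matrices; the particular state and the orthogonal vector are arbitrary choices made for the example. For a qubit, where the orthogonal state is unique, the bound is in fact saturated, so the two sides should agree to numerical precision.

```python
import numpy as np

# Two incompatible observables: Pauli x and Pauli y
A = np.array([[0, 1], [1, 0]], dtype=complex)
B = np.array([[0, -1j], [1j, 0]], dtype=complex)

# An arbitrary normalized qubit state and the (unique) state orthogonal to it
psi = np.array([np.cos(0.3), np.exp(0.7j) * np.sin(0.3)], dtype=complex)
psi_bar = np.array([-np.conj(psi[1]), np.conj(psi[0])], dtype=complex)

def expval(op, state):
    return (state.conj() @ op @ state).real

def variance(op, state):
    return expval(op @ op, state) - expval(op, state) ** 2

lhs = variance(A, psi) + variance(B, psi)   # sum of variances

comm = A @ B - B @ A
for sign in (+1, -1):
    commutator_term = (sign * 1j * (psi.conj() @ comm @ psi)).real
    overlap_term = abs(psi.conj() @ (A + sign * 1j * B) @ psi_bar) ** 2
    rhs = commutator_term + overlap_term
    print(f"sign {sign:+d}: LHS = {lhs:.6f}, RHS = {rhs:.6f}, LHS >= RHS: {lhs >= rhs - 1e-12}")
```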
https://en.wikipedia.org/wiki/Stronger_uncertainty_relations
Strongly correlated materials are a wide class of compounds that include insulators and electronic materials, and show unusual (often technologically useful) electronic and magnetic properties , such as metal-insulator transitions , heavy fermion behavior, half-metallicity , and spin-charge separation . The essential feature that defines these materials is that the behavior of their electrons or spinons cannot be described effectively in terms of non-interacting entities. [ 1 ] Theoretical models of the electronic ( fermionic ) structure of strongly correlated materials must include electronic ( fermionic ) correlation to be accurate. As of recently, the label quantum materials is also used to refer to strongly correlated materials, among others. Many transition metal oxides belong to this class [ 2 ] which may be subdivided according to their behavior, e.g. high-T c , spintronic materials , multiferroics , Mott insulators , spin Peierls materials, heavy fermion materials, quasi-low-dimensional materials, etc. The single most intensively studied effect is probably high-temperature superconductivity in doped cuprates , e.g. La 2−x Sr x CuO 4 . Other ordering or magnetic phenomena and temperature-induced phase transitions in many transition-metal oxides are also gathered under the term "strongly correlated materials." Typically, strongly correlated materials have incompletely filled d - or f - electron shells with narrow energy bands. One can no longer consider any electron in the material as being in a " sea " of the averaged motion of the others (also known as mean field theory ). Each single electron has a complex influence on its neighbors. The term strong correlation refers to behavior of electrons in solids that is not well-described (often not even in a qualitatively correct manner) by simple one-electron theories such as the local-density approximation (LDA) of density-functional theory or Hartree–Fock theory . For instance, the seemingly simple material NiO has a partially filled 3 d band (the Ni atom has 8 of 10 possible 3 d -electrons) and therefore would be expected to be a good conductor. However, strong Coulomb repulsion (a correlation effect) between d electrons makes NiO instead a wide- band gap insulator. Thus, strongly correlated materials have electronic structures that are neither simply free-electron-like nor completely ionic, but a mixture of both. Extensions to the LDA (LDA+U, GGA, SIC, GW , etc.) as well as simplified models Hamiltonians (e.g. Hubbard-like models ) have been proposed and developed in order to describe phenomena that are due to strong electron correlation. Among them, dynamical mean field theory (DMFT) successfully captures the main features of correlated materials. Schemes that use both LDA and DMFT explain many experimental results in the field of correlated electrons. Experimentally, optical spectroscopy, high-energy electron spectroscopies , resonant photoemission , and more recently resonant inelastic (hard and soft) X-ray scattering ( RIXS ) and neutron spectroscopy have been used to study the electronic and magnetic structure of strongly correlated materials. Spectral signatures seen by these techniques that are not explained by one-electron density of states are often related to strong correlation effects. The experimentally obtained spectra can be compared to predictions of certain models or may be used to establish constraints on the parameter sets. 
One has for instance established a classification scheme of transition metal oxides within the so-called Zaanen–Sawatzky–Allen diagram . [ 3 ] The manipulation and use of correlated phenomena has applications like superconducting magnets and in magnetic storage (CMR) [ citation needed ] technologies. Other phenomena like the metal-insulator transition in VO 2 have been explored as a means to make smart windows to reduce the heating/cooling requirements of a room. [ 4 ] Furthermore, metal-insulator transitions in Mott insulating materials like LaTiO 3 can be tuned through adjustments in band filling to potentially be used to make transistors that would use conventional field effect transistor configurations to take advantage of the material's sharp change in conductivity. [ 5 ] Transistors using metal-insulator transitions in Mott insulators are often referred to as Mott transistors, and have been successfully fabricated using VO 2 before, but they have required the larger electric fields induced by ionic liquids as a gate material to operate. [ 6 ]
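As a toy illustration of the correlation physics described above, the sketch below exactly diagonalizes the smallest Hubbard-like model, two sites at half filling, and tracks how the weight of doubly occupied configurations in the ground state is suppressed as the on-site repulsion U grows relative to the hopping t. The two-site model, basis choice, and parameter values are illustrative assumptions; describing a real material such as NiO requires the multi-band LDA+U or DMFT machinery mentioned in the text.

```python
import numpy as np

def two_site_hubbard(t, U):
    """Half-filled two-site Hubbard model in the Sz = 0 sector.

    Basis: |up,down on site 1>, |up,down on site 2>, |up on 1, down on 2>, |down on 1, up on 2>.
    Returns the ground-state energy and the ground-state weight of doubly
    occupied configurations.
    """
    H = np.array([
        [U,   0, -t, -t],
        [0,   U, -t, -t],
        [-t, -t,  0,  0],
        [-t, -t,  0,  0],
    ], dtype=float)
    energies, vectors = np.linalg.eigh(H)
    ground = vectors[:, 0]
    double_occupancy = ground[0] ** 2 + ground[1] ** 2
    return energies[0], double_occupancy

for U in (0.0, 2.0, 8.0, 32.0):
    e0, docc = two_site_hubbard(t=1.0, U=U)
    print(f"U/t = {U:5.1f}: E0 = {e0:7.3f}, double-occupancy weight = {docc:.3f}")

# As U/t grows, the weight of doubly occupied configurations falls toward zero
# and the ground state approaches one electron per site, a caricature of the
# correlation-driven insulating behavior described above for NiO.
```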
https://en.wikipedia.org/wiki/Strongly_correlated_material
Strong measurability has a number of different meanings, some of which are explained below. For a function f with values in a Banach space (or Fréchet space ), strong measurability usually means Bochner measurability . However, if the values of f lie in the space L ( X , Y ) {\displaystyle {\mathcal {L}}(X,Y)} of continuous linear operators from X to Y , then often strong measurability means that the operator f(x) is Bochner measurable for each fixed x in the domain of f , whereas the Bochner measurability of f is called uniform measurability (cf. " uniformly continuous " vs. " strongly continuous "). A family of bounded linear operators combined with the direct integral is strongly measurable, when each of the individual operators is strongly measurable. A semigroup of linear operators can be strongly measurable yet not strongly continuous. [ 1 ] It is uniformly measurable if and only if it is uniformly continuous, i.e., if and only if its generator is bounded. This algebra -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Strongly_measurable_function
In model theory —a branch of mathematical logic —a minimal structure is an infinite one-sorted structure such that every subset of its domain that is definable with parameters is either finite or cofinite . A strongly minimal theory is a complete theory all models of which are minimal. A strongly minimal structure is a structure whose theory is strongly minimal. Thus a structure is minimal only if the parametrically definable subsets of its domain cannot be avoided, because they are already parametrically definable in the pure language of equality. Strong minimality was one of the early notions in the new field of classification theory and stability theory that was opened up by Morley's theorem on totally categorical structures. The nontrivial standard examples of strongly minimal theories are the one-sorted theories of infinite-dimensional vector spaces , and the theories ACF p of algebraically closed fields of characteristic p . As the example ACF p shows, the parametrically definable subsets of the square of the domain of a minimal structure can be relatively complicated (" curves "). More generally, a subset of a structure that is defined as the set of realizations of a formula φ ( x ) is called a minimal set if every parametrically definable subset of it is either finite or cofinite. It is called a strongly minimal set if this is true even in all elementary extensions . A strongly minimal set, equipped with the closure operator given by algebraic closure in the model-theoretic sense, is an infinite matroid, or pregeometry . A model of a strongly minimal theory is determined up to isomorphism by its dimension as a matroid. Totally categorical theories are controlled by a strongly minimal set; this fact explains (and is used in the proof of) Morley's theorem. Boris Zilber conjectured that the only pregeometries that can arise from strongly minimal sets are those that arise in vector spaces, projective spaces, or algebraically closed fields. This conjecture was refuted by Ehud Hrushovski , who developed a method known as "Hrushovski construction" to build new strongly minimal structures from finite structures.
https://en.wikipedia.org/wiki/Strongly_minimal_theory
In graph theory , a strongly regular graph ( SRG ) is a regular graph G = ( V , E ) with v vertices and degree k such that for some given integers λ , μ ≥ 0 {\displaystyle \lambda ,\mu \geq 0} Such a strongly regular graph is denoted by srg( v , k , λ, μ) . Its complement graph is also strongly regular: it is an srg( v , v − k − 1, v − 2 − 2 k + μ, v − 2 k + λ) . A strongly regular graph is a distance-regular graph with diameter 2 whenever μ is non-zero. It is a locally linear graph whenever λ = 1 . A strongly regular graph is denoted as an srg( v , k , λ, μ) in the literature. By convention, graphs which satisfy the definition trivially are excluded from detailed studies and lists of strongly regular graphs. These include the disjoint union of one or more equal-sized complete graphs , [ 1 ] [ 2 ] and their complements , the complete multipartite graphs with equal-sized independent sets. Andries Brouwer and Hendrik van Maldeghem (see #References ) use an alternate but fully equivalent definition of a strongly regular graph based on spectral graph theory : a strongly regular graph is a finite regular graph that has exactly three eigenvalues, only one of which is equal to the degree k , of multiplicity 1. This automatically rules out fully connected graphs (which have only two distinct eigenvalues, not three) and disconnected graphs (for which the multiplicity of the degree k is equal to the number of different connected components, which would therefore exceed one). Much of the literature, including Brouwer, refers to the larger eigenvalue as r (with multiplicity f ) and the smaller one as s (with multiplicity g ). Strongly regular graphs were introduced by R.C. Bose in 1963. [ 3 ] They built upon earlier work in the 1950s in the then-new field of spectral graph theory . A strongly regular graph is called primitive if both the graph and its complement are connected. All the above graphs are primitive, as otherwise μ = 0 or λ = k . Conway's 99-graph problem asks for the construction of an srg(99, 14, 1, 2). It is unknown whether a graph with these parameters exists, and John Horton Conway offered a $1000 prize for the solution to this problem. [ 5 ] The strongly regular graphs with λ = 0 are triangle free . Apart from the complete graphs on fewer than 3 vertices and all complete bipartite graphs, the seven listed earlier (pentagon, Petersen, Clebsch, Hoffman-Singleton, Gewirtz, Mesner-M22, and Higman-Sims) are the only known ones. Every strongly regular graph with μ = 1 {\displaystyle \mu =1} is a geodetic graph , a graph in which every two vertices have a unique unweighted shortest path . [ 6 ] The only known strongly regular graphs with μ = 1 {\displaystyle \mu =1} are those where λ {\displaystyle \lambda } is 0, therefore triangle-free as well. These are called the Moore graphs and are explored below in more detail . Other combinations of parameters such as (400, 21, 2, 1) have not yet been ruled out. Despite ongoing research on the properties that a strongly regular graph with μ = 1 {\displaystyle \mu =1} would have, [ 7 ] [ 8 ] it is not known whether any more exist or even whether their number is finite. [ 6 ] Only the elementary result is known, that λ {\displaystyle \lambda } cannot be 1 for such a graph. 
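The defining condition referred to above is that every two adjacent vertices have exactly λ common neighbours and every two non-adjacent vertices have exactly μ common neighbours. As a concrete check of that definition, the short sketch below verifies that the Petersen graph mentioned above is an srg(10, 3, 0, 1), using the standard Kneser-graph construction (vertices are the 2-element subsets of a 5-element set, adjacent exactly when the subsets are disjoint); the construction and variable names are just one convenient choice for the example.

```python
from itertools import combinations

# Petersen graph as the Kneser graph K(5, 2)
vertices = [frozenset(pair) for pair in combinations(range(5), 2)]
adjacent = {(u, v) for u in vertices for v in vertices if u != v and not (u & v)}
neighbours = {v: {u for u in vertices if (v, u) in adjacent} for v in vertices}

degrees = {len(neighbours[v]) for v in vertices}
lambdas = {len(neighbours[u] & neighbours[v]) for (u, v) in adjacent}
mus = {len(neighbours[u] & neighbours[v])
       for u in vertices for v in vertices
       if u != v and (u, v) not in adjacent}

print(degrees, lambdas, mus)   # {3} {0} {1}  ->  srg(10, 3, 0, 1)
```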
The four parameters in an srg( v , k , λ, μ) are not independent: In order for an srg( v , k , λ, μ) to exist, the parameters must obey the following relation: The above relation is derived through a counting argument as follows: This relation is a necessary condition for the existence of a strongly regular graph, but not a sufficient condition . For instance, the quadruple (21,10,4,5) obeys this relation, but there does not exist a strongly regular graph with these parameters. [ 9 ] Let I denote the identity matrix and let J denote the matrix of ones , both matrices of order v . The adjacency matrix A of a strongly regular graph satisfies two equations. First: which is a restatement of the regularity requirement. This shows that k is an eigenvalue of the adjacency matrix with the all-ones eigenvector. Second: which expresses strong regularity. The ij -th element of the left hand side gives the number of two-step paths from i to j . The first term of the right hand side gives the number of two-step paths from i back to i , namely k edges out and back in. The second term gives the number of two-step paths when i and j are directly connected. The third term gives the corresponding value when i and j are not connected. Since the three cases are mutually exclusive and collectively exhaustive , the simple additive equality follows. Conversely, a graph whose adjacency matrix satisfies both of the above conditions and which is not a complete or null graph is a strongly regular graph. [ 10 ] Since the adjacency matrix A is symmetric, it follows that its eigenvectors are orthogonal . We already observed one eigenvector above which is made of all ones, corresponding to the eigenvalue k . Therefore the other eigenvectors x must all satisfy J x = 0 {\displaystyle Jx=0} where J is the all-ones matrix as before. Take the previously established equation: and multiply the above equation by eigenvector x : Call the corresponding eigenvalue p (not to be confused with λ {\displaystyle \lambda } the graph parameter) and substitute A x = p x {\displaystyle Ax=px} , J x = 0 {\displaystyle Jx=0} and I x = x {\displaystyle Ix=x} : Eliminate x and rearrange to get a quadratic: This gives the two additional eigenvalues 1 2 [ ( λ − μ ) ± ( λ − μ ) 2 + 4 ( k − μ ) ] {\displaystyle {\frac {1}{2}}\left[(\lambda -\mu )\pm {\sqrt {(\lambda -\mu )^{2}+4(k-\mu )}}\,\right]} . There are thus exactly three eigenvalues for a strongly regular matrix. Conversely, a connected regular graph with only three eigenvalues is strongly regular. [ 11 ] Following the terminology in much of the strongly regular graph literature, the larger eigenvalue is called r with multiplicity f and the smaller one is called s with multiplicity g . Since the sum of all the eigenvalues is the trace of the adjacency matrix , which is zero in this case, the respective multiplicities f and g can be calculated: As the multiplicities must be integers, their expressions provide further constraints on the values of v , k , μ , and λ . Strongly regular graphs for which 2 k + ( v − 1 ) ( λ − μ ) ≠ 0 {\displaystyle 2k+(v-1)(\lambda -\mu )\neq 0} have integer eigenvalues with unequal multiplicities. Strongly regular graphs for which 2 k + ( v − 1 ) ( λ − μ ) = 0 {\displaystyle 2k+(v-1)(\lambda -\mu )=0} are called conference graphs because of their connection with symmetric conference matrices . 
Their parameters reduce to Their eigenvalues are r = − 1 + v 2 {\displaystyle r={\frac {-1+{\sqrt {v}}}{2}}} and s = − 1 − v 2 {\displaystyle s={\frac {-1-{\sqrt {v}}}{2}}} , both of whose multiplicities are equal to v − 1 2 {\displaystyle {\frac {v-1}{2}}} . Further, in this case, v must equal the sum of two squares, related to the Bruck–Ryser–Chowla theorem . Further properties of the eigenvalues and their multiplicities are: [ 12 ] If the above condition(s) are violated for any set of parameters, then there exists no strongly regular graph for those parameters. Brouwer has compiled such lists of existence or non-existence here with reasons for non-existence if any. As noted above, the multiplicities of the eigenvalues are given by which must be integers. In 1960, Alan Hoffman and Robert Singleton examined those expressions when applied on Moore graphs that have λ = 0 and μ = 1. Such graphs are free of triangles (otherwise λ would exceed zero) and quadrilaterals (otherwise μ would exceed 1), hence they have a girth (smallest cycle length) of 5. Substituting the values of λ and μ in the equation ( v − k − 1 ) μ = k ( k − λ − 1 ) {\displaystyle (v-k-1)\mu =k(k-\lambda -1)} , it can be seen that v = k 2 + 1 {\displaystyle v=k^{2}+1} , and the eigenvalue multiplicities reduce to For the multiplicities to be integers, the quantity 2 k − k 2 4 k − 3 {\displaystyle {\frac {2k-k^{2}}{\sqrt {4k-3}}}} must be rational, therefore either the numerator 2 k − k 2 {\displaystyle 2k-k^{2}} is zero or the denominator 4 k − 3 {\displaystyle {\sqrt {4k-3}}} is an integer. If the numerator 2 k − k 2 {\displaystyle 2k-k^{2}} is zero, the possibilities are: If the denominator 4 k − 3 {\displaystyle {\sqrt {4k-3}}} is an integer t , then 4 k − 3 {\displaystyle 4k-3} is a perfect square t 2 {\displaystyle t^{2}} , so k = t 2 + 3 4 {\displaystyle k={\frac {t^{2}+3}{4}}} . Substituting: Since both sides are integers, 15 t {\displaystyle {\frac {15}{t}}} must be an integer, therefore t is a factor of 15, namely t ∈ { ± 1 , ± 3 , ± 5 , ± 15 } {\displaystyle t\in \{\pm 1,\pm 3,\pm 5,\pm 15\}} , therefore k ∈ { 1 , 3 , 7 , 57 } {\displaystyle k\in \{1,3,7,57\}} . In turn: The Hoffman-Singleton theorem states that there are no strongly regular girth-5 Moore graphs except the ones listed above.
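A short sketch of the two elementary feasibility checks discussed above: the counting relation (v − k − 1)μ = k(k − λ − 1), and integrality of the eigenvalue multiplicities f, g = ½[(v − 1) ∓ (2k + (v − 1)(λ − μ)) / √((λ − μ)² + 4(k − μ))]. Applied to the Moore-graph family (λ = 0, μ = 1, v = k² + 1), the integrality condition reproduces the admissible degrees k = 2, 3, 7 and 57 derived above. The code is an illustrative check of necessary conditions only, not an existence test.

```python
from math import isqrt

def multiplicities(v, k, lam, mu):
    """Eigenvalue multiplicities (f, g) of a putative srg(v, k, lam, mu), or
    None if they fail to be nonnegative integers."""
    disc = (lam - mu) ** 2 + 4 * (k - mu)
    num = 2 * k + (v - 1) * (lam - mu)
    if num == 0:
        doubled = [v - 1, v - 1]          # conference-graph case: f = g = (v - 1) / 2
    else:
        root = isqrt(disc)
        if root * root != disc or num % root != 0:
            return None
        doubled = [(v - 1) - num // root, (v - 1) + num // root]
    if any(d < 0 or d % 2 for d in doubled):
        return None
    return doubled[0] // 2, doubled[1] // 2

def feasible(v, k, lam, mu):
    """Necessary (not sufficient) conditions: counting relation plus integral multiplicities."""
    return (v - k - 1) * mu == k * (k - lam - 1) and multiplicities(v, k, lam, mu) is not None

# (21, 10, 4, 5) passes both checks, yet no such graph exists: the conditions
# are necessary but not sufficient.  (As a conference-graph parameter set it is
# ruled out because v = 21 is not a sum of two squares, per the text above.)
print(feasible(21, 10, 4, 5))                                      # True

# Moore graphs of girth 5 are srg(k**2 + 1, k, 0, 1); integrality of the
# multiplicities reproduces the admissible degrees discussed above.
print([k for k in range(2, 100) if feasible(k * k + 1, k, 0, 1)])  # [2, 3, 7, 57]
```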
https://en.wikipedia.org/wiki/Strongly_regular_graph
The strontian process is an obsolete chemical method to recover sugar from molasses . Its use in Europe peaked in the middle of the 19th century. The name strontian comes from the Scottish village Strontian , where the source mineral strontianite ( strontium carbonate ) was first found. Strontium carbonate is a recycled coreactant in this process. There are two types of strontium saccharate : one formed at low temperature, the strontium monosaccharate, and the second at high temperature, the strontium disaccharate. [ 3 ] Molasses is the first stage output of several different sugar production processes, and contains more than 50% sugar. The French chemists Hippolyte Leplay and Augustin-Pierre Dubrunfaut developed a process for extracting sugar from molasses by reacting it with barium oxide to give the insoluble barium saccharates. [ 4 ] In 1849, they expanded their patent to include strontium salts. Apparently, the sole purpose of this patent application was to legally secure the so-called baryte process , since the strontian process of Leplay and Dubrunfaut probably would not have worked as described. [ 5 ] Only later, through the work of Carl Scheibler (patents dated 1881, 1882, and 1883), did it become possible to apply the strontian process on an industrial basis. [ 6 ] [ 7 ] According to Scheibler, the procedure must be carried out at boiling temperatures. The Scheibler procedure came into use in the Dessauer Sugar Refinery (in Dessau ) through Emil Fleischer . In the Münsterland region, its arrival set off a ″gold fever″ in strontianite mining. [ 8 ] One of the biggest mines, at Drensteinfurt , was named after Dr. Reichardt, the director of the Dessauer Sugar Refinery. The strontian process was also used at the Sugar Factory Rositz (in Rositz ). [ citation needed ] Yet by 1883, the demand for strontianite had begun to shrink. First, it was displaced by another strontium mineral ( celestine ) that could be imported more cheaply from England. Second, sugar prices fell so far that production from molasses was no longer worthwhile. [ citation needed ]
https://en.wikipedia.org/wiki/Strontian_process
Strontium-90 ( 90 Sr ) is a radioactive isotope of strontium produced by nuclear fission , with a half-life of 28.79 years. It undergoes β − decay into yttrium-90 , with a decay energy of 0.546 MeV. [ 2 ] Strontium-90 has applications in medicine and industry and is an isotope of concern in fallout from nuclear weapons , nuclear weapons testing , and nuclear accidents . [ 3 ] Naturally occurring strontium is nonradioactive and nontoxic at levels normally found in the environment, but 90 Sr is a radiation hazard. [ 4 ] 90 Sr undergoes β − decay with a half-life of 28.79 years and a decay energy of 0.546 MeV distributed to an electron , an antineutrino , and the yttrium isotope 90 Y , which in turn undergoes β − decay with a half-life of 64 hours and a decay energy of 2.28 MeV distributed to an electron, an antineutrino, and 90 Zr (zirconium), which is stable. [ 5 ] Note that 90 Sr/Y is almost a pure beta particle source; the gamma photon emission from the decay of 90 Y is so infrequent that it can normally be ignored. 90 Sr has a specific activity of 5.21 TBq /g. [ 6 ] 90 Sr is a product of nuclear fission . It is present in significant amount in spent nuclear fuel , in radioactive waste from nuclear reactors and in nuclear fallout from nuclear tests . For thermal neutron fission as in today's nuclear power plants, the fission product yield from uranium-235 is 5.7%, from uranium-233 6.6%, but from plutonium-239 only 2.0%. [ 7 ] Strontium-90 is classified as high-level waste. Its 29-year half-life means that it can take hundreds of years to decay to negligible levels. Exposure from contaminated water and food may increase the risk of leukemia and bone cancer . [ 8 ] Reportedly, thousands of capsules of radioactive strontium containing millions of curies are stored at Hanford Site's Waste Encapsulation and Storage Facility. [ 9 ] Algae has shown selectivity for strontium in studies, where most plants used in bioremediation have not shown selectivity between calcium and strontium, often becoming saturated with calcium, which is greater in quantity and also present in nuclear waste. [ 8 ] Researchers have looked at the bioaccumulation of strontium by Scenedesmus spinosus ( algae ) in simulated wastewater. The study claims a highly selective biosorption capacity for strontium of S. spinosus , suggesting that it may be appropriate for use of nuclear wastewater. [ 10 ] A study of the pond alga Closterium moniliferum using stable strontium found that varying the ratio of barium to strontium in water improved strontium selectivity. [ 8 ] Strontium-90 is a " bone seeker " that exhibits biochemical behavior similar to calcium , the next lighter group 2 element . [ 4 ] [ 11 ] After entering the organism, most often by ingestion with contaminated food or water, about 70–80% of the dose gets excreted. [ 3 ] Virtually all remaining strontium-90 is deposited in bones and bone marrow , with the remaining 1% remaining in blood and soft tissues. [ 3 ] Its presence in bones can cause bone cancer , cancer of nearby tissues, and leukemia . [ 12 ] Exposure to 90 Sr can be tested by a bioassay , most commonly by urinalysis . [ 4 ] The biological half-life of strontium-90 in humans has variously been reported as from 14 to 600 days, [ 13 ] [ 14 ] 1000 days, [ 15 ] 18 years, [ 16 ] 30 years [ 17 ] and, at an upper limit, 49 years. [ 18 ] The wide-ranging published biological half life figures are explained by strontium's complex metabolism within the body. 
However, by averaging all excretion paths, the overall biological half life is estimated to be about 18 years. [ 19 ] The elimination rate of strontium-90 is strongly affected by age and sex, due to differences in bone metabolism . [ 20 ] Together with the caesium isotopes 134 Cs and 137 Cs , and the iodine isotope 131 I , it was among the most important isotopes regarding health impacts after the Chernobyl disaster . As strontium has an affinity to the calcium-sensing receptor of parathyroid cells that is similar to that of calcium, the increased risk of liquidators of the Chernobyl power plant to suffer from primary hyperparathyroidism could be explained by binding of strontium-90. [ 21 ] The radioactive decay of strontium-90 generates a significant amount of heat, 0.95 W/g in the form of pure strontium metal or approximately 0.460 W/g as strontium titanate [ 22 ] and is cheaper than the alternative 238 Pu . It is used as a heat source in many Russian/Soviet radioisotope thermoelectric generators , usually in the form of strontium titanate. [ 23 ] It was also used in the US "Sentinel" series of RTGs. [ 24 ] Startup company Zeno Power is developing RTGs that use strontium-90 from the DOD , and is aiming to ship product by 2026. [ 25 ] 90 Sr finds use in industry as a radioactive source for thickness gauges. [ 3 ] 90 Sr finds extensive use in medicine as a radioactive source for superficial radiotherapy of some cancers. Controlled amounts of 90 Sr and 89 Sr can be used in treatment of bone cancer , and to treat coronary restenosis via vascular brachytherapy . It is also used as a radioactive tracer in medicine and agriculture. [ 3 ] 90 Sr is used as a blade inspection method in some helicopters with hollow blade spars to indicate if a crack has formed. [ 26 ] In April 1943, Enrico Fermi suggested to Robert Oppenheimer the possibility of using the radioactive byproducts from enrichment to contaminate the German food supply. The background was fear that the German atomic bomb project was already at an advanced stage, and Fermi was also skeptical at the time that an atomic bomb could be developed quickly enough. Oppenheimer discussed the proposal with Edward Teller , who suggested the use of strontium-90. James Bryant Conant and Leslie R. Groves were also briefed, but Oppenheimer wanted to proceed with the plan only if enough food could be contaminated with the weapon to kill half a million people. [ 27 ] Strontium-90 is not quite as likely as caesium-137 to be released as a part of a nuclear reactor accident because it is much less volatile, but is probably the most dangerous component of the radioactive fallout from a nuclear weapon. [ 28 ] A study of hundreds of thousands of deciduous teeth , collected by Dr. Louise Reiss and her colleagues as part of the Baby Tooth Survey , found a large increase in 90 Sr levels through the 1950s and early 1960s. The study's final results showed that children born in St. Louis, Missouri , in 1963 had levels of 90 Sr in their deciduous teeth that was 50 times higher than that found in children born in 1950, before the advent of large-scale atomic testing. Reviewers of the study predicted that the fallout would cause increased incidence of disease in those who absorbed strontium-90 into their bones. [ 29 ] However, no follow up studies of the subjects have been performed, so the claim is untested. An article with the study's initial findings was circulated to U.S. President John F. 
Kennedy in 1961, and helped convince him to sign the Partial Nuclear Test Ban Treaty with the United Kingdom and Soviet Union , ending the above-ground nuclear weapons testing that placed the greatest amounts of nuclear fallout into the atmosphere. [ 30 ] The Chernobyl disaster released roughly 10 PBq , or about 5% of the core inventory, of strontium-90 into the environment. [ 31 ] The Kyshtym disaster released strontium-90 and other radioactive material into the environment. It is estimated to have released 20 MCi (800 PBq) of radioactivity. The Fukushima Daiichi disaster had from the accident until 2013 released 0.1 to 1 PBq of strontium-90 in the form of contaminated cooling water into the Pacific Ocean . [ 32 ]
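A small sketch tying together the figures quoted above: the specific activity follows from the half-life and the molar mass as A = (ln 2 / t½)·(N_A / M), and the same decay constant gives the fraction of an initial release (for example the roughly 10 PBq attributed to Chernobyl above) remaining after a given number of years. The molar mass and the elapsed times are illustrative inputs; the computed specific activity comes out at about 5.1 TBq/g, in line with the 5.21 TBq/g cited above.

```python
import math

AVOGADRO = 6.02214076e23      # atoms per mole
MOLAR_MASS_SR90 = 89.9077     # g/mol (approximate)
HALF_LIFE_YEARS = 28.79
SECONDS_PER_YEAR = 365.25 * 24 * 3600

decay_constant = math.log(2) / (HALF_LIFE_YEARS * SECONDS_PER_YEAR)   # per second
specific_activity = decay_constant * AVOGADRO / MOLAR_MASS_SR90       # Bq per gram
print(f"specific activity ~ {specific_activity / 1e12:.2f} TBq/g")    # ~5.1 TBq/g

def remaining_fraction(years):
    """Fraction of an initial quantity of Sr-90 left after `years` years."""
    return 0.5 ** (years / HALF_LIFE_YEARS)

release_pbq = 10.0   # Chernobyl estimate quoted above (illustrative use)
for years in (10, 28.79, 38, 100):
    frac = remaining_fraction(years)
    print(f"{years:6.2f} y: {frac:.3f} of the release ({release_pbq * frac:.2f} PBq) remains")
```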
https://en.wikipedia.org/wiki/Strontium-90
Strontium aluminate is an aluminate compound with the chemical formula SrAl 2 O 4 (sometimes written as SrO·Al 2 O 3 ). It is a pale yellow, monoclinic crystalline powder that is odourless and non-flammable. When activated with a suitable dopant (e.g. europium , written as Eu:SrAl 2 O 4 ), it acts as a photoluminescent phosphor with long persistence of phosphorescence . Strontium aluminates exist in a variety of other compositions including SrAl 4 O 7 (monoclinic), Sr 3 Al 2 O 6 ( cubic ), SrAl 12 O 19 ( hexagonal ), and Sr 4 Al 14 O 25 ( orthorhombic ). The different compositions cause different colours of light to be emitted. Phosphorescent materials were discovered in the 1700s , and people have been studying them and making improvements over the centuries. The development of strontium aluminate pigments in 1993 was spurred on by the need to find a substitute for glow-in-the-dark materials with high luminance and long phosphorescence, especially those that used promethium . This led to the discovery by Yasumitsu Aoki (Nemoto & Co.) of materials with luminance approximately 10 times greater than zinc sulfide and phosphorescence approximately 10 times longer, and 10 times more expensive. The invention was patented by Nemoto & Co., Ltd. in 1994 and licensed to other manufacturers and watch brands. [ 1 ] Strontium aluminates are now the longest lasting and brightest phosphorescent material commercially available. For many phosphorescence-based purposes, strontium aluminate is a superior phosphor to its predecessor, copper -activated zinc sulfide , being about 10 times brighter and 10 times longer glowing. [ citation needed ] It is frequently used in glow in the dark objects, where it replaces the cheaper but less efficient Cu:ZnS that many people recognize with nostalgia – this is what made 'glow in the dark stars' stickers glow. Advancements in understanding of phosphorescent mechanisms, as well as advancements in molecular imaging, have enabled the development of novel, state-of-the-art strontium aluminates. [ 2 ] Strontium aluminate phosphors produce green and aqua hues, where green gives the highest brightness and aqua the longest glow time. Different aluminates can be used as the host matrix. This influences the wavelength of emission of the europium ion, by its covalent interaction with surrounding oxygens, and crystal field splitting of the 5d orbital energy levels. [ 3 ] The excitation wavelengths for strontium aluminate range from 200 to 450 nm, and the emission wavelengths range from 420 to 520 nm. The wavelength for its green formulation is 520 nm, its aqua, or blue-green, version emits at 505 nm, and its blue emits at 490 nm. Strontium aluminate can be formulated to phosphoresce at longer (yellow to red) wavelengths as well, though such emission is often dimmer than that of more common phosphorescence at shorter wavelengths. For europium-dysprosium doped aluminates, the peak emission wavelengths are 520 nm for SrAl 2 O 4 , 480 nm for SrAl 4 O 7 , and 400 nm for SrAl 12 O 19 . [ 4 ] Eu 2+ ,Dy 3+ :SrAl 2 O 4 is important as a persistently luminescent phosphor for industrial applications. It can be produced by molten salt assisted process at 900 °C. [ 5 ] The most described type is the stoichiometric green-emitting (approx. 530 nm) Eu 2+ :SrAl 2 O 4 . Eu 2+ ,Dy 3+ ,B:SrAl 2 O 4 shows significantly longer afterglow than the europium-only doped material. The Eu 2+ dopant shows high afterglow, while Eu 3+ has almost none. 
Polycrystalline Mn:SrAl 12 O 19 is used as a green phosphor for plasma displays , and when doped with praseodymium or neodymium it can act as a good active laser medium . Sr 0.95 Ce 0.05 Mg 0.05 Al 11.95 O 19 is a phosphor emitting at 305 nm, with quantum efficiency of 70%. Several strontium aluminates can be prepared by the sol-gel process. [ 6 ] The wavelengths produced depend on the internal crystal structure of the material. Slight modifications in the manufacturing process (the type of reducing atmosphere, small variations of stoichiometry of the reagents, addition of carbon or rare-earth halides ) can significantly influence the emission wavelengths. Strontium aluminate phosphor is usually fired at about 1250 °C, though higher temperatures are possible. Subsequent exposure to temperatures above 1090 °C is likely to cause loss of its phosphorescent properties. At higher firing temperatures, the Sr 3 Al 2 O 6 undergoes transformation to SrAl 2 O 4 . [ 7 ] Cerium and manganese doped strontium aluminate (Ce,Mn:SrAl 12 O 19 ) shows intense narrowband (22 nm wide) phosphorescence at 515 nm when excited by ultraviolet radiation (253.7 nm mercury emission line, to lesser degree 365 nm). It can be used as a phosphor in fluorescent lamps in photocopiers and other devices. A small amount of silicon substituting the aluminium can increase emission intensity by about 5%; the preferred composition of the phosphor is Ce 0.15 Mn 0.15 :SrAl 11 Si 0.75 O 19 . [ 8 ] However, the material has high hardness, causing abrasion to the machinery used in processing it; manufacturers frequently coat the particles with a suitable lubricant when adding them to a plastic. Coating also prevents the phosphor from water degradation over time. The glow intensity depends on the particle size; generally, the bigger the particles, the better the glow. Strontium aluminate is insoluble in water and has an approximate pH of 8 (very slightly basic). Strontium aluminate cement can be used as refractory structural material. It can be prepared by sintering of a blend of strontium oxide or strontium carbonate with alumina in a roughly equimolar ratio at about 1500 °C. It can be used as a cement for refractory concrete for temperatures up to 2000 °C as well as for radiation shielding . The use of strontium aluminate cements is limited by the availability of the raw materials. [ 9 ] Strontium aluminates have been examined as proposed materials for immobilization of fission products of radioactive waste , namely strontium-90 . [ 10 ] Europium-doped strontium aluminate nanoparticles are proposed as indicators of stress and cracks in materials, as they emit light when subjected to mechanical stress ( mechanoluminescence ). They are also useful for fabricating mechano-optical nanodevices. Non-agglomerated particles are needed for this purpose; they are difficult to prepare conventionally but can be made by ultrasonic spray pyrolysis of a mixture of strontium acetylacetonate , aluminium acetylacetonate and europium acetylacetonate in reducing atmosphere (argon with 5% of hydrogen). [ 11 ] Strontium aluminate based afterglow pigments are marketed under numerous brand names such as Core Glow, Super-LumiNova [ 12 ] and Lumibrite , developed by Seiko . Many companies additionally sell products that contain a mix of strontium aluminate particles and a 'host material'. Due to the nearly endless ability to recharge, strontium aluminate products cross many industries. 
Some of the most popular uses are in street and path lighting, such as glow-in-the-dark bike paths. [ 13 ] Companies offer an industrial marble aggregate mixed with the strontium aluminate for ease of use within standard construction processes. The glowing marble aggregates are often pressed into the cement or asphalt during the final stages of construction . Reusable and non-toxic glow stick alternatives are now being developed using strontium aluminate particles. Cubic strontium aluminate can be used as a water-soluble sacrificial layer for the production of free-standing films of complex oxide materials. [ 14 ] [ 15 ] Strontium aluminates are considered non-toxic, and are biologically and chemically inert. [ 16 ] Care should be taken when handling loose powder, which can cause irritation if it is inhaled or comes into contact with mucous membranes. [ 16 ]
https://en.wikipedia.org/wiki/Strontium_aluminate
Strontium azide is an inorganic chemical compound with the formula Sr(N 3 ) 2 . It is composed of the strontium cation ( Sr 2+ ) and azide anions ( N 3 − ). [ 1 ] Strontium azide crystallizes in an orthorhombic Fddd space group . [ 2 ] Unlike the azides of the alkali metals, which have a linear azide ion geometry, strontium azide possesses bent azide ions, which bend further under higher pressure. [ 3 ] This inorganic compound –related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Strontium_azide
Strontium bromate is a chemical rarely encountered in the laboratory or in industry. It is, however, mentioned in the book Uncle Tungsten: Memories of a Chemical Boyhood by Oliver Sacks . There it is said that this salt glows when crystallized from a saturated aqueous solution . [ 1 ] Chemically, this salt is soluble in water and is a moderately strong oxidizing agent . [ 2 ] Strontium bromate is toxic if ingested, and it irritates the skin on contact and the respiratory tract if inhaled. Its chemical formula is Sr(BrO 3 ) 2 . This inorganic compound –related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Strontium_bromate
Strontium nitrate is an inorganic compound composed of the elements strontium , nitrogen and oxygen with the formula Sr ( NO 3 ) 2 . This colorless solid is used as a red colorant and oxidizer in pyrotechnics . Strontium nitrate is typically generated by the reaction of nitric acid on strontium carbonate . [ 2 ] Like many other strontium salts, strontium nitrate is used to produce a rich red flame in fireworks and road flares . The oxidizing properties of this salt are advantageous in such applications. [ 3 ] Strontium nitrate can aid in eliminating and lessening skin irritations. When mixed with glycolic acid , strontium nitrate reduces the sensation of skin irritation significantly better than using glycolic acid alone. [ 4 ] As divalent ions with an ionic radius similar to that of Ca 2+ (1.13 Å and 0.99 Å respectively), Sr 2+ ions resemble calcium in their ability to traverse calcium-selective ion channels and trigger neurotransmitter release from nerve endings. It is thus used in electrophysiology experiments. In his short story " A Germ-Destroyer ", Rudyard Kipling refers to strontium nitrate as the main ingredient of the titular fumigant .
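The carbonate route mentioned above is a standard acid-carbonate reaction; written out as a balanced equation (the stoichiometry follows from the formulas already given in the text):

{\displaystyle \mathrm{SrCO_{3} + 2\,HNO_{3} \longrightarrow Sr(NO_{3})_{2} + H_{2}O + CO_{2}}}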
https://en.wikipedia.org/wiki/Strontium_nitrate
Strontium oxalate is a compound with the chemical formula SrC 2 O 4 . Strontium oxalate can exist either in a hydrated form ( SrC 2 O 4 · n H 2 O ) or as the acidic salt of strontium oxalate ( SrC 2 O 4 · m H 2 C 2 O 4 · n H 2 O ). [ 2 ] Strontium oxalate is soluble in 20 000 parts of water; in 1 900 parts of 3.5% acetic acid, in 115 parts of the 23% acid, but less soluble in the 35% acid; readily soluble in diluted HCl or nitric acid. [ 3 ] With the addition of heat, strontium oxalate decomposes (an overall equation is given below). [ 4 ] Strontium oxalate is a good agent for use in pyrotechnics since it decomposes readily with the addition of heat. When it decomposes into strontium oxide , it produces a red flame color . Since this reaction produces carbon monoxide , which can undergo a further reduction with magnesium oxide , strontium oxalate is an excellent red flame color producing agent in the presence of magnesium . If it is not in the presence of magnesium, strontium carbonate has been found to be a better option to produce an even greater effect.
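The decomposition referred to above, written as an overall balanced equation consistent with the products named in the text (strontium oxide and carbon monoxide, with carbon dioxide required for the balance); this single-step form is a simplification, and the reaction can also be viewed as proceeding through an intermediate strontium carbonate stage:

{\displaystyle \mathrm{SrC_{2}O_{4} \longrightarrow SrO + CO_{2} + CO}}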
https://en.wikipedia.org/wiki/Strontium_oxalate
Strontium peroxide is an inorganic compound with the formula Sr O 2 that exists in both anhydrous and octahydrate form, both of which are white solids. The anhydrous form adopts a structure similar to that of calcium carbide . [ 4 ] [ 5 ] It is an oxidizing agent used for bleaching . It is used in some pyrotechnic compositions as an oxidizer and a vivid red pyrotechnic colorant . It can also be used as an antiseptic and in tracer munitions. [ citation needed ] Strontium peroxide is produced by passing oxygen over heated strontium oxide . Upon heating in the absence of O 2 , it degrades to SrO and O 2 . It is more thermally labile than BaO 2 . [ 6 ] [ 7 ]
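The formation and thermal decomposition described above, written as balanced equations (the stoichiometry is the standard one implied by the formulas in the text):

{\displaystyle \mathrm{2\,SrO + O_{2} \rightarrow 2\,SrO_{2}}} (formation by passing oxygen over heated strontium oxide)

{\displaystyle \mathrm{2\,SrO_{2} \rightarrow 2\,SrO + O_{2}}} (degradation on heating in the absence of O 2 )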
https://en.wikipedia.org/wiki/Strontium_peroxide
In dimensional analysis , the Strouhal number ( St , or sometimes Sr to avoid the conflict with the Stanton number ) is a dimensionless number describing oscillating flow mechanisms. The parameter is named after Vincenc Strouhal , a Czech physicist who experimented in 1878 with wires experiencing vortex shedding and singing in the wind. [ 1 ] [ 2 ] The Strouhal number is an integral part of the fundamentals of fluid mechanics . The Strouhal number is often given as where f is the frequency of vortex shedding in Hertz , [ 3 ] L is the characteristic length (for example, hydraulic diameter or the airfoil thickness ) and U is the flow velocity . In certain cases, like heaving (plunging) flight, this characteristic length is the amplitude of oscillation. This selection of characteristic length can be used to present a distinction between Strouhal number and reduced frequency: where k is the reduced frequency , and A is amplitude of the heaving oscillation. For large Strouhal numbers (order of 1), viscosity dominates fluid flow, resulting in a collective oscillating movement of the fluid "plug". For low Strouhal numbers (order of 10 −4 and below), the high-speed, quasi-steady-state portion of the movement dominates the oscillation. Oscillation at intermediate Strouhal numbers is characterized by the buildup and rapidly subsequent shedding of vortices. [ 5 ] For spheres in uniform flow in the Reynolds number range of 8×10 2 < Re < 2×10 5 there co-exist two values of the Strouhal number. The lower frequency is attributed to the large-scale instability of the wake, is independent of the Reynolds number Re and is approximately equal to 0.2. The higher-frequency Strouhal number is caused by small-scale instabilities from the separation of the shear layer. [ 6 ] [ 7 ] Knowing Newton's second law stating force is equivalent to mass times acceleration, or F = m a {\displaystyle F=ma} , and that acceleration is the derivative of velocity, or U t {\displaystyle {\tfrac {U}{t}}} (characteristic speed/time) in the case of fluid mechanics, we see Since characteristic speed can be represented as length per unit time, L t {\displaystyle {\tfrac {L}{t}}} , we get where, Dividing both sides by m U 2 L {\displaystyle {\tfrac {mU^{2}}{L}}} , we get where, This provides a dimensionless basis for a relationship between mass, characteristic speed, net external forces, and length (size) which can be used to analyze the effects of fluid mechanics on a body with mass. If the net external forces are predominantly elastic, we can use Hooke's law to see where, Assuming Δ L ∝ L {\displaystyle \Delta L\propto L} , then F ≈ k L {\displaystyle F\approx kL} . With the natural resonant frequency of the elastic system, ω 0 2 {\displaystyle \omega _{0}^{2}} , being equal to k m {\displaystyle {\tfrac {k}{m}}} , we get where, Given that cyclic motion frequency can be represented by f = ω 0 2 L U {\displaystyle f={\tfrac {\omega _{0}^{2}L}{U}}} we get, where, In the field of micro and nanorobotics, the Strouhal number is used alongside the Reynolds number in analyzing the impact of an external oscillatory fluidic flow on the body of a microrobot. When considering a microrobot with cyclic motion, the Strouhal number can be evaluated as where, The analysis of a microrobot using the Strouhal number allows one to assess the impact that the motion of the fluid it is in has on its motion in relation to the inertial forces acting on the robot–regardless of the dominant forces being elastic or not. 
[ 8 ] In the medical field, microrobots that use swimming motions to move may make micromanipulations in unreachable environments. The equation used for a blood vessel: [ 9 ] where, The Strouhal number is used as a ratio of the Deborah number (De) and Weissenberg number (Wi): [ 9 ] The Strouhal number may also be used to obtain the Womersley number (Wo). The case for blood flow can be categorized as an unsteady viscoelastic flow, therefore the Womersley number is [ 9 ] Or considering both equations, In metrology , specifically axial-flow turbine meters , the Strouhal number is used in combination with the Roshko number to give a correlation between flow rate and frequency. The advantage of this method over the frequency/viscosity versus K-factor method is that it takes into account temperature effects on the meter. where, This relationship leaves Strouhal dimensionless, although a dimensionless approximation is often used for C 3 , resulting in units of pulses/volume (same as K-factor). This relationship between flow and frequency can also be found in the aeronautical field. Considering pulsating methane-air coflow jet diffusion flames, we get where, For a small Strouhal number (St=0.1) the modulation forms a deviation in the flow that travels very far downstream. As the Strouhal number grows, the non-dimensional frequency approaches the natural frequency of a flickering flame, and eventually will have greater pulsation than the flame. [ 10 ] In swimming or flying animals, Strouhal number is defined as where, In animal flight or swimming, propulsive efficiency is high over a narrow range of Strouhal constants, generally peaking in the 0.2 < St < 0.4 range. [ 11 ] This range is used in the swimming of dolphins, sharks, and bony fish, and in the cruising flight of birds, bats and insects. [ 11 ] However, in other forms of flight other values are found. [ 11 ] Intuitively the ratio measures the steepness of the strokes, viewed from the side (e.g., assuming movement through a stationary fluid) – f is the stroke frequency, A is the amplitude, so the numerator fA is half the vertical speed of the wing tip, while the denominator V is the horizontal speed. Thus the graph of the wing tip forms an approximate sinusoid with aspect (maximal slope) twice the Strouhal constant. [ 12 ] The Strouhal number is most commonly used for assessing oscillating flow as a result of an object's motion through a fluid. The Strouhal number reflects the difficulty for animals to travel efficiently through a fluid with their cyclic propelling motions. The number relates to propulsive efficiency, which peaks between 70%–80% when within the optimal Strouhal number range of 0.2 to 0.4 . Through the use of factors such as the stroke frequency, the amplitude of each stroke, and velocity, the Strouhal number is able to analyze the efficiency and impact of an animal's propulsive forces through a fluid, such as those from swimming or flying. For instance, the value represents the constraints to achieve greater propulsive efficiency, which affects motion when cruising and aerodynamic forces when hovering. [ 13 ] Greater reactive forces and properties that act against the object, such as viscosity and density, reduce the ability of an animal's motion to fall within the ideal Strouhal number range when swimming. Through the assessment of different species that fly or swim, it was found that the motion of many species of birds and fish falls within the optimal Strouhal range. 
[ 13 ] However, the Strouhal number varies more within a species than between species, depending on how the animal's movement is constrained in response to aerodynamic forces. [ 13 ] The Strouhal number has significant importance in analyzing the flight of animals since it is based on the streamlines and the animal's velocity as it travels through the fluid. Its significance is demonstrated by the motion of alcids as they pass through different media (air and water). The assessment of alcids revealed the peculiarity that they are able to fly within the efficient Strouhal number range in both air and water despite a high mass relative to their wing area. [ 14 ] The alcid's efficient dual-medium motion developed through natural selection, with the environment shaping the evolution of the animals over time so that their motion falls within an efficient range. The dual-medium motion shows that alcids use two different flight patterns, with different stroke velocities, as they move through each fluid. [ 14 ] However, as the bird travels through a different medium, it has to face the influence of that fluid's density and viscosity. Furthermore, the alcid also has to resist the upward-acting buoyancy as it moves horizontally.

In order to determine the significance of the Strouhal number at varying scales, one may perform scale analysis – a simplification method for analyzing how the impact of various factors changes with respect to some scale. When considered in the context of microrobotics and nanorobotics, size is the factor of interest when performing scale analysis. Scale analysis of the Strouhal number allows the relationship between mass and inertial forces to be examined as both change with size. Taking its original underived form, mU²/(FL), each term can be related to size to see how the ratio changes as size changes. Given m = Vρ, where m is mass, V is volume, and ρ is density, mass is directly related to size because volume scales with length (L). Taking the volume to be L³, mass and size are directly related as m ≈ L³. Characteristic speed (U) is in terms of distance per time, and the relative distance scales with size, therefore U ≈ L. The net external force (F) scales with mass and acceleration, given by F = m·a. Acceleration is in terms of distance per time squared, therefore a ≈ L. The mass–size relationship was established to be m ≈ L³, so considering all three relationships, F ≈ L³·L = L⁴. Length (L) already denotes size and remains L. Taking all of this together, mU²/(FL) ≈ (L³·L²)/(L⁴·L) = L⁰, so the ratio is independent of size. With the Strouhal number relating the mass to inertial forces, this is to be expected, as these two factors scale proportionately with size and neither increases nor decreases in significance with respect to its contribution to the body's behavior in the cyclic motion of the fluid. The scaling relationship between the Richardson number and the Strouhal number is represented by an equation with constants a and b that depend on the condition. [ 15 ]
For round helium buoyant jets and plumes, separate scaling relations apply when Ri < 100 and when 100 < Ri < 500; further relations hold for planar buoyant jets and plumes and for shape-independent scaling. [ 15 ] The Strouhal number and Reynolds number must be considered when addressing the ideal way to design a body intended to move through a fluid. Furthermore, the relationship between these values is expressed through Lighthill's elongated-body theory, which relates the reactive forces experienced by a body moving through a fluid to its inertial forces. [ 16 ] The Strouhal number was determined to depend upon the dimensionless Lighthill number, which in turn relates to the Reynolds number. The value of the Strouhal number can then be seen to decrease with an increasing Reynolds number, and to increase with an increasing Lighthill number. [ 16 ]
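As a concrete illustration of the definitions used throughout this article, the following is a minimal Python sketch (not part of the original text) that evaluates St = fL/U, inverts it to estimate a shedding frequency using the large-scale wake value of roughly 0.2 quoted above, and checks a swimming-or-flying Strouhal number fA/U against the 0.2–0.4 band in which propulsive efficiency is reported to peak. All numerical inputs are hypothetical values chosen only for the example.

```python
def strouhal(frequency_hz, length_m, velocity_m_s):
    """St = f * L / U; for swimming or flying animals the characteristic
    length L is replaced by the stroke amplitude A."""
    return frequency_hz * length_m / velocity_m_s


def shedding_frequency(st, length_m, velocity_m_s):
    """Invert the definition: f = St * U / L."""
    return st * velocity_m_s / length_m


if __name__ == "__main__":
    # Hypothetical bluff body, 0.05 m across in a 10 m/s stream, using the
    # large-scale wake value St ~ 0.2 quoted in the article.
    print(f"shedding frequency ~ {shedding_frequency(0.2, 0.05, 10.0):.0f} Hz")  # ~40 Hz

    # Hypothetical cruising bird: 5 Hz wingbeat, 0.5 m stroke amplitude, 9 m/s.
    st_bird = strouhal(5.0, 0.5, 9.0)
    print(f"animal St = {st_bird:.2f}, within 0.2-0.4: {0.2 <= st_bird <= 0.4}")  # 0.28, True
```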
https://en.wikipedia.org/wiki/Strouhal_number
The Structural Classification of Proteins (SCOP) database is a largely manual classification of protein structural domains based on similarities of their structures and amino acid sequences . A motivation for this classification is to determine the evolutionary relationship between proteins. Proteins with the same shapes but having little sequence or functional similarity are placed in different superfamilies , and are assumed to have only a very distant common ancestor. Proteins having the same shape and some similarity of sequence and/or function are placed in "families", and are assumed to have a closer common ancestor. Similar to CATH and Pfam databases, SCOP provides a classification of individual structural domains of proteins, rather than a classification of the entire proteins which may include a significant number of different domains. The SCOP database is freely accessible on the internet. SCOP was created in 1994 in the Centre for Protein Engineering and the Laboratory of Molecular Biology . [ 3 ] It was maintained by Alexey G. Murzin and his colleagues in the Centre for Protein Engineering until its closure in 2010 and subsequently at the Laboratory of Molecular Biology in Cambridge, England. [ 4 ] [ 5 ] [ 6 ] [ 1 ] The work on SCOP 1.75 has been discontinued in 2014. Since then SCOPe team from UC Berkeley has been responsible for updating the database in a compatible manner, with a combination of automated and manual methods. As of April 2019 [update] , the latest release is SCOPe 2.07 (March 2018). [ 2 ] The new Structural Classification of Proteins version 2 (SCOP2) database was released at the beginning of 2020. The new update featured an improved database schema, a new API and modernised web interface. This was the most significant update by the Cambridge group since SCOP 1.75 and builds on the advances in schema from the SCOP 2 prototype. [ 7 ] The source of protein structures is the Protein Data Bank . The unit of classification of structure in SCOP is the protein domain . What the SCOP authors mean by "domain" is suggested by their statement that small proteins and most medium-sized ones have just one domain, [ 8 ] and by the observation that human hemoglobin, [ 9 ] which has an α 2 β 2 structure, is assigned two SCOP domains, one for the α and one for the β subunit. The shapes of domains are called "folds" in SCOP. Domains belonging to the same fold have the same major secondary structures in the same arrangement with the same topological connections. 1195 folds are given in SCOP version 1.75. Short descriptions of each fold are given. For example, the "globin-like" fold is described as core: 6 helices; folded leaf, partly opened . The fold to which a domain belongs is determined by inspection, rather than by software. The levels of SCOP version 1.75 are as follows. The broadest groups on SCOP version 1.75 are the protein fold classes . These classes group structures with similar secondary structure composition, but different overall tertiary structures and evolutionarily origins. This is the top level "root" of the SCOP hierarchical classification. The number in brackets, called a "sunid", is a S COP un ique integer id entifier for each node in the SCOP hierarchy. The number in parentheses indicates how many elements are in each category. For example, there are 284 folds in the "All alpha proteins" class. Each member of the hierarchy is a link to the next level of the hierarchy. Each class contains a number of distinct folds. 
This classification level indicates similar tertiary structure, but not necessarily evolutionary relatedness. For example, the "All-α proteins" class contains >280 distinct folds, including: Globin-like (core: 6 helices; folded leaf, partly opened), long alpha-hairpin (2 helices; antiparallel hairpin, left-handed twist) and Type I dockerin domains (tandem repeat of two calcium-binding loop-helix motifs, distinct from the EF-hand). Domains within a fold are further classified into superfamilies. This is the largest grouping of proteins for which structural similarity is sufficient to indicate evolutionary relatedness, and which therefore share a common ancestor. However, this ancestor is presumed to be distant, because the different members of a superfamily have low sequence identities. For example, the two superfamilies of the "Globin-like" fold are the Globin superfamily and the alpha-helical ferredoxin superfamily (which contains two Fe4-S4 clusters). Protein families are more closely related than superfamilies. Domains are placed in the same family if they have either: The similarity in sequence and structure is evidence that these proteins have a closer evolutionary relationship than do proteins in the same superfamily. Sequence tools, such as BLAST, are used to assist in placing domains into superfamilies and families. For example, the four families in the "globin-like" superfamily of the "globin-like" fold are truncated hemoglobins (lacking the first helix), nerve tissue mini-hemoglobins (lacking the first helix but otherwise more similar to conventional globins than the truncated ones), globins (heme-binding proteins), and phycocyanin-like phycobilisome proteins (oligomers of two different types of globin-like subunits containing two extra helices at the N-terminus; these bind a bilin chromophore). Families in SCOP are each assigned a concise classification string, sccs, where the letter identifies the class to which the domain belongs; the following integers identify the fold, superfamily, and family, respectively (e.g., a.1.1.2 for the "Globin" family). [ 10 ] A "TaxId" is the taxonomy ID number and links to the NCBI taxonomy browser, which provides more information about the species to which the protein belongs. Clicking on a species or isoform brings up a list of domains. For example, the "Hemoglobin, alpha-chain from Human (Homo sapiens)" protein has >190 solved protein structures, such as 2dn3 (complexed with cmo), and 2dn1 (complexed with hem, mbn, oxy). Clicking on the PDB numbers is supposed to display the structure of the molecule, but the links are currently broken (links work in pre-SCOP). Most pages in SCOP contain a search box. Entering "trypsin +human" retrieves several proteins, including the protein trypsinogen from humans. Selecting that entry displays a page that includes the "lineage", which is at the top of most SCOP pages. Searching for "Subtilisin" returns the protein "Subtilisin from Bacillus subtilis, carlsberg", with the following lineage. Although both of these proteins are proteases, they do not even belong to the same fold, which is consistent with them being an example of convergent evolution. SCOP classification is more dependent on manual decisions than the semi-automatic classification by CATH, its chief rival. Human expertise is used to decide whether certain proteins are evolutionarily related and therefore should be assigned to the same superfamily, or whether their similarity is a result of structural constraints and they therefore belong to the same fold.
Another database, FSSP, is purely automatically generated (including regular automatic updates) but offers no classification, allowing the user to draw their own conclusion as to the significance of structural relationships based on the pairwise comparisons of individual protein structures. By 2009, the original SCOP database manually classified 38,000 PDB entries into a strictly hierarchical structure. With the accelerating pace of protein structure publications, the limited automation of classification could not keep up, leading to a non-comprehensive dataset. The Structural Classification of Proteins extended (SCOPe) database was released in 2012 with far greater automation of the same hierarchical system and is fully backward-compatible with SCOP version 1.75. In 2014, manual curation was reintroduced into SCOPe to maintain accurate structure assignment. As of February 2015, SCOPe 2.05 classified 71,000 of the 110,000 total PDB entries. [ 11 ] The SCOP2 prototype was a beta version of the Structural Classification of Proteins classification system that aimed to better reflect the evolutionary complexity inherent in protein structure evolution. [ 12 ] It is therefore not a simple hierarchy, but a directed acyclic graph network connecting protein superfamilies, representing structural and evolutionary relationships such as circular permutations, domain fusion and domain decay. Consequently, domains are not separated by strict fixed boundaries, but rather are defined by their relationships to the most similar other structures. The prototype was used for the development of the SCOP version 2 database. [ 7 ] SCOP version 2, released in January 2020, contains 5134 families and 2485 superfamilies, compared to 3902 families and 1962 superfamilies in SCOP 1.75. The classification levels organise more than 41,000 non-redundant domains that represent more than 504,000 protein structures. The Evolutionary Classification of Protein Domains (ECOD) database, released in 2014, is an expansion of SCOP version 1.75 similar in spirit to SCOPe. Unlike the compatible SCOPe, it renames the class-fold-superfamily-family hierarchy into an architecture-X-homology-topology-family (A-XHTF) grouping, with the last level mostly defined by Pfam and supplemented by HHsearch clustering for uncategorized sequences. [ 13 ] ECOD has the best PDB coverage of all three successors: it covers every PDB structure, and is updated biweekly. [ 14 ] The direct mapping to Pfam has proven useful to Pfam curators, who use the homology-level category to supplement their "clan" grouping. [ 15 ]
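As a small practical aside (not part of the SCOP documentation itself), the concise classification string described above — one class letter followed by integers for fold, superfamily and family, e.g. a.1.1.2 for the Globin family — can be split programmatically. The sketch below assumes only that four-part format.

```python
import re

# sccs strings look like "a.1.1.2": one lower-case class letter, then
# integers identifying the fold, superfamily and family.
SCCS_PATTERN = re.compile(r"^([a-z])\.(\d+)\.(\d+)\.(\d+)$")


def parse_sccs(sccs: str) -> dict:
    """Split a SCOP concise classification string into its four levels."""
    match = SCCS_PATTERN.match(sccs.strip())
    if match is None:
        raise ValueError(f"not a valid sccs string: {sccs!r}")
    cls, fold, superfamily, family = match.groups()
    return {"class": cls, "fold": int(fold),
            "superfamily": int(superfamily), "family": int(family)}


print(parse_sccs("a.1.1.2"))
# {'class': 'a', 'fold': 1, 'superfamily': 1, 'family': 2}
```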
https://en.wikipedia.org/wiki/Structural_Classification_of_Proteins_database
CSIR-Structural Engineering Research Centre ( CSIR-SERC ), Chennai is one of the 38 constituent laboratories of the Council of Scientific and Industrial Research in India. The institute is a certified ISO:9001 quality institute. [ 1 ] CSIR-SERC is involved in research and development in the field of designing, construction and rehabilitation of structures. The institute provides services including design consultancy and proof checking to various public and private sector organizations. Specialized courses for practicing engineers are also provided by the institute. The institute has various laboratories, which are listed as follows.
https://en.wikipedia.org/wiki/Structural_Engineering_Research_Centre
The Structural Engineering exam is a written examination given by state licensing boards in the United States as part of the testing for licensing structural engineers . This exam is written by the National Council of Examiners for Engineering and Surveying . It consists of 4 separate exams covering vertical and lateral forces, which are split into "breadth" and "depth" exams. The "depth" exams are offered twice per year while the "breadth" exams are offered year round. All four exams are fully digital. [ 1 ] This engineering-related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Structural_Engineering_exam
The Structural Engineers Association of Northern California ( SEAONC ) is a structural engineering association established in 1930. Its headquarters are in San Francisco, California . Initially a club for structural engineers to exchange technical information, it evolved into a professional organization advising on the development of building code requirements and California legislation related to earthquake hazard reduction such as the Field Act and Alquist Priolo Special Studies Zone Act . [ 1 ] SEAONC is the northern California section of the statewide Structural Engineers Association of California (SEAOC). SEAOC's Recommended Lateral Force Requirements, a.k.a. "Blue Book", first published in 1959, has since influenced the development of seismic analysis and design provisions in building codes nationwide. [ 2 ]
https://en.wikipedia.org/wiki/Structural_Engineers_Association_of_Northern_California
In mathematics, structural Ramsey theory is a categorical generalisation of Ramsey theory , rooted in the idea that many important results of Ramsey theory have "similar" logical structures. The key observation is noting that these Ramsey-type theorems can be expressed as the assertion that a certain category (or class of finite structures) has the Ramsey property (defined below). Structural Ramsey theory began in the 1970s [ 1 ] with the work of Nešetřil and Rödl , and is intimately connected to Fraïssé theory . It received some renewed interest in the mid-2000s due to the discovery of the Kechris–Pestov–Todorčević correspondence , which connected structural Ramsey theory to topological dynamics . Leeb [ de ] is given credit [ 2 ] for inventing the idea of a Ramsey property in the early 70s. The first publication of this idea appears to be Graham , Leeb and Rothschild 's 1972 paper on the subject. [ 3 ] Key development of these ideas was done by Nešetřil and Rödl in their series of 1977 [ 4 ] and 1983 [ 5 ] papers, including the famous Nešetřil–Rödl theorem. This result was reproved independently by Abramson and Harrington , [ 6 ] and further generalised by Prömel [ de ] . [ 7 ] More recently, Mašulović [ 8 ] [ 9 ] [ 10 ] and Solecki [ 11 ] [ 12 ] [ 13 ] have done some pioneering work in the field. This article will use the set theory convention that each natural number n ∈ N {\displaystyle n\in \mathbb {N} } can be considered as the set of all natural numbers less than it: i.e. n = { 0 , 1 , … , n − 1 } {\displaystyle n=\{0,1,\ldots ,n-1\}} . For any set A {\displaystyle A} , an r {\displaystyle r} -colouring of A {\displaystyle A} is an assignment of one of r {\displaystyle r} labels to each element of A {\displaystyle A} . This can be represented as a function Δ : A → r {\displaystyle \Delta :A\to r} mapping each element to its label in r = { 0 , 1 , … , r − 1 } {\displaystyle r=\{0,1,\ldots ,r-1\}} (which this article will use), or equivalently as a partition of A = A 0 ⊔ ⋯ ⊔ A r − 1 {\displaystyle A=A_{0}\sqcup \cdots \sqcup A_{r-1}} into r {\displaystyle r} pieces. Here are some of the classic results of Ramsey theory: These "Ramsey-type" theorems all have a similar idea: we fix two integers k {\displaystyle k} and m {\displaystyle m} , and a set of colours r {\displaystyle r} . Then, we want to show there is some n {\displaystyle n} large enough, such that for every r {\displaystyle r} -colouring of the "substructures" of size k {\displaystyle k} inside n {\displaystyle n} , we can find a suitable "structure" A {\displaystyle A} inside n {\displaystyle n} , of size m {\displaystyle m} , such that all the "substructures" B {\displaystyle B} of A {\displaystyle A} with size k {\displaystyle k} have the same colour. What types of structures are allowed depends on the theorem in question, and this turns out to be virtually the only difference between them. This idea of a "Ramsey-type theorem" leads itself to the more precise notion of the Ramsey property (below). Let C {\displaystyle \mathbf {C} } be a category . 
C {\displaystyle \mathbf {C} } has the Ramsey property if for every natural number r {\displaystyle r} , and all objects A , B {\displaystyle A,B} in C {\displaystyle \mathbf {C} } , there exists another object D {\displaystyle D} in C {\displaystyle \mathbf {C} } , such that for every r {\displaystyle r} -colouring Δ : Hom ⁡ ( A , D ) → r {\displaystyle \Delta :\operatorname {Hom} (A,D)\to r} , there exists a morphism f : B → D {\displaystyle f:B\to D} which is Δ {\displaystyle \Delta } -monochromatic, i.e. the set is Δ {\displaystyle \Delta } -monochromatic. [ 10 ] Often, C {\displaystyle \mathbf {C} } is taken to be a class of finite L {\displaystyle {\mathcal {L}}} -structures over some fixed language L {\displaystyle {\mathcal {L}}} , with embeddings as morphisms. In this case, instead of colouring morphisms, one can think of colouring "copies" of A {\displaystyle A} in D {\displaystyle D} , and then finding a copy of B {\displaystyle B} in D {\displaystyle D} , such that all copies of A {\displaystyle A} in this copy of B {\displaystyle B} are monochromatic. This may lend itself more intuitively to the earlier idea of a "Ramsey-type theorem". There is also a notion of a dual Ramsey property; C {\displaystyle \mathbf {C} } has the dual Ramsey property if its dual category C o p {\displaystyle \mathbf {C} ^{\mathrm {op} }} has the Ramsey property as above. More concretely, C {\displaystyle \mathbf {C} } has the dual Ramsey property if for every natural number r {\displaystyle r} , and all objects A , B {\displaystyle A,B} in C {\displaystyle \mathbf {C} } , there exists another object D {\displaystyle D} in C {\displaystyle \mathbf {C} } , such that for every r {\displaystyle r} -colouring Δ : Hom ⁡ ( D , A ) → r {\displaystyle \Delta :\operatorname {Hom} (D,A)\to r} , there exists a morphism f : D → B {\displaystyle f:D\to B} for which Hom ⁡ ( B , A ) ∘ f {\displaystyle \operatorname {Hom} (B,A)\circ f} is Δ {\displaystyle \Delta } -monochromatic. In 2005, Kechris , Pestov and Todorčević [ 14 ] discovered the following correspondence (hereafter called the KPT correspondence ) between structural Ramsey theory, Fraïssé theory, and ideas from topological dynamics. Let G {\displaystyle G} be a topological group . For a topological space X {\displaystyle X} , a G {\displaystyle G} -flow (denoted G ↷ X {\displaystyle G\curvearrowright X} ) is a continuous action of G {\displaystyle G} on X {\displaystyle X} . We say that G {\displaystyle G} is extremely amenable if any G {\displaystyle G} -flow G ↷ X {\displaystyle G\curvearrowright X} on a compact space X {\displaystyle X} admits a fixed point x ∈ X {\displaystyle x\in X} , i.e. the stabiliser of x {\displaystyle x} is G {\displaystyle G} itself. For a Fraïssé structure F {\displaystyle \mathbf {F} } , its automorphism group Aut ⁡ ( F ) {\displaystyle \operatorname {Aut} (\mathbf {F} )} can be considered a topological group, given the topology of pointwise convergence , or equivalently, the subspace topology induced on Aut ⁡ ( F ) {\displaystyle \operatorname {Aut} (\mathbf {F} )} by the space F F = { f : F → F } {\displaystyle \mathbf {F} ^{\mathbf {F} }=\{f:\mathbf {F} \to \mathbf {F} \}} with the product topology . The following theorem illustrates the KPT correspondence: Theorem (KPT). For a Fraïssé structure F {\displaystyle \mathbf {F} } , the following are equivalent:
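Returning to the concrete finite-colouring formulation given earlier in this article (colour the k-element subsets of an n-element set with r colours, then look for an m-element subset all of whose k-subsets share one colour), the following brute-force Python sketch checks that property exhaustively for very small parameters. It illustrates only the combinatorial definition, not the categorical machinery, and it uses the standard fact that the graph Ramsey number R(3,3) equals 6; the function names are mine, not the article's.

```python
from itertools import combinations, product


def has_monochromatic_subset(n, k, m, colouring):
    """True if some m-element subset of {0,...,n-1} has all of its
    k-element subsets assigned the same colour by `colouring`."""
    for subset in combinations(range(n), m):
        colours = {colouring[frozenset(b)] for b in combinations(subset, k)}
        if len(colours) == 1:
            return True
    return False


def every_colouring_is_ramsey(n, k, m, r):
    """Exhaustively verify that every r-colouring of the k-subsets of an
    n-element set admits a monochromatic m-subset (tiny parameters only)."""
    k_subsets = [frozenset(b) for b in combinations(range(n), k)]
    for assignment in product(range(r), repeat=len(k_subsets)):
        colouring = dict(zip(k_subsets, assignment))
        if not has_monochromatic_subset(n, k, m, colouring):
            return False
    return True


# Graph Ramsey number R(3,3) = 6: 2-colour the edges (k = 2) and look for a
# monochromatic triangle (m = 3).  Six vertices always suffice; five do not.
print(every_colouring_is_ramsey(6, 2, 3, 2))  # True
print(every_colouring_is_ramsey(5, 2, 3, 2))  # False
```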
https://en.wikipedia.org/wiki/Structural_Ramsey_theory
A structural analog , also known as a chemical analog or simply an analog , is a compound having a structure similar to that of another compound, but differing from it in respect to a certain component. [ 1 ] [ 2 ] [ 3 ] It can differ in one or more atoms , functional groups , or substructures, which are replaced with other atoms, groups, or substructures. A structural analog can be imagined to be formed, at least theoretically, from the other compound. Structural analogs are often isoelectronic . Despite a high chemical similarity, structural analogs are not necessarily functional analogs and can have very different physical, chemical, biochemical, or pharmacological properties. [ 4 ] In drug discovery , either a large series of structural analogs of an initial lead compound are created and tested as part of a structure–activity relationship study [ 5 ] or a database is screened for structural analogs of a lead compound . [ 6 ] Chemical analogues of illegal drugs are developed and sold in order to circumvent laws. Such substances are often called designer drugs . Because of this, the United States passed the Federal Analogue Act in 1986. This bill banned the production of any chemical analogue of a Schedule I or Schedule II substance that has substantially similar pharmacological effects, with the intent of human consumption. A neurotransmitter analog is a structural analogue of a neurotransmitter , typically a drug . Some examples include:
https://en.wikipedia.org/wiki/Structural_analog
Structural biology deals with structural analysis of living material (formed, composed of, and/or maintained and refined by living cells) at every level of organization. [ 1 ] Early structural biologists throughout the 19th and early 20th centuries were primarily only able to study structures to the limit of the naked eye's visual acuity and through magnifying glasses and light microscopes. In the 20th century, a variety of experimental techniques were developed to examine the 3D structures of biological molecules. The most prominent techniques are X-ray crystallography , nuclear magnetic resonance , and electron microscopy . Through the discovery of X-rays and its applications to protein crystals, structural biology was revolutionized, as now scientists could obtain the three-dimensional structures of biological molecules in atomic detail. [ 2 ] Likewise, NMR spectroscopy allowed information about protein structure and dynamics to be obtained. [ 3 ] Finally, in the 21st century, electron microscopy also saw a drastic revolution with the development of more coherent electron sources, aberration correction for electron microscopes, and reconstruction software that enabled the successful implementation of high resolution cryo-electron microscopy, thereby permitting the study of individual proteins and molecular complexes in three-dimensions at angstrom resolution. With the development of these three techniques, the field of structural biology expanded and also became a branch of molecular biology , biochemistry , and biophysics concerned with the molecular structure of biological macromolecules (especially proteins , made up of amino acids , RNA or DNA , made up of nucleotides , and membranes , made up of lipids ), how they acquire the structures they have, and how alterations in their structures affect their function. [ 4 ] This subject is of great interest to biologists because macromolecules carry out most of the functions of cells , and it is only by coiling into specific three-dimensional shapes that they are able to perform these functions. This architecture, the " tertiary structure " of molecules, depends in a complicated way on each molecule's basic composition, or " primary structure ." At lower resolutions, tools such as FIB-SEM tomography have allowed for greater understanding of cells and their organelles in 3-dimensions, and how each hierarchical level of various extracellular matrices contributes to function (for example in bone). In the past few years it has also become possible to predict highly accurate physical molecular models to complement the experimental study of biological structures. [ 5 ] Computational techniques such as molecular dynamics simulations can be used in conjunction with empirical structure determination strategies to extend and study protein structure, conformation and function. [ 6 ] In 1912 Max Von Laue directed X-rays at crystallized copper sulfate generating a diffraction pattern . [ 7 ] These experiments led to the development of X-ray crystallography , and its usage in exploring biological structures. [ 5 ] In 1951, Rosalind Franklin and Maurice Wilkins used X-ray diffraction patterns to capture the first image of deoxyribonucleic acid (DNA). Francis Crick and James Watson modeled the double helical structure of DNA using this same technique in 1953 and received the Nobel Prize in Medicine along with Wilkins in 1962. 
[ 8 ] Pepsin crystals were the first proteins to be crystallized for use in X-ray diffraction, by Theodore Svedberg, who received the 1926 Nobel Prize in Chemistry. [ 9 ] The first tertiary protein structure, that of myoglobin, was published in 1958 by John Kendrew. [ 10 ] During this time, modeling of protein structures was done using balsa wood or wire models. [ 11 ] With the invention of modeling software such as CCP4 in the late 1970s, [ 12 ] modeling is now done with computer assistance. Recent developments in the field have included the generation of X-ray free electron lasers, allowing analysis of the dynamics and motion of biological molecules, [ 13 ] and the use of structural biology in assisting synthetic biology. [ 14 ] In the late 1930s and early 1940s, the combination of work done by Isidor Rabi, Felix Bloch, and Edward Mills Purcell led to the development of nuclear magnetic resonance (NMR). Currently, solid-state NMR is widely used in the field of structural biology to determine the structure and dynamic nature of proteins (protein NMR). [ 15 ] In 1990, Richard Henderson produced the first three-dimensional, high resolution image of bacteriorhodopsin using cryogenic electron microscopy (cryo-EM). [ 16 ] Since then, cryo-EM has emerged as an increasingly popular technique to determine three-dimensional, high resolution structures of biological molecules. [ 17 ] More recently, computational methods have been developed to model and study biological structures. For example, molecular dynamics (MD) is commonly used to analyze the dynamic movements of biological molecules. In 1975, the first simulation of a biological folding process using MD was published in Nature. [ 18 ] Recently, protein structure prediction was significantly improved by a new machine learning method called AlphaFold. [ 19 ] Some claim that computational approaches are starting to lead the field of structural biology research. [ 20 ] Biomolecules are too small to see in detail even with the most advanced light microscopes. The methods that structural biologists use to determine their structures generally involve measurements on vast numbers of identical molecules at the same time. These methods include: Most often researchers use them to study the "native states" of macromolecules. But variations on these methods are also used to watch nascent or denatured molecules assume or reassume their native states. See protein folding. A third approach that structural biologists take to understanding structure is bioinformatics, looking for patterns among the diverse sequences that give rise to particular shapes. Researchers often can deduce aspects of the structure of integral membrane proteins based on the membrane topology predicted by hydrophobicity analysis. See protein structure prediction. Structural biologists have made significant contributions towards understanding the molecular components and mechanisms underlying human diseases. For example, cryo-EM and ssNMR have been used to study the aggregation of amyloid fibrils, which are associated with Alzheimer's disease, Parkinson's disease, and type II diabetes. [ 21 ] In addition to amyloid proteins, scientists have used cryo-EM to produce high resolution models of tau filaments in the brains of Alzheimer's patients, which may help develop better treatments in the future. [ 22 ] Structural biology tools can also be used to explain interactions between pathogens and hosts.
For example, structural biology tools have enabled virologists to understand how the HIV envelope allows the virus to evade human immune responses. [ 23 ] Structural biology is also an important component of drug discovery . [ 24 ] Scientists can identify targets using genomics, study those targets using structural biology, and develop drugs that are suited for those targets. Specifically, ligand- NMR , mass spectrometry , and X-ray crystallography are commonly used techniques in the drug discovery process. For example, researchers have used structural biology to better understand Met , a protein encoded by a protooncogene that is an important drug target in cancer . [ 25 ] Similar research has been conducted for HIV targets to treat people with AIDS . [ 24 ] Researchers are also developing new antimicrobials for mycobacterial infections using structure-driven drug discovery. [ 24 ]
https://en.wikipedia.org/wiki/Structural_biology
In structural engineering , structural elements are used in structural analysis to split a complex structure into simple elements (each bearing a structural load ). Within a structure, an element cannot be broken down (decomposed) into parts of different kinds (e.g., beam or column). [ 1 ] Structural building components are specialized structural building products designed, engineered and manufactured under controlled conditions for a specific application. They are incorporated into the overall building structural system by a building designer . Examples are wood or steel roof trusses , floor trusses, floor panels, I-joists , or engineered beams and headers. A structural building component manufacturer or truss manufacturer is an individual or company regularly engaged in the manufacturing of components. Structural elements can be lines, surfaces or volumes. [ 2 ] Line elements: Surface elements: Volumes:
https://en.wikipedia.org/wiki/Structural_building_components
The structural channel , C-channel or parallel flange channel ( PFC ), is a type of (usually structural steel ) beam , used primarily in building construction and civil engineering. Its cross section consists of a wide "web", usually but not always oriented vertically, and two "flanges" at the top and bottom of the web, only sticking out on one side of the web. It is distinguished from I-beam or H-beam or W-beam type steel cross sections in that those have flanges on both sides of the web. [ 1 ] The structural channel is not used as much in construction as symmetrical beams, in part because its bending axis is not centered on the width of the flanges. If a load is applied equally across its top, the beam will tend to twist away from the web. This may not be a weak point or problem for a particular design, but is a factor to be considered. [ 2 ] Channels or C-beams are often used where the flat, back side of the web can be mounted to another flat surface for maximum contact area. They are also sometimes welded together back-to-back to form a non-standard I-beam.
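The asymmetry described above can be made quantitative with a rough cross-section calculation. The Python sketch below (an illustration with hypothetical dimensions, not a design check) models the channel as two full-width flange rectangles plus a web rectangle and locates the horizontal centroid measured from the flat back face, showing that it sits much closer to the web than to the flange tips; the shear centre governing the twisting behaviour mentioned above lies on the opposite side of the web, outside the section.

```python
def channel_centroid_x(depth, flange_width, web_thickness, flange_thickness):
    """Horizontal centroid of an idealized C-channel, measured from the flat
    back face of the web.  The section is modelled as two full-width flange
    rectangles plus a web rectangle filling the depth between them."""
    a_flange = flange_width * flange_thickness            # one flange
    x_flange = flange_width / 2.0
    a_web = (depth - 2 * flange_thickness) * web_thickness
    x_web = web_thickness / 2.0
    area = 2 * a_flange + a_web
    return (2 * a_flange * x_flange + a_web * x_web) / area


# Hypothetical section, dimensions in mm: 100 mm deep, 50 mm wide flanges,
# 6 mm thick web and 8 mm thick flanges.
print(f"{channel_centroid_x(100, 50, 6, 8):.1f} mm from the back face")  # ~16.5 mm
```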
https://en.wikipedia.org/wiki/Structural_channel
Structural complexity is a science of applied mathematics that aims to relate fundamental physical or biological aspects of a complex system with the mathematical description of the morphological complexity that the system exhibits, by establishing rigorous relations between mathematical and physical properties of such system. [ 1 ] Structural complexity emerges from all systems that display morphological organization. [ 2 ] Filamentary structures, for instance, are an example of coherent structures that emerge, interact and evolve in many physical and biological systems, such as mass distribution in the Universe , vortex filaments in turbulent flows, neural networks in our brain and genetic material (such as DNA ) in a cell. In general information on the degree of morphological disorder present in the system tells us something important about fundamental physical or biological processes. Structural complexity methods are based on applications of differential geometry and topology (and in particular knot theory ) to interpret physical properties of dynamical systems . [ 3 ] [ 4 ] such as relations between kinetic energy and tangles of vortex filaments in a turbulent flow or magnetic energy and braiding of magnetic fields in the solar corona, including aspects of topological fluid dynamics .
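One of the simplest quantities used in this area is the Gauss linking number of two closed filaments, a knot-theoretic measure of how two curves wind around each other. The Python sketch below (an illustration, not taken from the article) evaluates the Gauss linking integral numerically for two discretized curves; the test geometry of two linked circles is a hypothetical example chosen so the result is easy to check.

```python
import numpy as np


def gauss_linking_number(curve_a, curve_b):
    """Numerically evaluate the Gauss linking integral
        Lk = (1 / 4*pi) * sum over segment pairs of
             ((r_a - r_b) . (dr_a x dr_b)) / |r_a - r_b|^3
    for two closed polygonal curves given as (N, 3) arrays of points."""
    da = np.roll(curve_a, -1, axis=0) - curve_a      # segment vectors (curve closed)
    db = np.roll(curve_b, -1, axis=0) - curve_b
    ma = curve_a + 0.5 * da                          # segment midpoints
    mb = curve_b + 0.5 * db

    total = 0.0
    for i in range(len(ma)):
        diff = ma[i] - mb                            # (M, 3)
        cross = np.cross(da[i], db)                  # (M, 3)
        dist3 = np.linalg.norm(diff, axis=1) ** 3
        total += np.sum(np.einsum("ij,ij->i", diff, cross) / dist3)
    return total / (4.0 * np.pi)


def circle(center, normal_axis, n=400):
    """Unit circle around `center`, lying in the plane normal to the given axis."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    if normal_axis == "z":                           # circle in the xy-plane
        pts = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
    else:                                            # "y": circle in the xz-plane
        pts = np.stack([np.cos(t), np.zeros_like(t), np.sin(t)], axis=1)
    return pts + np.asarray(center, dtype=float)


# Hopf-linked pair of circles: |Lk| should come out close to 1.
linked = abs(gauss_linking_number(circle((0, 0, 0), "z"), circle((1, 0, 0), "y")))
# Far-apart, unlinked pair: Lk should be close to 0.
unlinked = abs(gauss_linking_number(circle((0, 0, 0), "z"), circle((5, 0, 0), "y")))
print(f"linked: {linked:.2f}, unlinked: {unlinked:.2f}")   # ~1.00 and ~0.00
```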
https://en.wikipedia.org/wiki/Structural_complexity_(applied_mathematics)
Structural engineers analyze, design, plan, and research structural components and structural systems to achieve design goals and ensure the safety and comfort of users or occupants. Their work takes account mainly of safety, technical, economic, and environmental concerns, but they may also consider aesthetic and social factors. Structural engineering is usually considered a specialty discipline within civil engineering , but it can also be studied in its own right. In the United States, most practicing structural engineers are currently licensed as civil engineers , but the situation varies from state to state. Some states have a separate license for structural engineers who are required to design special or high-risk structures such as schools, hospitals, or skyscrapers. [ 1 ] [ 2 ] In the United Kingdom, most structural engineers in the building industry are members of the Institution of Structural Engineers or the Institution of Civil Engineers . Typical structures designed by a structural engineer include buildings, towers, stadiums, and bridges. Other structures such as oil rigs, space satellites, aircraft, and ships may also be designed by a structural engineer. [ 3 ] Most structural engineers are employed in the construction industry, however, there are also structural engineers in the aerospace, automobile, and shipbuilding industries. In the construction industry, they work closely with architects , civil engineers , mechanical engineers , electrical engineers , quantity surveyors , and construction managers . Structural engineers ensure that buildings and bridges are built to be strong enough and stable enough to resist all appropriate structural loads (e.g., gravity, wind, snow, rain, seismic ( earthquake ), earth pressure, temperature, and traffic) to prevent or reduce the loss of life or injury. They also design structures to be stiff enough to not deflect or vibrate beyond acceptable limits. Human comfort is an issue that is regularly considered limited. Fatigue is also an important consideration for bridges and aircraft design or for other structures that experience many stress cycles over their lifetimes. Consideration is also given to the durability of materials against possible deterioration which may impair performance over the design lifetime. The education of structural engineers is usually through a civil engineering bachelor's degree, and often a master's degree specializing in structural engineering. The fundamental core subjects for structural engineering are strength of materials or solid mechanics , structural analysis (static and dynamic), material science and numerical analysis . Reinforced concrete , composite structure , timber, masonry and structural steel designs are the general structural design courses that will be introduced in the next level of the education of structural engineering. The structural analysis courses which include structural mechanics , structural dynamics and structural failure analysis are designed to build up the fundamental analysis skills and theories for structural engineering students. At the senior year level or in graduate programs, prestressed concrete design, space frame design for building and aircraft, bridge engineering, civil and aerospace structure rehabilitation and other advanced structural engineering specializations are usually introduced. Recently in the United States, there have been discussions in the structural engineering community about the knowledge base of structural engineering graduates. 
Some have called for a master's degree to be the minimum standard for professional licensing as a civil engineer. [ 4 ] There are separate structural engineering undergraduate degrees at the University of California, San Diego and the University of Architecture, Civil Engineering, and Geodesy, Sofia, Bulgaria. Many students who later become structural engineers major in civil, mechanical, or aerospace engineering degree programs, with an emphasis on structural engineering. Architectural engineering programs do offer structural emphases and are often in combined academic departments with civil engineering. In many countries, structural engineering is a profession subject to licensure. Licensed engineers may receive the title of Professional Engineer, Chartered Engineer, Structural Engineer, or another title depending on the jurisdiction. The process to attain licensure to work as a structural engineer varies by location, but typically specifies university education, work experience, examination, and continuing education to maintain mastery of the subject. Professional Engineers bear legal responsibility for their work to ensure the safety and performance of their structures and only practice within the scope of their expertise. In the United States, persons practicing structural engineering must be licensed in each state in which they practice. Licensure to practice as a structural engineer can usually be obtained with the same qualifications as for a Civil Engineer, but some states require licensure specifically for structural engineering, with experience specific to and non-concurrent with experience claimed for another engineering profession. The qualifications for licensure typically include a specified minimum level of practicing experience, as well as the successful completion of a nationally administered 16-hour exam, and possibly an additional state-specific exam. For instance, California requires that candidates pass a national exam, written by the National Council of Examiners for Engineering and Surveying (NCEES), [ 5 ] as well as a state-specific exam which includes a seismic portion and a surveying portion. In most states, applying for the licensing exam requires four years of work experience after the candidate has graduated from an ABET-accredited university and passed the Fundamentals of Engineering exam, three years after receiving a master's degree, or two years after receiving a Ph.D. degree. [ 6 ] Most US states do not have a separate structural engineering license. In 10 US states, including Alaska, California, Hawaii, Illinois, Nevada, Oregon, Utah, and Washington, there is an additional license or authority for Structural Engineering, [ 7 ] obtained after the engineer has obtained a Civil Engineering license and practiced for an additional period with the Civil Engineering license. The scope of what structures must be designed by a Structural Engineer, rather than by a Civil Engineer without the S.E. license, is limited in Alaska, California, Nevada, Oregon, Utah, and Washington to certain high-importance structures such as stadiums, bridges, hospitals, and schools. The practice of structural engineering is reserved entirely to S.E. licensees in Hawaii and Illinois. The United Kingdom has one of the oldest professional institutions for structural engineers, the Institution of Structural Engineers. Founded as the Concrete Institute in 1908, it was renamed the Institution of Structural Engineers (IStructE) in 1922. It now has 22,000 members with branches in 32 countries.
The IStructE is one of several UK professional bodies empowered to grant the title of Chartered Engineer ; its members are granted the title of Chartered Structural Engineer . The overall process to become chartered begins after graduation from a UK MEng degree, or a BEng with an MSc degree. To qualify as a chartered structural engineer, a graduate needs to go through four years of Initial Professional Development followed by a professional review interview. After passing the interview, the candidate sits an eight-hour professional review examination. The election to chartered membership (MIStructE) depends on the examination result. The candidate can register at the Engineering Council UK as a Chartered Structural Engineer once he or she has been elected as a Chartered Member. Legally it is not necessary to be a member of the IStructE when working on structures in the UK, however, industry practice, insurance, and liabilities dictate that an appropriately qualified engineer be responsible for such work. A 2010 survey of professionals occupying jobs in the construction industry [ 8 ] showed that structural engineers in the UK earn an average wage of £35,009. The salary of structural engineers varies from sector to sector within the construction and built environment industry worldwide, depending on the project. For example, structural engineers working in public sector projects earn on average £37,083 per annum compared to the £43,947 average earned by those in commercial projects. Certain regions also represent higher average salaries, with structural engineers in the Middle East in all sectors, and of every level of experience, earning £45,083, compared to UK and EU countries where the average is £35,164. [ 9 ]
https://en.wikipedia.org/wiki/Structural_engineer
Structural engineering is a sub-discipline of civil engineering in which structural engineers are trained to design the 'bones and joints' that create the form and shape of human-made structures. Structural engineers also must understand and calculate the stability, strength, rigidity and earthquake-susceptibility of built structures for buildings [ 1 ] and nonbuilding structures. Their structural designs are integrated with those of other designers such as architects and building services engineers, and structural engineers often supervise the construction of projects by contractors on site. [ 2 ] They can also be involved in the design of machinery, medical equipment, and vehicles where structural integrity affects functioning and safety. See glossary of structural engineering. Structural engineering theory is based upon applied physical laws and empirical knowledge of the structural performance of different materials and geometries. Structural engineering design uses a number of relatively simple structural concepts to build complex structural systems. Structural engineers are responsible for making creative and efficient use of funds, structural elements and materials to achieve these goals. [ 2 ] Structural engineering dates back to 2700 B.C. when the step pyramid for Pharaoh Djoser was built by Imhotep, the first engineer in history known by name. Pyramids were the most common major structures built by ancient civilizations because the structural form of a pyramid is inherently stable and can be almost infinitely scaled (as opposed to most other structural forms, which cannot be linearly increased in size in proportion to increased loads). [ 3 ] The structural stability of the pyramid, whilst primarily gained from its shape, relies also on the strength of the stone from which it is constructed, and its ability to support the weight of the stone above it. [ 4 ] The limestone blocks were often taken from a quarry near the building site and have a compressive strength from 30 to 250 MPa (MPa = Pa × 10⁶). [ 5 ] Therefore, the structural strength of the pyramid stems from the material properties of the stones from which it was built rather than the pyramid's geometry. Throughout ancient and medieval history most architectural design and construction were carried out by artisans, such as stonemasons and carpenters, rising to the role of master builder. No theory of structures existed, and understanding of how structures stood up was extremely limited, based almost entirely on empirical evidence of 'what had worked before' and intuition. Knowledge was retained by guilds and seldom supplanted by advances. Structures were repetitive, and increases in scale were incremental. [ 3 ] No record exists of the first calculations of the strength of structural members or the behavior of structural material, but the profession of a structural engineer only really took shape with the Industrial Revolution and the re-invention of concrete (see History of Concrete). The physical sciences underlying structural engineering began to be understood in the Renaissance and have since developed into computer-based applications pioneered in the 1970s. [ 6 ] The history of structural engineering contains many collapses and failures. Sometimes this is due to obvious negligence, as in the case of the Pétion-Ville school collapse, in which Rev.
Fortin Augustin " constructed the building all by himself, saying he didn't need an engineer as he had good knowledge of construction" following a partial collapse of the three-story schoolhouse that sent neighbors fleeing. The final collapse killed 94 people, mostly children. In other cases structural failures require careful study, and the results of these inquiries have resulted in improved practices and a greater understanding of the science of structural engineering. Some such studies are the result of forensic engineering investigations where the original engineer seems to have done everything in accordance with the state of the profession and acceptable practice yet a failure still eventuated. A famous case of structural knowledge and practice being advanced in this manner can be found in a series of failures involving box girders which collapsed in Australia during the 1970s. Structural engineering depends upon a detailed knowledge of applied mechanics , materials science , and applied mathematics to understand and predict how structures support and resist self-weight and imposed loads. To apply the knowledge successfully a structural engineer generally requires detailed knowledge of relevant empirical and theoretical design codes , the techniques of structural analysis , as well as some knowledge of the corrosion resistance of the materials and structures, especially when those structures are exposed to the external environment. Since the 1990s, specialist software has become available to aid in the design of structures, with the functionality to assist in the drawing, analyzing and designing of structures with maximum precision; examples include AutoCAD , StaadPro, ETABS , Prokon, Revit Structure, Inducta RCB, etc. Such software may also take into consideration environmental loads, such as earthquakes and winds. [ citation needed ] Structural engineers are responsible for engineering design and structural analysis. Entry-level structural engineers may design the individual structural elements of a structure, such as the beams and columns of a building. More experienced engineers may be responsible for the structural design and integrity of an entire system, such as a building. [ citation needed ] Structural engineers often specialize in particular types of structures, such as buildings, bridges, pipelines, industrial, tunnels, vehicles, ships, aircraft, and spacecraft. Structural engineers who specialize in buildings may specialize in particular construction materials such as concrete, steel, wood, masonry, alloys and composites. [ citation needed ] Structural engineering has existed since humans first started to construct their structures. It became a more defined and formalized profession with the emergence of architecture as a distinct profession from engineering during the industrial revolution in the late 19th century. Until then, the architect and the structural engineer were usually one and the same thing – the master builder. Only with the development of specialized knowledge of structural theories that emerged during the 19th and early 20th centuries, did the professional structural engineers come into existence. [ citation needed ] The role of a structural engineer today involves a significant understanding of both static and dynamic loading and the structures that are available to resist them. 
The complexity of modern structures often requires a great deal of creativity from the engineer in order to ensure the structures support and resist the loads they are subjected to. A structural engineer will typically have a four or five-year undergraduate degree, followed by a minimum of three years of professional practice before being considered fully qualified. Structural engineers are licensed or accredited by different learned societies and regulatory bodies around the world (for example, the Institution of Structural Engineers in the UK). Depending on the degree course they have studied and/or the jurisdiction they are seeking licensure in, they may be accredited (or licensed) as just structural engineers, or as civil engineers, or as both civil and structural engineers. Another international organisation is IABSE(International Association for Bridge and Structural Engineering). [ 7 ] The aim of that association is to exchange knowledge and to advance the practice of structural engineering worldwide in the service of the profession and society. Structural building engineering is primarily driven by the creative manipulation of materials and forms and the underlying mathematical and scientific ideas to achieve an end that fulfills its functional requirements and is structurally safe when subjected to all the loads it could reasonably be expected to experience. This is subtly different from architectural design, which is driven by the creative manipulation of materials and forms, mass, space, volume, texture, and light to achieve an end which is aesthetic, functional, and often artistic. The structural design for a building must ensure that the building can stand up safely, able to function without excessive deflections or movements which may cause fatigue of structural elements, cracking or failure of fixtures, fittings or partitions, or discomfort for occupants. It must account for movements and forces due to temperature, creep , cracking, and imposed loads. It must also ensure that the design is practically buildable within acceptable manufacturing tolerances of the materials. It must allow the architecture to work, and the building services to fit within the building and function (air conditioning, ventilation, smoke extract, electrics, lighting, etc.). The structural design of a modern building can be extremely complex and often requires a large team to complete. Structural engineering specialties for buildings include: Earthquake engineering structures are those engineered to withstand earthquakes . The main objectives of earthquake engineering are to understand the interaction of structures with the shaking ground, foresee the consequences of possible earthquakes, and design and construct the structures to perform during an earthquake. Earthquake-proof structures are not necessarily extremely strong like the El Castillo pyramid at Chichen Itza shown above. One important tool of earthquake engineering is base isolation , which allows the base of a structure to move freely with the ground. Civil structural engineering includes all structural engineering related to the built environment. It includes: The structural engineer is the lead designer on these structures, and often the sole designer. In the design of structures such as these, structural safety is of paramount importance (in the UK, designs for dams, nuclear power stations and bridges must be signed off by a chartered engineer ). 
Civil engineering structures are often subjected to very extreme forces, such as large variations in temperature, dynamic loads such as waves or traffic, or high pressures from water or compressed gases. They are also often constructed in corrosive environments, such as at sea, in industrial facilities, or below ground. The forces which parts of a machine are subjected to can vary significantly and can do so at a great rate. The forces which a boat or aircraft are subjected to vary enormously and will do so thousands of times over the structure's lifetime. The structural design must ensure that such structures can endure such loading for their entire design life without failing. These works can require mechanical structural engineering: Aerospace structure types include launch vehicles, ( Atlas , Delta , Titan), missiles (ALCM, Harpoon), Hypersonic vehicles (Space Shuttle), military aircraft (F-16, F-18) and commercial aircraft ( Boeing 777, MD-11). Aerospace structures typically consist of thin plates with stiffeners for the external surfaces, bulkheads, and frames to support the shape and fasteners such as welds, rivets, screws, and bolts to hold the components together. A nanostructure is an object of intermediate size between molecular and microscopic (micrometer-sized) structures. In describing nanostructures it is necessary to differentiate between the number of dimensions on the nanoscale. Nanotextured surfaces have one dimension on the nanoscale, i.e., only the thickness of the surface of an object is between 0.1 and 100 nm. Nanotubes have two dimensions on the nanoscale, i.e., the diameter of the tube is between 0.1 and 100 nm; its length could be much greater. Finally, spherical nanoparticles have three dimensions on the nanoscale, i.e., the particle is between 0.1 and 100 nm in each spatial dimension. The terms nanoparticles and ultrafine particles (UFP) often are used synonymously although UFP can reach into the micrometer range. The term 'nanostructure' is often used when referring to magnetic technology. Medical equipment (also known as armamentarium) is designed to aid in the diagnosis, monitoring or treatment of medical conditions. There are several basic types: diagnostic equipment includes medical imaging machines, used to aid in diagnosis; equipment includes infusion pumps, medical lasers, and LASIK surgical machines ; medical monitors allow medical staff to measure a patient's medical state. Monitors may measure patient vital signs and other parameters including ECG , EEG , blood pressure, and dissolved gases in the blood; diagnostic medical equipment may also be used in the home for certain purposes, e.g. for the control of diabetes mellitus. A biomedical equipment technician (BMET) is a vital component of the healthcare delivery system. Employed primarily by hospitals, BMETs are the people responsible for maintaining a facility's medical equipment. Any structure is essentially made up of only a small number of different types of elements: Many of these elements can be classified according to form (straight, plane / curve) and dimensionality (one-dimensional / two-dimensional): Columns are elements that carry only axial force (compression) or both axial force and bending (which is technically called a beam-column but practically, just a column). The design of a column must check the axial capacity of the element and the buckling capacity. The buckling capacity is the capacity of the element to withstand the propensity to buckle. 
Its capacity depends upon its geometry, material, and the effective length of the column, which depends upon the restraint conditions at the top and bottom of the column. The effective length is $K \cdot l$, where $l$ is the real length of the column and $K$ is a factor dependent on the restraint conditions. The capacity of a column to carry axial load depends on the degree of bending it is subjected to, and vice versa. This is represented on an interaction chart and is a complex non-linear relationship. A beam may be defined as an element in which one dimension is much greater than the other two and the applied loads are usually normal to the main axis of the element. Beams and columns are called line elements and are often represented by simple lines in structural modeling. Beams are elements that carry pure bending only. Bending causes one part of the section of a beam (divided along its length) to go into compression and the other part into tension. The compression part must be designed to resist buckling and crushing, while the tension part must be able to adequately resist the tension. A truss is a structure comprising members and connection points or nodes. When members are connected at nodes and forces are applied at nodes, members can act in tension or compression. Members acting in compression are referred to as compression members or struts while members acting in tension are referred to as tension members or ties . Most trusses use gusset plates to connect intersecting elements. Gusset plates are relatively flexible and unable to transfer bending moments . The connection is usually arranged so that the lines of force in the members are coincident at the joint thus allowing the truss members to act in pure tension or compression. Trusses are usually used in large-span structures, where it would be uneconomical to use solid beams. Plates carry bending in two directions. A concrete flat slab is an example of a plate. Plate behavior is based on continuum mechanics . Due to the complexity involved they are most often analyzed using a finite element analysis. They can also be designed with yield line theory, where an assumed collapse mechanism is analyzed to give an upper bound on the collapse load. This technique is used in practice [ 8 ] but because the method provides an upper-bound (i.e. an unsafe prediction of the collapse load) for poorly conceived collapse mechanisms, great care is needed to ensure that the assumed collapse mechanism is realistic. [ 9 ] Shells derive their strength from their form and carry forces in compression in two directions. A dome is an example of a shell. They can be designed by making a hanging-chain model, which will act as a catenary in pure tension and inverting the form to achieve pure compression. Arches carry forces in compression in one direction only, which is why it is appropriate to build arches out of masonry. They are designed by ensuring that the line of thrust of the force remains within the depth of the arch. Catenaries derive their strength from their form and carry transverse forces in pure tension by deflecting (just as a tightrope will sag when someone walks on it). They are almost always cable or fabric structures. A fabric structure acts as a catenary in two directions. Structural engineering depends on the knowledge of materials and their properties, in order to understand how different materials support and resist loads. 
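As a small illustration of the effective-length idea described above, the sketch below computes $K \cdot l$ and the resulting slenderness ratio for a few idealized end restraints. The $K$ values listed are the classical theoretical factors for ideal restraints, and the column length and radius of gyration are hypothetical numbers chosen for the example, not values from any design code.

```python
# Illustrative effective-length factors for idealized end restraints
# (theoretical values; design codes typically use adjusted factors).
K_FACTORS = {
    "pinned-pinned": 1.0,
    "fixed-fixed": 0.5,
    "fixed-pinned": 0.7,
    "fixed-free": 2.0,
}

def effective_length_and_slenderness(length_mm, radius_of_gyration_mm, restraint):
    """Return the effective length K*l and the slenderness ratio (K*l)/r."""
    k = K_FACTORS[restraint]
    effective_length = k * length_mm
    return effective_length, effective_length / radius_of_gyration_mm

# Hypothetical 3 m column with a radius of gyration of 40 mm.
for restraint in K_FACTORS:
    le, sr = effective_length_and_slenderness(3000.0, 40.0, restraint)
    print(f"{restraint:14s}  K*l = {le:6.0f} mm   slenderness = {sr:5.1f}")
```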
It also involves a knowledge of corrosion engineering to avoid, for example, galvanic coupling of dissimilar materials. Common structural materials are:
https://en.wikipedia.org/wiki/Structural_engineering
Structural engineering depends upon a detailed knowledge of loads , physics and materials to understand and predict how structures support and resist self-weight and imposed loads. To apply the knowledge successfully, structural engineers will need a detailed knowledge of mathematics and of relevant empirical and theoretical design codes. They will also need to know about the corrosion resistance of the materials and structures, especially when those structures are exposed to the external environment. The criteria which govern the design of a structure are either serviceability (criteria which define whether the structure is able to adequately fulfill its function) or strength (criteria which define whether a structure is able to safely support and resist its design loads). A structural engineer designs a structure to have sufficient strength and stiffness to meet these criteria. Loads imposed on structures are supported by means of forces transmitted through structural elements. These forces can manifest themselves as tension (axial force), compression (axial force), shear , and bending , or flexure (a bending moment is a force multiplied by a distance, or lever arm, hence producing a turning effect or torque ). Strength depends upon material properties. The strength of a material depends on its capacity to withstand axial stress , shear stress , bending, and torsion. The strength of a material is measured in force per unit area (newtons per square millimetre or N/mm², or the equivalent megapascals or MPa in the SI system, and often pounds per square inch (psi) in the United States customary units system). A structure fails the strength criterion when the stress (force divided by area of material) induced by the loading is greater than the capacity of the structural material to resist the load without breaking, or when the strain (percentage extension) is so great that the element no longer fulfills its function ( yield ). Stiffness depends upon material properties and geometry . The stiffness of a structural element of a given material is the product of the material's Young's modulus and the element's second moment of area . Stiffness is measured in force per unit length (newtons per millimetre or N/mm), and is equivalent to the 'force constant' in Hooke's law . The deflection of a structure under loading is dependent on its stiffness. The dynamic response of a structure to dynamic loads (the natural frequency of a structure) is also dependent on its stiffness. In a structure made up of multiple structural elements where the surface distributing the forces to the elements is rigid, the elements will carry loads in proportion to their relative stiffness - the stiffer an element, the more load it will attract. This means that the load/stiffness ratio, which is the deflection, remains the same in two connected (jointed) elements. In a structure where the surface distributing the forces to the elements is flexible (like a wood-framed structure), the elements will carry loads in proportion to their relative tributary areas. A structure is considered to fail the chosen serviceability criteria if it is insufficiently stiff to have acceptably small deflection or dynamic response under loading. The inverse of stiffness is flexibility . The safe design of structures requires a design approach which takes account of the statistical likelihood of the failure of the structure. 
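As a minimal sketch of the stiffness-proportional load sharing described above, the snippet below distributes a load among elements joined by a rigid diaphragm so that each element's deflection (load divided by stiffness) comes out equal. The wall stiffnesses and the 100 kN load are hypothetical values chosen purely for illustration.

```python
def distribute_load(total_load_kN, stiffnesses_kN_per_mm):
    """Share a load between elements connected by a rigid distributing surface.

    Each element carries load in proportion to its stiffness, so the
    deflection (load / stiffness) is identical for every element.
    """
    total_k = sum(stiffnesses_kN_per_mm)
    return [total_load_kN * k / total_k for k in stiffnesses_kN_per_mm]

# Hypothetical example: three walls of stiffness 10, 30 and 60 kN/mm
# sharing a 100 kN load through a rigid floor plate.
stiffnesses = [10.0, 30.0, 60.0]
loads = distribute_load(100.0, stiffnesses)
print(loads)                                         # [10.0, 30.0, 60.0] kN
print([p / k for p, k in zip(loads, stiffnesses)])   # equal deflections (1.0 mm each)
```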
Structural design codes are based upon the assumption that both the loads and the material strengths vary with a normal distribution . [ citation needed ] The job of the structural engineer is to ensure that the chance of overlap between the distribution of loads on a structure and the distribution of material strength of a structure is acceptably small (it is impossible to reduce that chance to zero). It is normal to apply a partial safety factor to the loads and to the material strengths, to design using 95th percentiles (two standard deviations from the mean ). The safety factor applied to the load will typically ensure that 95% of the time the actual load will be smaller than the design load, while the factor applied to the strength ensures that 95% of the time the actual strength will be higher than the design strength. The safety factors for material strength vary depending on the material and the use it is being put to and on the design codes applicable in the country or region. A more sophisticated approach to modeling structural safety is to rely on structural reliability , in which both loads and resistances are modeled as probabilistic variables. [ 1 ] [ 2 ] However, using this approach requires detailed modeling of the distribution of loads and resistances. Furthermore, its calculations are more computationally intensive. A load case is a combination of different types of loads with safety factors applied to them. A structure is checked for strength and serviceability against all the load cases it is likely to experience during its lifetime. Typical load cases for design for strength (ultimate load cases; ULS) are: A typical load case for design for serviceability (characteristic load cases; SLS) is: Different load cases would be used for different loading conditions. For example, in the case of design for fire a load case of 1.0 x Dead Load + 0.8 x Live Load may be used, as it is reasonable to assume everyone has left the building if there is a fire. In multi-story buildings it is normal to reduce the total live load depending on the number of stories being supported, as the probability of maximum load being applied to all floors simultaneously is negligibly small. It is not uncommon for large buildings to require hundreds of different load cases to be considered in the design. The most important natural laws for structural engineering are Newton's laws of motion . Newton's first law states that every body perseveres in its state of being at rest or of moving uniformly straight forward, except insofar as it is compelled to change its state by force impressed. Newton's second law states that the rate of change of momentum of a body is proportional to the resultant force acting on the body and is in the same direction. Mathematically, F=ma (force = mass x acceleration). Newton's third law states that all forces occur in pairs, and these two forces are equal in magnitude and opposite in direction. With these laws it is possible to understand the forces on a structure and how that structure will resist them. The third law requires that for a structure to be stable all the internal and external forces must be in equilibrium . This means that the sum of all internal and external forces on a free-body diagram must be zero: $\sum \vec{F} = 0$. A structural engineer must understand the internal and external forces of a structural system consisting of structural elements and nodes at their intersections. 
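A short sketch of the load-case idea discussed above is given below. The fire combination (1.0 × dead load + 0.8 × live load) comes from the text; the ULS factors of 1.35 and 1.5 are only illustrative assumptions, since partial safety factors differ between design codes, and the characteristic floor loads are hypothetical.

```python
# Hypothetical characteristic loads on a floor (kN/m^2).
dead_load = 5.0
live_load = 3.0

# Partial safety factors are code-dependent; the ULS factors below are
# illustrative assumptions. The fire case mirrors the 1.0 DL + 0.8 LL
# combination mentioned in the text.
load_cases = {
    "ULS (illustrative factors)": {"dead": 1.35, "live": 1.5},
    "SLS (characteristic)": {"dead": 1.0, "live": 1.0},
    "Fire": {"dead": 1.0, "live": 0.8},
}

for name, factors in load_cases.items():
    design_load = factors["dead"] * dead_load + factors["live"] * live_load
    print(f"{name:26s} -> {design_load:.2f} kN/m^2")
```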
A statically determinate structure can be fully analysed using only consideration of equilibrium, from Newton's laws of motion. A statically indeterminate structure has more unknowns than equilibrium considerations can supply equations for (see simultaneous equations ). Such a system can be solved using consideration of equations of compatibility between geometry and deflections in addition to equilibrium equations, or by using virtual work . If a system is made up of $b$ bars, $j$ pin joints and $r$ support reactions, then it cannot be statically determinate if the following relationship does not hold: $r + b = 2j$. Even if this relationship does hold, a structure can be arranged in such a way as to be statically indeterminate. [ 3 ] Much engineering design is based on the assumption that materials behave elastically. For most materials this assumption is incorrect, but empirical evidence has shown that design using this assumption can be safe. Materials that are elastic obey Hooke's law, and plasticity does not occur. For systems that obey Hooke's law, the extension produced is directly proportional to the load: $F = k \cdot x$, where $F$ is the applied force, $k$ is the stiffness constant of the element and $x$ is the extension produced. Some design is based on the assumption that materials will behave plastically . [ 4 ] A plastic material is one which does not obey Hooke's law, and therefore deformation is not proportional to the applied load. Plastic materials are ductile materials. Plasticity theory can be used for some reinforced concrete structures assuming they are underreinforced, meaning that the steel reinforcement yields before the concrete crushes. Plasticity theory states that the point at which a structure collapses (reaches yield) lies between an upper and a lower bound on the load, defined as follows: If the correct collapse load is found, the two methods will give the same result for the collapse load. [ 5 ] Plasticity theory depends upon a correct understanding of when yield will occur. A number of different models for stress distribution and approximations to the yield surface of plastic materials exist: [ 6 ] The Euler–Bernoulli beam equation defines the behaviour of a beam element (see below). It is based on five assumptions: A simplified version of the Euler–Bernoulli beam equation is: $EI\,\dfrac{\mathrm{d}^{4}w}{\mathrm{d}x^{4}} = q(x)$. Here $w$ is the deflection and $q(x)$ is a load per unit length. $E$ is the elastic modulus and $I$ is the second moment of area , the product of these giving the flexural rigidity of the beam. This equation is very common in engineering practice: it describes the deflection of a uniform, static beam. Successive derivatives of $w$ have important meanings: A bending moment manifests itself as a tension force and a compression force, acting as a couple in a beam. The stresses caused by these forces can be represented by: $\sigma = \dfrac{My}{I}$, where $\sigma$ is the stress, $M$ is the bending moment, $y$ is the distance from the neutral axis of the beam to the point under consideration and $I$ is the second moment of area . Often the equation is simplified to the moment divided by the section modulus $S$, which is $I/y$. This equation allows a structural engineer to assess the stress in a structural element when subjected to a bending moment. 
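A minimal sketch of the counting condition $r + b = 2j$ quoted above is shown below. It only applies the necessary counting check for a planar pin-jointed truss; as noted in the text, a truss that satisfies the count can still be badly arranged, so the result is a first screen rather than a proof. The example truss (3 bars, 3 joints, 3 reactions) is hypothetical.

```python
def determinacy(bars, joints, reactions):
    """Classify a planar pin-jointed truss by comparing r + b with 2j.

    This is only the necessary counting condition from the text; a structure
    that passes it may still be unstable if its members are poorly arranged.
    """
    lhs, rhs = reactions + bars, 2 * joints
    if lhs < rhs:
        return "mechanism (statically unstable)"
    if lhs == rhs:
        return "potentially statically determinate"
    return f"statically indeterminate to degree {lhs - rhs}"

# Hypothetical simply supported triangular truss: 3 bars, 3 joints,
# 3 support reactions (a pin and a roller).
print(determinacy(bars=3, joints=3, reactions=3))
```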
When subjected to compressive forces it is possible for structural elements to deform significantly due to the destabilising effect of that load. The effect can be initiated or exacerbated by possible inaccuracies in manufacture or construction. The Euler buckling formula defines the axial compression force which will cause a strut (or column) to fail in buckling: $F_{cr} = \dfrac{\pi^{2} EI}{(KL)^{2}}$, where $E$ is the elastic modulus, $I$ is the second moment of area of the cross-section, $L$ is the length of the strut and $K$ is the effective-length factor that depends on the end restraint conditions. This value is sometimes expressed for design purposes as a critical buckling stress , $\sigma_{cr} = \dfrac{\pi^{2} E}{(KL/r)^{2}}$, where $r$ is the radius of gyration of the cross-section and $KL/r$ is the slenderness ratio. Other forms of buckling include lateral torsional buckling, where the compression flange of a beam in bending will buckle, and buckling of plate elements in plate girders due to compression in the plane of the plate.
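The short sketch below evaluates the Euler critical load and critical stress stated above for a single pin-ended column. The section properties (E = 210 000 MPa, I = 8.0 × 10⁶ mm⁴, effective length 3000 mm, radius of gyration 40 mm) are hypothetical values chosen only to make the arithmetic concrete.

```python
import math

def euler_critical_load(E_MPa, I_mm4, effective_length_mm):
    """Euler critical load P_cr = pi^2 * E * I / (K*L)^2, returned in newtons."""
    return math.pi ** 2 * E_MPa * I_mm4 / effective_length_mm ** 2

def euler_critical_stress(E_MPa, slenderness_ratio):
    """Critical buckling stress sigma_cr = pi^2 * E / (KL/r)^2, in MPa."""
    return math.pi ** 2 * E_MPa / slenderness_ratio ** 2

# Hypothetical pin-ended steel column (K = 1.0).
P_cr = euler_critical_load(210_000.0, 8.0e6, 3000.0)
sigma_cr = euler_critical_stress(210_000.0, 3000.0 / 40.0)
print(f"P_cr     ~ {P_cr / 1e3:.0f} kN")
print(f"sigma_cr ~ {sigma_cr:.0f} MPa")
```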
https://en.wikipedia.org/wiki/Structural_engineering_theory
The structural formula of a chemical compound is a graphic representation of the molecular structure (determined by structural chemistry methods), showing how the atoms are connected to one another. [ 1 ] The chemical bonding within the molecule is also shown, either explicitly or implicitly. Unlike other chemical formula types, [ a ] which have a limited number of symbols and are capable of only limited descriptive power, structural formulas provide a more complete geometric representation of the molecular structure. For example, many chemical compounds exist in different isomeric forms, which have different structures but the same molecular formula . There are multiple ways to draw these structural formulas, such as: Lewis structures , condensed formulas, skeletal formulas , Newman projections , Cyclohexane conformations , Haworth projections , and Fischer projections . [ 3 ] Several systematic chemical naming formats, as in chemical databases , are used that are equivalent to, and as powerful as, geometric structures. These chemical nomenclature systems include SMILES , InChI and CML . These systematic chemical names can be converted to structural formulas and vice versa, but chemists nearly always describe a chemical reaction or synthesis using structural formulas rather than chemical names, because the structural formulas allow the chemist to visualize the molecules and the structural changes that occur in them during chemical reactions. ChemSketch and ChemDraw are popular downloads/websites that allow users to draw reactions and structural formulas, typically in the Lewis Structure style. Bonds are often shown as a line that connects one atom to another. One line indicates a single bond . Two lines indicate a double bond , and three lines indicate a triple bond . In some structures the atoms in between each bond are specified and shown. However, in some structures, the carbon atoms are not written out explicitly. Instead, these carbons are indicated by a corner that forms when two lines connect. Additionally, hydrogen atoms are implied and not usually drawn out. These can be inferred based on how many other atoms the carbon is attached to. For example, if carbon A is attached to only one other carbon B, carbon A will have three hydrogens in order to fill its octet. [ 4 ] Electrons are usually shown as colored-in circles. One circle indicates one electron. Two circles indicate a pair of electrons. Typically, an extra unshared pair of electrons also indicates a negative charge. By using the colored circles, the number of electrons in the valence shell of each respective atom is indicated, providing further descriptive information regarding the reactive capacity of that atom in the molecule. [ 4 ] Oftentimes, atoms will carry a formal positive or negative charge because their octet is not complete. An atom that has fewer electrons assigned to it than in its neutral state (for example, because it has given up an electron or an electron pair in bonding) carries a positive charge, while an atom with additional non-bonded electrons carries a negative charge. In structural formulas, the positive charge is indicated by ⊕ , and the negative charge is indicated by ⊖ . [ 4 ] Chirality in skeletal formulas is indicated by the Natta projection method. Stereochemistry is used to show the relative spatial arrangement of atoms in a molecule. Wedges are used to show this, and there are two types: dashed and filled. A filled wedge indicates that the atom is in the front of the molecule; it is pointing above the plane of the paper towards the front. 
A dashed wedge indicates that the atom is behind the molecule; it is pointing below the plane of the paper. When a straight, un-dashed line is used, the atom is in the plane of the paper. This spatial arrangement provides an idea of the molecule in a 3-dimensional space and there are constraints as to how the spatial arrangements can be arranged. [ 4 ] Wavy single bonds represent unknown or unspecified stereochemistry or a mixture of isomers. For example, the adjacent diagram shows the fructose molecule with a wavy bond to the HOCH 2 − group at the left. In this case the two possible ring structures are in chemical equilibrium with each other and also with the open-chain structure. The ring automatically opens and closes, sometimes closing with one stereochemistry and sometimes with the other. [ citation needed ] Skeletal formulas can depict cis and trans isomers of alkenes. Wavy single bonds are the standard way to represent unknown or unspecified stereochemistry or a mixture of isomers (as with tetrahedral stereocenters). A crossed double-bond has been used sometimes, but is no longer considered an acceptable style for general use. [ 5 ] Lewis structures (or "Lewis dot structures") are flat graphical formulas that show atom connectivity and lone pair or unpaired electrons, but not three-dimensional structure. This notation is mostly used for small molecules. Each line represents the two electrons of a single bond . Two or three parallel lines between pairs of atoms represent double or triple bonds, respectively. Alternatively, pairs of dots may be used to represent bonding pairs. In addition, all non-bonded electrons (paired or unpaired) and any formal charges on atoms are indicated. Through the use of Lewis structures , the placement of electrons, whether it is in a bond or in lone pairs , will allow for the identification of the formal charges of the atoms in the molecule to understand the stability and determine the most likely molecule (based on molecular geometry difference) that would be formed in a reaction. Lewis structures do give some thought to the geometry of the molecule as oftentimes, the bonds are drawn at certain angles to represent the molecule in real life. Lewis structure is best used to calculate formal charges or how atoms bond to each other as both electrons and bonds are shown. Lewis structures give an idea of the molecular and electronic geometry which varies based on the presence of bonds and lone pairs and through this one could determine the bond angles and hybridization as well. In early organic-chemistry publications, where use of graphics was strongly limited, a typographic system arose to describe organic structures in a line of text. Although this system tends to be problematic in application to cyclic compounds, it remains a convenient way to represent simple structures: Parentheses are used to indicate multiple identical groups, indicating attachment to the nearest non-hydrogen atom on the left when appearing within a formula, or to the atom on the right when appearing at the start of a formula: In all cases, all atoms are shown, including hydrogen atoms. It is also helpful to show the carbonyls where the C=O is implied through the O being placed in the parentheses. For example: Therefore, it is important to look to the left of the atom in the parentheses to make sure what atom it is attached to. This is helpful when converting from condensed formula to another form of structural formula such as skeletal formula or Lewis structures . 
There are different ways to show the various functional groups in the condensed formulas such as aldehyde as CHO, carboxylic acids as CO 2 H or COOH, esters as CO 2 R or COOR. However, the use of condensed formulas does not give an immediate idea of the molecular geometry of the compound or the number of bonds between the carbons, it needs to be recognized based on the number of atoms attached to the carbons and if there are any charges on the carbon. [ 6 ] Skeletal formulas are the standard notation for more complex organic molecules. In this type of diagram, first used by the organic chemist Friedrich August Kekulé von Stradonitz , [ 7 ] the carbon atoms are implied to be located at the vertices (corners) and ends of line segments rather than being indicated with the atomic symbol C. Hydrogen atoms attached to carbon atoms are not indicated: each carbon atom is understood to be associated with enough hydrogen atoms to give the carbon atom four bonds. The presence of a positive or negative charge at a carbon atom takes the place of one of the implied hydrogen atoms. Hydrogen atoms attached to atoms other than carbon must be written explicitly. An additional feature of skeletal formulas is that by adding certain structures the stereochemistry , that is the three-dimensional structure, of the compound can be determined. Often times, the skeletal formula can indicate stereochemistry through the use of wedges instead of lines. Solid wedges represent bonds pointing above the plane of the paper, whereas dashed wedges represent bonds pointing below the plane. The Newman projection and the sawhorse projection are used to depict specific conformers or to distinguish vicinal stereochemistry. In both cases, two specific carbon atoms and their connecting bond are the center of attention. The only difference is a slightly different perspective: the Newman projection looking straight down the bond of interest, the sawhorse projection looking at the same bond but from a somewhat oblique vantage point. In the Newman projection, a circle is used to represent a plane perpendicular to the bond, distinguishing the substituents on the front carbon from the substituents on the back carbon. In the sawhorse projection, the front carbon is usually on the left and is always slightly lower. Sometimes, an arrow is used to indicate the front carbon. The sawhorse projection is very similar to a skeletal formula, and it can even use wedges instead of lines to indicate the stereochemistry of the molecule. The sawhorse projection is set apart from the skeletal formulas because the sawhorse projection is not a very good indicator of molecule geometry and molecular arrangement. Both a Newman and Sawhorse Projection can be used to create a Fischer Projection. [ citation needed ] Certain conformations of cyclohexane and other small-ring compounds can be shown using a standard convention. For example, the standard chair conformation of cyclohexane involves a perspective view from slightly above the average plane of the carbon atoms and indicates clearly which groups are axial (pointing vertically up or down) and which are equatorial (almost horizontal, slightly slanted up or down). Bonds in front may or may not be highlighted with stronger lines or wedges. The conformations progress as follows: chair to half-chair to twist-boat to boat to twist-boat to half-chair to chair. The cyclohexane conformations may also be used to show the potential energy present at each stage as shown in the diagram. 
The chair conformations (A) have the lowest energy, whereas the half-chair conformations (D) have the highest energy. There is a peak/local maximum at the boat conformation (C), and there are valleys/local minima at the twist-boat conformations (B). In addition, cyclohexane conformations can be used to indicate if the molecule has any 1,3-diaxial interactions, which are steric interactions between axial substituents on the 1, 3, and 5 carbons. [ 8 ] The Haworth projection is used for cyclic sugars . Axial and equatorial positions are not distinguished; instead, substituents are positioned directly above or below the ring atom to which they are connected. Hydrogen substituents are typically omitted. However, an important thing to keep in mind while reading a Haworth projection is that the ring structures are not actually flat, so the projection does not convey the true three-dimensional shape. Sir Norman Haworth was a British chemist who won a Nobel Prize for his work on carbohydrates and the determination of the structure of vitamin C. In the course of this work he also devised the structural representations now referred to as Haworth projections. In a Haworth projection a pyranose sugar is depicted as a hexagon and a furanose sugar is depicted as a pentagon. Usually an oxygen is placed at the upper right corner in a pyranose and in the upper center in a furanose sugar. The thinner bonds at the top of the ring indicate the part of the ring that is farther away, and the thicker bonds at the bottom of the ring indicate the end of the ring that is closer to the viewer. [ 9 ] The Fischer projection is mostly used for linear monosaccharides . At any given carbon center, vertical bond lines are equivalent to stereochemical hashed markings, directed away from the observer, while horizontal lines are equivalent to wedges, pointing toward the observer. The projection is unrealistic, as a saccharide would never adopt this multiply eclipsed conformation. Nonetheless, the Fischer projection is a simple way of depicting multiple sequential stereocenters that does not require or imply any knowledge of actual conformation. A Fischer projection will restrict a 3-D molecule to 2-D, and therefore, there are limitations to changing the configuration of the chiral centers. Fischer projections are used to determine the R and S configuration on a chiral carbon, and this is done using the Cahn–Ingold–Prelog rules . It is a convenient way to represent and distinguish between enantiomers and diastereomers . [ 9 ] A structural formula is a simplified model that cannot represent certain aspects of chemical structures. For example, formalized bonding may not be applicable to dynamic systems such as delocalized bonds . Aromaticity is such a case and relies on convention to represent the bonding. Different styles of structural formulas may represent aromaticity in different ways, leading to different depictions of the same chemical compound. Another example is formal double bonds where the electron density is spread outside the formal bond, leading to partial double bond character and slow inter-conversion at room temperature. For all dynamic effects, temperature will affect the inter-conversion rates and may change how the structure should be represented. There is no explicit temperature associated with a structural formula, although many assume that it would be standard temperature . [ citation needed ]
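The SMILES line notation mentioned earlier can be interconverted with explicit structural representations using cheminformatics toolkits. The sketch below assumes the open-source RDKit package is installed and shows one possible round trip from a SMILES string to a molecule object and on to a MOL block (a connection table that drawing programs can render); the acetic acid example is simply a convenient small molecule.

```python
# A minimal sketch, assuming RDKit is available in the environment.
from rdkit import Chem
from rdkit.Chem import AllChem

smiles = "CC(=O)O"                    # acetic acid in SMILES line notation
mol = Chem.MolFromSmiles(smiles)      # parse the systematic name into a molecule

print(Chem.MolToSmiles(mol))          # canonical SMILES written back out
print(mol.GetNumAtoms())              # heavy atoms only; hydrogens are implicit

AllChem.Compute2DCoords(mol)          # lay out 2D coordinates for depiction
print(Chem.MolToMolBlock(mol))        # explicit atom/bond table a viewer can draw
```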
https://en.wikipedia.org/wiki/Structural_formula
Structural fracture mechanics is the field of structural engineering concerned with the study of load-carrying structures that include one or several failed or damaged components. It uses methods of analytical solid mechanics , structural engineering, safety engineering , probability theory , and catastrophe theory to calculate the load and stress in the structural components and analyze the safety of a damaged structure. There is a direct analogy between the fracture mechanics of solids and structural fracture mechanics: There are different causes of the first component failure: There are two typical scenarios: If the structure does not collapse immediately there is a limited period of time until the catastrophic structural failure of the entire structure. There is a critical number of structural elements that defines whether the system has reserve ability or not. [ citation needed ] Safety engineers use the failure of the first component as an indicator and try to intervene during the given period of time to avoid the catastrophic failure of the entire structure. For example, “Leak-Before-Break” [ 1 ] methodology means that a leak will be discovered prior to a catastrophic failure of the entire piping system occurring in service. It has been applied to pressure vessels, nuclear piping, gas and oil pipelines, etc. The methods of structural fracture mechanics are used as checking calculations to estimate the sensitivity of a structure to the failure of one of its components. [ citation needed ] The failure of a complex system with parallel redundancy can be estimated based on probabilistic properties of the system elements.
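As a rough illustration of the last point above, the sketch below estimates the failure probability of a parallel-redundant system under the strong simplifying assumption of independent, identically distributed element failures; real structural elements are rarely independent, and the cable counts and failure probability used here are hypothetical.

```python
from math import comb

def system_failure_probability(n, k_required, p_element):
    """Probability that fewer than k_required of n independent elements survive.

    Each element fails independently with probability p_element; the system
    fails once more than (n - k_required) elements have failed. Independence
    is an assumption made purely for illustration.
    """
    max_failures_tolerated = n - k_required
    p_fail = 0.0
    for failures in range(max_failures_tolerated + 1, n + 1):
        p_fail += comb(n, failures) * p_element ** failures * (1 - p_element) ** (n - failures)
    return p_fail

# Hypothetical example: 6 parallel cables, at least 4 needed to carry the load,
# each with a 1% chance of failing within the inspection interval.
print(system_failure_probability(n=6, k_required=4, p_element=0.01))
```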
https://en.wikipedia.org/wiki/Structural_fracture_mechanics
A structural gene is a gene that codes for any RNA or protein product other than a regulatory factor (i.e. regulatory protein ). Structural genes are typically viewed as those containing sequences of DNA corresponding to the amino acids of a protein that will be produced, as long as said protein does not function to regulate gene expression. Structural gene products include enzymes and structural proteins. Also encoded by structural genes are non-coding RNAs, such as rRNAs and tRNAs (but excluding any regulatory miRNAs and siRNAs ). The distinction between structural and regulatory genes can be traced back to 1959 and work by Pardee , Jacob , and Monod —the so-called PaJaMo experiment —on the lac operon and the synthesis of proteins in E. coli . In that system, a single regulatory protein was detected that affected the transcription of the other proteins now known to compose the lac operon. [ 1 ] In prokaryotes , structural genes of related function are typically adjacent to one another on a single strand of DNA, forming an operon . This permits simpler regulation of gene expression, as a single regulatory factor can affect transcription of all associated genes. This is best illustrated by the well-studied lac operon, in which three structural genes ( lacZ , lacY , and lacA ) are all regulated by a single promoter and a single operator. Prokaryotic structural genes are transcribed into a polycistronic mRNA and subsequently translated. [ 2 ] In eukaryotes , structural genes are not sequentially placed. Each gene is instead composed of coding exons and interspersed non-coding introns . Regulatory sequences are typically found in non-coding regions upstream and downstream from the gene. Structural gene mRNAs must be spliced prior to translation to remove intronic sequences. This in turn lends itself to the eukaryotic phenomenon of alternative splicing , in which a single mRNA from a single structural gene can produce several different proteins based on which exons are included. Despite the complexity of this process, it is estimated that up to 94% of human genes are spliced in some way. [ 3 ] Furthermore, different splicing patterns occur in different tissue types. [ 4 ] An exception to this layout in eukaryotes are genes for histone proteins, which lack introns entirely. [ 5 ] Also distinct are the rDNA clusters of structural genes, in which 28S, 5.8S, and 18S sequences are adjacent, separated by short internally transcribed spacers, and likewise the 45S rDNA occurs five distinct places on the genome, but is clustered into adjacent repeats. In eubacteria these genes are organized into operons. However, in archaebacteria these genes are non-adjacent and exhibit no linkage. [ 6 ] The identification of the genetic basis for the causative agent of a disease can be an important component of understanding its effects and spread. Location and content of structural genes can elucidate the evolution of virulence, [ 7 ] as well as provide necessary information for treatment. Likewise understanding the specific changes in structural gene sequences underlying a gain or loss of virulence aids in understanding the mechanism by which diseases affect their hosts. [ 8 ] For example, Yersinia pestis (the bubonic plague ) was found to carry several virulence and inflammation-related structural genes on plasmids. [ 9 ] Likewise, the structural gene responsible for tetanus was determined to be carried on a plasmid as well. 
[ 10 ] Diphtheria is caused by a bacterium, but only after that bacterium has been infected by a bacteriophage carrying the structural genes for the toxin. [ 11 ] In Herpes simplex virus , the structural gene sequence responsible for virulence was found in two locations in the genome despite only one location actually producing the viral gene product. This was hypothesized to serve as a potential mechanism for strains to regain virulence if lost through mutation. [ 12 ] Understanding the specific changes in structural genes underlying a gain or loss of virulence is a necessary step in the formation of specific treatments, as well as in the study of possible medicinal uses of toxins. [ 11 ] As far back as 1974, DNA sequence similarity was recognized as a valuable tool for determining relationships among taxa. [ 13 ] Structural genes in general are more highly conserved due to functional constraint, and so can prove useful in examinations of more disparate taxa. Original analyses enriched samples for structural genes via hybridization to mRNA. [ 14 ] More recent phylogenetic approaches focused on structural genes of known function, conserved to varying degrees. rRNA sequences are frequent targets, as they are conserved in all species. [ 15 ] Microbiology has specifically targeted the 16S gene to determine species-level differences. [ 16 ] In higher-order taxa, COI is now considered the “barcode of life,” and is applied for most biological identification. [ 17 ] Despite the widespread classification of genes as either structural or regulatory, these categories are not an absolute division. Recent genetic discoveries call into question the distinction between regulatory and structural genes, [ 18 ] suggesting greater complexity. Structural gene expression is regulated by numerous factors including epigenetics (e.g. methylation) and RNA interference (RNAi). Structural genes and even regulatory genes themselves can be epigenetically regulated identically, so not all regulation is coded for by “regulatory genes”. [ 18 ] There are also examples of proteins that do not decidedly fit either category, such as chaperone proteins . These proteins aid in the folding of other proteins, a seemingly regulatory role. [ 19 ] [ 20 ] Yet these same proteins also aid in the movement of their chaperoned proteins across membranes, [ 21 ] and have now been implicated in immune responses (see Hsp60 ) [ 22 ] and in the apoptotic pathway (see Hsp70 ). [ 23 ] More recently, microRNAs were found to be produced from the internal transcribed spacers of rRNA genes. [ 24 ] Thus an internal component of a structural gene is, in fact, regulatory. Binding sites for microRNAs were also detected within coding sequences of genes. Typically, interfering RNAs target the 3’UTR, but inclusion of binding sites within the sequence of the protein itself allows the transcripts of these proteins to effectively regulate the microRNAs within the cell. This interaction was demonstrated to have an effect on expression, and thus again a structural gene contains a regulatory component. [ 25 ]
https://en.wikipedia.org/wiki/Structural_gene
Structural genomics seeks to describe the 3-dimensional structure of every protein encoded by a given genome . This genome-based approach allows for a high-throughput method of structure determination by a combination of experimental and modeling approaches . The principal difference between structural genomics and traditional structural prediction is that structural genomics attempts to determine the structure of every protein encoded by the genome, rather than focusing on one particular protein. With full-genome sequences available, structure prediction can be done more quickly through a combination of experimental and modeling approaches, especially because the availability of large number of sequenced genomes and previously solved protein structures allows scientists to model protein structure on the structures of previously solved homologs. Because protein structure is closely linked with protein function, the structural genomics has the potential to inform knowledge of protein function. In addition to elucidating protein functions, structural genomics can be used to identify novel protein folds and potential targets for drug discovery. Structural genomics involves taking a large number of approaches to structure determination, including experimental methods using genomic sequences or modeling-based approaches based on sequence or structural homology to a protein of known structure or based on chemical and physical principles for a protein with no homology to any known structure. As opposed to traditional structural biology , the determination of a protein structure through a structural genomics effort often (but not always) comes before anything is known regarding the protein function. This raises new challenges in structural bioinformatics , i.e. determining protein function from its 3D structure. Structural genomics emphasizes high throughput determination of protein structures. This is performed in dedicated centers of structural genomics . While most structural biologists pursue structures of individual proteins or protein groups, specialists in structural genomics pursue structures of proteins on a genome wide scale. This implies large-scale cloning, expression and purification. One main advantage of this approach is economy of scale. On the other hand, the scientific value of some resultant structures is at times questioned. A Science article from January 2006 analyzes the structural genomics field. [ 1 ] One advantage of structural genomics, such as the Protein Structure Initiative , is that the scientific community gets immediate access to new structures, as well as to reagents such as clones and protein. A disadvantage is that many of these structures are of proteins of unknown function and do not have corresponding publications. This requires new ways of communicating this structural information to the broader research community. The Bioinformatics core of the Joint center for structural genomics (JCSG) has recently developed a wiki-based approach namely Open protein structure annotation network (TOPSAN) for annotating protein structures emerging from high-throughput structural genomics centers. One goal of structural genomics is to identify novel protein folds. Experimental methods of protein structure determination require proteins that express and/or crystallize well, which may inherently bias the kinds of proteins folds that this experimental data elucidate. 
A genomic, modeling-based approach such as ab initio modeling may be better able to identify novel protein folds than the experimental approaches because they are not limited by experimental constraints. Protein function depends on 3-D structure and these 3-D structures are more highly conserved than sequences . Thus, the high-throughput structure determination methods of structural genomics have the potential to inform our understanding of protein functions. This also has potential implications for drug discovery and protein engineering. [ 2 ] Furthermore, every protein that is added to the structural database increases the likelihood that the database will include homologous sequences of other unknown proteins. The Protein Structure Initiative (PSI) is a multifaceted effort funded by the National Institutes of Health with various academic and industrial partners that aims to increase knowledge of protein structure using a structural genomics approach and to improve structure-determination methodology. Structural genomics takes advantage of completed genome sequences in several ways in order to determine protein structures. The gene sequence of the target protein can also be compared to a known sequence and structural information can then be inferred from the known protein's structure. Structural genomics can be used to predict novel protein folds based on other structural data. Structural genomics can also take modeling-based approach that relies on homology between the unknown protein and a solved protein structure. Completed genome sequences allow every open reading frame (ORF), the part of a gene that is likely to contain the sequence for the messenger RNA and protein, to be cloned and expressed as protein. These proteins are then purified and crystallized, and then subjected to one of two types of structure determination: X-ray crystallography and nuclear magnetic resonance (NMR). The whole genome sequence allows for the design of every primer required in order to amplify all of the ORFs, clone them into bacteria, and then express them. By using a whole-genome approach to this traditional method of protein structure determination, all of the proteins encoded by the genome can be expressed at once. This approach allows for the structural determination of every protein that is encoded by the genome. This approach uses protein sequence data and the chemical and physical interactions of the encoded amino acids to predict the 3-D structures of proteins with no homology to solved protein structures. One highly successful method for ab initio modeling is the Rosetta program, which divides the protein into short segments and arranges short polypeptide chain into a low-energy local conformation. Rosetta is available for commercial use and for non-commercial use through its public program, Robetta. This modeling technique compares the gene sequence of an unknown protein with sequences of proteins with known structures. Depending on the degree of similarity between the sequences, the structure of the known protein can be used as a model for solving the structure of the unknown protein. Highly accurate modeling is considered to require at least 50% amino acid sequence identity between the unknown protein and the solved structure. 30-50% sequence identity gives a model of intermediate-accuracy, and sequence identity below 30% gives low-accuracy models. 
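A minimal sketch of the sequence-identity guidelines quoted above is given below. The thresholds (≥50% high accuracy, 30-50% intermediate, <30% low) are the rough figures from the text rather than hard rules, the two short sequences are hypothetical, and the identity calculation assumes the sequences are already aligned; real pipelines would use a proper alignment.

```python
def expected_model_accuracy(percent_identity):
    """Map target-template sequence identity onto the rough accuracy bands
    quoted above (guideline values, not hard rules)."""
    if percent_identity >= 50:
        return "high accuracy"
    if percent_identity >= 30:
        return "intermediate accuracy"
    return "low accuracy"

def percent_identity(seq_a, seq_b):
    """Naive identity over two pre-aligned, ungapped sequences."""
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / min(len(seq_a), len(seq_b))

# Hypothetical target and template fragments.
identity = percent_identity("MKTAYIAKQR", "MKTAYLAKHR")
print(identity, expected_model_accuracy(identity))   # 80.0 high accuracy
```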
It has been predicted that at least 16,000 protein structures will need to be determined in order for all structural motifs to be represented at least once, thus allowing the structure of any unknown protein to be solved accurately through modeling. [ 3 ] One disadvantage of this method, however, is that structure is more conserved than sequence and thus sequence-based modeling may not be the most accurate way to predict protein structures. Threading bases structural modeling on fold similarity rather than sequence identity. This method may help identify distantly related proteins and can be used to infer molecular functions. There are currently a number of on-going efforts to solve the structures for every protein in a given proteome. One current goal of the Joint Center for Structural Genomics (JCSG), a part of the Protein Structure Initiative (PSI), is to solve the structures for all the proteins in Thermotoga maritima , a thermophilic bacterium. T. maritima was selected as a structural genomics target based on its relatively small genome consisting of 1,877 genes and the hypothesis that the proteins expressed by a thermophilic bacterium would be easier to crystallize. Lesley et al. used Escherichia coli to express all the open reading frames (ORFs) of T. maritima . These proteins were then crystallized and structures were determined for successfully crystallized proteins using X-ray crystallography. Among other structures, this structural genomics approach allowed for the determination of the structure of the TM0449 protein, which was found to exhibit a novel fold as it did not share structural homology with any known protein. [ 4 ] The goal of the TB Structural Genomics Consortium is to determine the structures of potential drug targets in Mycobacterium tuberculosis , the bacterium that causes tuberculosis. The development of novel drug therapies against tuberculosis is particularly important given the growing problem of multi-drug-resistant tuberculosis . The fully sequenced genome of M. tuberculosis has allowed scientists to clone many of these protein targets into expression vectors for purification and structure determination by X-ray crystallography. Studies have identified a number of target proteins for structure determination, including extracellular proteins that may be involved in pathogenesis, iron-regulatory proteins, current drug targets, and proteins predicted to have novel folds. So far, structures have been determined for 708 of the proteins encoded by M. tuberculosis .
https://en.wikipedia.org/wiki/Structural_genomics
Structural health monitoring ( SHM ) involves the observation and analysis of a system over time using periodically sampled response measurements to monitor changes to the material and geometric properties of engineering structures such as bridges and buildings. In an operational environment, structures degrade with age and use. Long term SHM outputs periodically updated information regarding the ability of the structure to continue performing its intended function. After extreme events, such as earthquakes or blast loading, SHM is used for rapid condition screening. SHM is intended to provide reliable information regarding the integrity of the structure in near real time. [ 1 ] The SHM process involves selecting the excitation methods, the sensor types, number and locations, and the data acquisition/storage/transmittal hardware commonly called health and usage monitoring systems . Measurements may be taken either to directly detect any degradation or damage that may occur to a system or, indirectly, by measuring the size and frequency of loads experienced to allow the state of the system to be predicted. To directly monitor the state of a system it is necessary to identify features in the acquired data that allow one to distinguish between the undamaged and damaged structure. One of the most common feature extraction methods is based on correlating measured system response quantities , such as vibration amplitude or frequency, with observations of the degraded system. Damage accumulation testing, during which significant structural components of the system under study are degraded by subjecting them to realistic loading conditions, can also be used to identify appropriate features. This process may involve induced-damage testing, fatigue testing , corrosion growth , or temperature cycling to accumulate certain types of damage in an accelerated fashion. Qualitative and non-continuous methods have long been used to evaluate structures for their capacity to serve their intended purpose. Since the beginning of the 19th century, railroad wheel-tappers have used the sound of a hammer striking the train wheel to evaluate if damage was present. In rotating machinery, vibration monitoring has been used for decades as a performance evaluation technique. [ 1 ] Two techniques in the field of SHM are wave propagation based techniques [ 2 ] and vibration based techniques. [ 3 ] [ 4 ] [ 5 ] Broadly, the literature for vibration based SHM can be divided into two aspects, the first wherein models are proposed for the damage to determine the dynamic characteristics, also known as the direct problem, and the second, wherein the dynamic characteristics are used to determine damage characteristics, also known as the inverse problem. Several fundamental axioms, or general principles, have emerged: [ 6 ] The elements of an SHM system typically include: An example of this technology is embedding sensors in structures like bridges and aircraft . These sensors provide real time monitoring of various structural changes like stress and strain . In the case of civil engineering structures, the data provided by the sensors is usually transmitted to remote data acquisition centres. 
With the aid of modern technology, real-time control of structures (active structural control) based on information from sensors is possible. Commonly known as structural health assessment (SHA) or SHM, this concept is widely applied to various forms of infrastructure, especially as countries all over the world enter into an even greater period of construction of various infrastructures ranging from bridges to skyscrapers. Especially where damage to structures is concerned, it is important to note that there are stages of increasing difficulty, each requiring the knowledge of the previous stages, namely: It is necessary to employ signal processing and statistical classification to convert sensor data on the infrastructural health status into damage information for assessment. Operational evaluation attempts to answer four questions regarding the implementation of a damage identification capability: Operational evaluation begins to set the limitations on what will be monitored and how the monitoring will be accomplished. This evaluation starts to tailor the damage identification process to features that are unique to the system being monitored and tries to take advantage of unique features of the damage that is to be detected. The data acquisition portion of the SHM process involves selecting the excitation methods, the sensor types, number and locations, and the data acquisition/storage/transmittal hardware. Again, this process will be application specific. Economic considerations will play a major role in making these decisions. The interval at which data should be collected is another consideration that must be addressed. Because data can be measured under varying conditions, the ability to normalize the data becomes very important to the damage identification process. As it applies to SHM, data normalization is the process of separating changes in sensor readings caused by damage from those caused by varying operational and environmental conditions. One of the most common procedures is to normalize the measured responses by the measured inputs. When environmental or operational variability is an issue, the need can arise to normalize the data in some temporal fashion to facilitate the comparison of data measured at similar times of an environmental or operational cycle. Sources of variability in the data acquisition process and with the system being monitored need to be identified and minimized to the extent possible. In general, not all sources of variability can be eliminated. Therefore, it is necessary to make the appropriate measurements such that these sources can be statistically quantified. Variability can arise from changing environmental and test conditions, changes in the data reduction process, and unit-to-unit inconsistencies. Data cleansing is the process of selectively choosing data to pass on to or reject from the feature selection process. The data cleansing process is usually based on knowledge gained by individuals directly involved with the data acquisition. As an example, an inspection of the test setup may reveal that a sensor was loosely mounted and, hence, based on the judgment of the individuals performing the measurement, this set of data or the data from that particular sensor may be selectively deleted from the feature selection process. Signal processing techniques such as filtering and re-sampling can also be thought of as data cleansing procedures. Finally, the data acquisition, normalization, and cleansing portion of the SHM process should not be static. 
Insight gained from the feature selection process and the statistical model development process will provide information regarding changes that can improve the data acquisition process. The area of the SHM process that receives the most attention in the technical literature is the identification of data features that allows one to distinguish between the undamaged and damaged structure. Inherent in this feature selection process is the condensation of the data. The best features for damage identification are, again, application specific. One of the most common feature extraction methods is based on correlating measured system response quantities, such a vibration amplitude or frequency, with the first-hand observations of the degrading system. Another method of developing features for damage identification is to apply engineered flaws, similar to ones expected in actual operating conditions, to systems and develop an initial understanding of the parameters that are sensitive to the expected damage. The flawed system can also be used to validate that the diagnostic measurements are sensitive enough to distinguish between features identified from the undamaged and damaged system. The use of analytical tools such as experimentally-validated finite element models can be a great asset in this process. In multiple cases the analytical tools are used to perform numerical experiments where the flaws are introduced through computer simulation. Damage accumulation testing, during which significant structural components of the system under study are degraded by subjecting them to realistic loading conditions, can also be used to identify appropriate features. This process may involve induced-damage testing, fatigue testing, corrosion growth, or temperature cycling to accumulate certain types of damage in an accelerated fashion. Insight into the appropriate features can be gained from several types of analytical and experimental studies as described above and is usually the result of information obtained from some combination of these studies. The operational implementation and diagnostic measurement technologies needed to perform SHM produce more data than traditional uses of structural dynamics information. A condensation of the data is advantageous and necessary when comparisons of multiple feature sets obtained over the lifetime of the structure are envisioned. Also, because data will be acquired from a structure over an extended period of time and in an operational environment, robust data reduction techniques must be developed to retain feature sensitivity to the structural changes of interest in the presence of environmental and operational variability. To further aid in the extraction and recording of quality data needed to perform SHM, the statistical significance of the features should be characterized and used in the condensation process. The portion of the SHM process that has received the least attention in the technical literature is the development of statistical models for discrimination between features from the undamaged and damaged structures. Statistical model development is concerned with the implementation of the algorithms that operate on the extracted features to quantify the damage state of the structure. The algorithms used in statistical model development usually fall into three categories. 
When data are available from both the undamaged and damaged structure, the statistical pattern recognition algorithms fall into the general classification category, commonly referred to as supervised learning. Group classification and regression analysis are categories of supervised learning algorithms. Unsupervised learning refers to algorithms that are applied to data not containing examples from the damaged structure. Outlier or novelty detection is the primary class of algorithms applied in unsupervised learning applications. All of the algorithms analyze statistical distributions of the measured or derived features to enhance the damage identification process. Health monitoring of large bridges can be performed by simultaneous measurement of loads on the bridge and effects of these loads. It typically includes monitoring of: Provided with this knowledge, the engineer can: The Oregon Department of Transportation Bridge Engineering Department, in the United States, has developed and implemented a Structural Health Monitoring (SHM) program, as referenced in a technical paper by Steven Lovejoy, Senior Engineer. [ 7 ] References are available that provide an introduction to the application of fiber optic sensors to Structural Health Monitoring on bridges. [ 8 ] The following projects are currently known as some of the biggest ongoing bridge monitoring projects.
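A minimal sketch of the unsupervised, novelty-detection idea described above is shown below. It assumes the damage-sensitive feature is a measured natural frequency and simply scores new readings against an undamaged baseline with z-scores; the baseline values, the new measurements and the implied "|z| > 3" flagging rule are all hypothetical and stand in for a full SHM statistical model.

```python
import statistics

def novelty_scores(baseline, new_readings):
    """Score new natural-frequency readings against an undamaged baseline.

    Returns z-scores: readings far from the baseline mean (for example,
    |z| > 3) would be flagged as potential damage indicators. This is the
    simplest possible outlier-detection scheme, not a full SHM algorithm.
    """
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return [(x - mu) / sigma for x in new_readings]

# Hypothetical first-mode frequencies (Hz) recorded on the undamaged bridge,
# followed by two new measurements, the second of which has shifted markedly.
baseline = [2.41, 2.43, 2.40, 2.42, 2.44, 2.41, 2.42]
scores = novelty_scores(baseline, [2.42, 2.31])
print([round(z, 1) for z in scores])
```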
https://en.wikipedia.org/wiki/Structural_health_monitoring
Structural holes is a concept from social network research, originally developed by Ronald Stuart Burt. A structural hole is understood as a gap between two individuals who have complementary sources of information. The study of structural holes spans the fields of sociology, economics, and computer science. Burt introduced this concept in an attempt to explain the origin of differences in social capital. Burt's theory suggests that individuals hold certain positional advantages or disadvantages depending on how they are embedded in neighborhoods or other social structures. Most social structures tend to be characterized by dense clusters of strong connections, also known as network closure. The theory relies on the fundamental idea that the homogeneity of information, new ideas, and behavior is generally higher within any group of people than between two groups of people. [ 1 ] An individual who acts as a mediator between two or more closely connected groups of people could gain important comparative advantages. In particular, the position of a bridge between distinct groups allows him or her to transfer or gatekeep valuable information from one group to another. [ 2 ] In addition, the individual can combine all the ideas he or she receives from different sources and come up with the most innovative idea among all. [ 1 ] At the same time, a broker also occupies a precarious position, as ties with disparate groups can be fragile and time-consuming to maintain. If we compare two nodes, node A is more likely to get novel information than node B, even though they have the same number of links. This is because the nodes connected to B are also highly connected to each other, so any information that any of them could get from B could easily be obtained from the other nodes as well. Furthermore, the information that B gets from its different connections is likely to be overlapping, so connections involving node B are said to be redundant. On the contrary, the position of node A makes it serve as a bridge or 'broker' between three different clusters. Thus, node A is likely to receive some non-redundant information from its contacts. The term 'structural holes' is used for the separation between non-redundant contacts. As a result of the hole between two contacts, they provide network benefits to the third party (to node A). Bridge count is a simple and intuitive measure of structural holes in a network. A bridge is defined as a relation between two individuals who have no indirect connection to each other through mutual contacts. [ 3 ] Burt introduced a measure of a network's redundancy, which aims to estimate to what extent a contact j is redundant with other contacts of node i. Redundancy is understood as an investment of time and energy in a relationship with another node q, with whom node j is strongly connected: [ 2 ]

{\displaystyle {\text{Redundancy}}=p_{iq}m_{jq}}

where p_iq is the proportion of i's energy invested in the relationship with q, and m_jq is j's interaction with q divided by j's strongest relationship with anyone. The redundancy in a network is calculated by summing this product across all nodes q. One minus this expression gives the non-redundant portion of a relationship. The effective size of i's network is defined as the sum, over all of i's contacts j, of the non-redundant portion of each contact.
{\displaystyle {\text{Effective size of }}i{\text{'s network}}=\sum _{j}\left[1-\sum _{q}p_{iq}m_{jq}\right],\quad q\neq i,j}

The more each contact j is disconnected from i's other primary contacts, the higher the effective size. This indicator varies from 1 (the network provides only a single link) to the total number of contacts N (every contact is non-redundant). Borgatti developed a simplified formula for calculating redundancy and effective size in unweighted networks: [ 4 ]

{\displaystyle {\text{Redundancy}}={\frac {2t}{n}}}

where t is the number of ties in the egocentric network (excluding ties to the ego) and n is the number of nodes in the egocentric network (excluding the ego). This formula can be modified to calculate the effective size of the ego's network:

{\displaystyle {\text{Effective size of ego's network}}=n-{\frac {2t}{n}}}

The network constraint of a network is the sum of each connection's constraint c_ij:

{\displaystyle c_{ij}=\left(p_{ij}+\sum _{q}p_{iq}p_{qj}\right)^{2},\quad i\neq q\neq j}

This indicator measures the extent to which time and energy are concentrated within a single cluster. It consists of two components: a direct one, when a contact consumes a large proportion of a network's time and energy, and an indirect one, when a contact controls other individuals who consume a large proportion of a network's time and energy. Network constraint measures the extent to which a manager's network of colleagues is like a straitjacket around the manager, limiting his or her vision of alternative ideas and sources of support. It depends on three network characteristics: size, density, and hierarchy. Constraint on an individual is generally higher in the case of a small network (he or she has just a few contacts) and when contacts are highly connected to each other (either directly, as in a dense network, or indirectly, through a mutual central contact, as in a hierarchical network). [ 5 ] The idea behind structural holes theory is related to the strength of weak ties theory, famously developed by Mark Granovetter. According to the weak ties argument, the stronger the tie between two people, the more likely their contacts will overlap, so that they will have common ties with the same third parties. [ 6 ] This implies that bridging ties are a potential source of novel ideas. Therefore, Granovetter argues that strong ties are unlikely to transfer any novel information. [ 6 ] Both concepts rely on the same underlying model, but some differences between them can be distinguished. While Granovetter claims that whether a contact serves as a bridge depends on the tie's strength, Burt considers the opposite direction of causality. [ 7 ] Thus, he prefers the proximal cause (bridging ties), while Granovetter argues in favor of the distal cause (strength of ties). [ 7 ] Networks rich in structural holes are referred to as entrepreneurial networks, and an individual who benefits from structural holes is considered an entrepreneur. An application of this theory can be found in one of Burt's studies of an entrepreneurial network. He studied a network of 673 managers in the supply chain of a firm and measured their degree of social brokerage. All the managers had to submit their ideas about ways to improve supply chain management, which were then evaluated by judges.
[ 1 ] The findings of this empirical study: There are several practical scenarios in which structural holes have been applied, including enterprise settings, information diffusion in social networks, software development, mobile applications, and machine learning (ML)-based social prediction. [ 8 ]
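To make the measures above concrete, the sketch below computes Borgatti's simplified effective size, n − 2t/n, for each node of a small undirected, unweighted example graph. The graph and node labels are invented for illustration; weighted or directed networks require Burt's full formulation with the p and m terms given above, and network libraries such as NetworkX also provide implementations of these measures.

from itertools import combinations

# Undirected, unweighted graph as an adjacency mapping (hypothetical example).
graph = {
    "A": {"B", "C", "D"},
    "B": {"A", "C"},
    "C": {"A", "B"},
    "D": {"A"},
}

def effective_size(graph, ego):
    # Borgatti's simplification for unweighted, undirected ego networks:
    # effective size = n - 2t/n, where n is the number of the ego's contacts
    # and t is the number of ties among those contacts (ego excluded).
    contacts = graph[ego]
    n = len(contacts)
    if n == 0:
        return 0.0
    t = sum(1 for u, v in combinations(contacts, 2) if v in graph[u])
    return n - 2 * t / n

for node in graph:
    print(node, round(effective_size(graph, node), 2))
# Node A bridges the B-C cluster and D, so its effective size (2.33) exceeds
# that of B or C (1.0), whose contacts are redundant with each other.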
https://en.wikipedia.org/wiki/Structural_holes
Structural induction is a proof method that is used in mathematical logic (e.g., in the proof of Łoś' theorem), computer science, graph theory, and some other mathematical fields. It is a generalization of mathematical induction over natural numbers and can be further generalized to arbitrary Noetherian induction. Structural recursion is a recursion method bearing the same relationship to structural induction as ordinary recursion bears to ordinary mathematical induction. Structural induction is used to prove that some proposition P(x) holds for all x of some sort of recursively defined structure, such as formulas, lists, or trees. A well-founded partial order is defined on the structures ("subformula" for formulas, "sublist" for lists, and "subtree" for trees). The structural induction proof is a proof that the proposition holds for all the minimal structures and that if it holds for the immediate substructures of a certain structure S, then it must hold for S also. (Formally speaking, this then satisfies the premises of an axiom of well-founded induction, which asserts that these two conditions are sufficient for the proposition to hold for all x.) A structurally recursive function uses the same idea to define a recursive function: "base cases" handle each minimal structure, and a rule for recursion handles the larger structures built from them. Structural recursion is usually proved correct by structural induction; in particularly easy cases, the inductive step is often left out. The length and ++ functions in the example below are structurally recursive. For example, if the structures are lists, one usually introduces the partial order "<", in which L < M whenever list L is the tail of list M. Under this ordering, the empty list [] is the unique minimal element. A structural induction proof of some proposition P(L) then consists of two parts: a proof that P([]) is true, and a proof that if P(L) is true for some list L, and if L is the tail of list M, then P(M) must also be true. In general, there may exist more than one base case and/or more than one inductive case, depending on how the function or structure was constructed. In those cases, a structural induction proof of some proposition P(L) consists of a proof that P holds for every base case and a proof that every inductive case preserves P. An ancestor tree is a commonly known data structure, showing the parents, grandparents, etc. of a person as far as known. It is recursively defined: in the simplest case, an ancestor tree shows just one person; otherwise, it shows one person and, connected by branches, the two ancestor subtrees of that person's parents. As an example, the property "An ancestor tree extending over g generations shows at most 2^g − 1 persons" can be proven by structural induction as follows: a tree of one generation shows just one person, and 1 = 2^1 − 1; a tree extending over g + 1 generations shows one person plus two subtrees extending over at most g generations each, hence at most 1 + 2(2^g − 1) = 2^(g+1) − 1 persons. Hence, by structural induction, each ancestor tree satisfies the property. As another, more formal example, consider the following property of lists, called EQ below: len (L ++ M) = len L + len M. Here ++ denotes the list concatenation operation, len() the list length, and L and M are lists. In order to prove this, we need definitions for length and for the concatenation operation. Let (h : t) denote a list whose head (first element) is h and whose tail (list of remaining elements) is t, and let [] denote the empty list. The definitions for length and the concatenation operation are: len [] = 0 and len (h : t) = 1 + len t; [] ++ list = list and (h : t) ++ list = h : (t ++ list). Our proposition P(l) is that EQ is true for all lists M when L is l. We want to show that P(l) is true for all lists l. We will prove this by structural induction on lists. First we will prove that P([]) is true; that is, EQ is true for all lists M when L happens to be the empty list [].
Consider EQ when L is the empty list []:

    len ([] ++ M) = len M              (by the definition of ++)
                  = 0 + len M
                  = len [] + len M     (by the definition of len)

So this part of the theorem is proved; EQ is true for all M when L is [], because the left-hand side and the right-hand side are equal. Next, consider any nonempty list I. Since I is nonempty, it has a head item, x, and a tail list, xs, so we can express it as (x : xs). The induction hypothesis is that EQ is true for all values of M when L is xs, that is, len (xs ++ M) = len xs + len M. We would like to show that if this is the case, then EQ is also true for all values of M when L = I = (x : xs). We proceed as before:

    len ((x : xs) ++ M) = len (x : (xs ++ M))     (by the definition of ++)
                        = 1 + len (xs ++ M)       (by the definition of len)
                        = 1 + (len xs + len M)    (by the induction hypothesis)
                        = len (x : xs) + len M    (by the definition of len)

Thus, from structural induction, we obtain that P(L) is true for all lists L. Just as standard mathematical induction is equivalent to the well-ordering principle, structural induction is also equivalent to a well-ordering principle. If the set of all structures of a certain kind admits a well-founded partial order, then every nonempty subset must have a minimal element. (This is the definition of "well-founded".) The significance of this fact in this context is that it allows us to deduce that if there are any counterexamples to the theorem we want to prove, then there must be a minimal counterexample. If we can show that the existence of a minimal counterexample implies the existence of an even smaller counterexample, we have a contradiction (since the minimal counterexample isn't minimal), and so the set of counterexamples must be empty. As an example of this type of argument, consider the set of all binary trees. We will show that the number of leaves in a full binary tree is one more than the number of interior nodes. Suppose there is a counterexample; then there must exist one with the minimal possible number of interior nodes. This counterexample, C, has n interior nodes and l leaves, where n + 1 ≠ l. Moreover, C must be nontrivial, because the trivial tree has n = 0 and l = 1 and is therefore not a counterexample. C therefore has at least one leaf whose parent node is an interior node. Delete this leaf and its parent from the tree, promoting the leaf's sibling node to the position formerly occupied by its parent. This reduces both n and l by 1, so the new tree also has n + 1 ≠ l and is therefore a smaller counterexample. But by hypothesis, C was already the smallest counterexample; therefore, the supposition that there were any counterexamples to begin with must have been false. The partial ordering implied by 'smaller' here is the one that says that S < T whenever S has fewer nodes than T. Early publications about structural induction include:
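The structurally recursive definitions of len and ++ used in the proof above can also be written down directly as code. The sketch below (in Python, chosen only for illustration) models a list as either the empty tuple () or a pair (head, tail), defines length and concatenation by structural recursion, and spot-checks the proposition EQ on sample inputs; checking examples is of course not a proof, but it mirrors the structure of the definitions.

# A cons list is either () or (head, tail), mirroring [] and (h : t).
def length(lst):
    if lst == ():                     # base case: len [] = 0
        return 0
    head, tail = lst
    return 1 + length(tail)           # recursive case: len (h : t) = 1 + len t

def concat(a, b):
    if a == ():                       # base case: [] ++ M = M
        return b
    head, tail = a
    return (head, concat(tail, b))    # recursive case: (h : t) ++ M = h : (t ++ M)

def from_py(xs):
    # Build a cons list from a Python list, e.g. [1, 2] -> (1, (2, ())).
    out = ()
    for x in reversed(xs):
        out = (x, out)
    return out

L = from_py([1, 2, 3])
M = from_py([4, 5])
# EQ: len (L ++ M) = len L + len M
assert length(concat(L, M)) == length(L) + length(M)
print(length(concat(L, M)))  # 5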
https://en.wikipedia.org/wiki/Structural_induction
Structural information theory (SIT) is a theory about human perception and in particular about visual perceptual organization, which is a neuro-cognitive process. It has been applied to a wide range of research topics, [ 1 ] mostly in visual form perception but also in, for instance, visual ergonomics, data visualization, and music perception. SIT began as a quantitative model of visual pattern classification. Nowadays, it includes quantitative models of symmetry perception and amodal completion, and is theoretically sustained by a perceptually adequate formalization of visual regularity, a quantitative account of viewpoint dependencies, and a powerful form of neurocomputation. [ 2 ] SIT has been argued to be the best defined and most successful extension of Gestalt ideas. [ 3 ] It is the only Gestalt approach providing a formal calculus that generates plausible perceptual interpretations. A simplest code is a code with minimum information load, that is, a code that enables a reconstruction of the stimulus using a minimum number of descriptive parameters. Such a code is obtained by capturing a maximum amount of visual regularity and yields a hierarchical organization of the stimulus in terms of wholes and parts. The assumption that the visual system prefers simplest interpretations is called the simplicity principle. [ 4 ] Historically, the simplicity principle is an information-theoretical translation of the Gestalt law of Prägnanz, [ 5 ] which was inspired by the natural tendency of physical systems to settle into relatively stable states defined by a minimum of free energy. Furthermore, just like the later-proposed minimum description length principle in algorithmic information theory (AIT), a.k.a. the theory of Kolmogorov complexity, it can be seen as a formalization of Occam's Razor, according to which the simplest interpretation of data is the best one. Crucial to the latter finding is the distinction between, and integration of, viewpoint-independent and viewpoint-dependent factors in vision, as proposed in SIT's empirically successful model of amodal completion. [ 6 ] In the Bayesian framework, these factors correspond to prior probabilities and conditional probabilities, respectively. In SIT's model, however, both factors are quantified in terms of complexities, that is, complexities of objects and of their spatial relationships, respectively. [ 7 ] [ 8 ] [ 9 ] In SIT's formal coding model, candidate interpretations of a stimulus are represented by symbol strings, in which identical symbols refer to identical perceptual primitives (e.g., blobs or edges). Every substring of such a string represents a spatially contiguous part of an interpretation, so that the entire string can be read as a reconstruction recipe for the interpretation and, thereby, for the stimulus. These strings are then encoded (i.e., they are searched for visual regularities) to find the interpretation with the simplest code. This encoding is performed by way of symbol manipulation, which, in psychology, has led to critical statements of the sort "SIT assumes that the brain performs symbol manipulation". Such statements, however, fall in the same category as statements such as "physics assumes that nature applies formulas such as Einstein's E = mc² or Newton's F = ma" and "DST models assume that dynamic systems apply differential equations". To obtain simplest codes, SIT applies coding rules that capture the kinds of regularity called iteration, symmetry, and alternation.
These have been shown to be the only regularities that satisfy the formal criteria of (a) being holographic regularities that (b) allow for hierarchically transparent codes. [ 10 ] A crucial difference with respect to the traditionally considered transformational formalization of visual regularity is that, holographically, mirror symmetry is composed of many relationships between symmetry pairs rather than one relationship between symmetry halves. Whereas the transformational characterization may be suited better for object recognition, the holographic characterization seems more consistent with the buildup of mental representations in object perception. The perceptual relevance of the criteria of holography and transparency has been verified in the holographic approach to visual regularity. [ 11 ] It also explains that the detectability of mirror symmetries and Glass patterns in the presence of noise follows a psychophysical law that improves on Weber's law. [ 12 ]
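As a toy illustration of how a coding rule can lower a string's information load, the sketch below applies only an iteration rule (collapsing immediate repetitions, much like run-length encoding) and counts the remaining descriptive parameters. It is not SIT's actual coding model, which also exploits symmetry and alternation and searches over hierarchical decompositions; the example string and the load measure are assumptions made purely for illustration.

def iterate_code(symbols):
    # Collapse runs of identical symbols: "aaab" -> [("a", 3), ("b", 1)].
    # Only the iteration regularity is captured; symmetry and alternation are ignored.
    code = []
    for s in symbols:
        if code and code[-1][0] == s:
            code[-1] = (s, code[-1][1] + 1)
        else:
            code.append((s, 1))
    return code

def information_load(code):
    # Crude load measure: one descriptive parameter per run.
    return len(code)

stimulus = "aaaabbbaab"
code = iterate_code(stimulus)
print(code)                    # [('a', 4), ('b', 3), ('a', 2), ('b', 1)]
print(information_load(code))  # 4, versus 10 symbols in the raw string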
https://en.wikipedia.org/wiki/Structural_information_theory
A structural insulated panel, or structural insulating panel (SIP), is a form of sandwich panel used as a building material in the construction industry. A SIP is a sandwich-structured composite, consisting of an insulating layer of rigid core sandwiched between two layers of structural board. The board can be sheet metal, fibre cement, magnesium oxide board (MgO), plywood or oriented strand board (OSB), and the core can be expanded polystyrene foam (EPS), extruded polystyrene foam (XPS), polyisocyanurate foam, polyurethane foam, or composite honeycomb (HSC). The sheathing accepts all tensile forces, while the core material has to withstand only some compressive as well as shear forces. In a SIP, several components of conventional building, such as studs and joists, insulation, vapor barrier, and air barrier, can be combined. The panel can be used for many different applications, such as exterior wall, roof, floor and foundation systems. Although foam-core panels gained attention in the 1970s, the idea of using stress-skinned panels for construction began in the 1930s. Research and testing of the technology was done primarily by the Forest Products Laboratory (FPL) in Madison, Wisconsin, as part of a U.S. Forest Service attempt to conserve forest resources. In 1937, a small stressed-skin house was constructed and garnered enough attention to bring in First Lady Eleanor Roosevelt to dedicate the house. In a testament to the durability of such panel structures, it endured the Wisconsin climate and was used by the University of Wisconsin–Madison as a day care center until 1998, when it was removed to make way for a new Pharmacy School building. With the success of the stress-skinned panels, it was suggested that stronger skins could take all the structural load and eliminate the frame altogether. Thus, in 1947, structural insulated panel development began when corrugated paperboard cores were tested with various skin materials of plywood, tempered hardboard and treated paperboard. The building was dismantled in 1978, and most of the panels retained their original strength, with the exception of the paperboard, which is unsuited to outdoor exposure. Panels consisting of a polystyrene core and paper overlaid with plywood skins were used in a building in 1967, and as of 2005 the panels had performed well. SIP systems were used by Woods Constructors of Santa Paula, California, in their homes and apartments from 1965 until 1984. This work was the basis for John Thomas Woods, Paul Flather Woods, John David Woods, and Frederick Thomas Woods when they used a similar concept to patent the Footing Form for Modular homes (US Patent No. 4817353), issued on April 4, 1989. Numerous homes in Santa Paula, Fillmore, Palm Springs, and surrounding areas use SIPs as the primary method of construction. The design was awarded approval from (then) ICBO and SBCCI, now ICC. SIPs are most commonly made of OSB panels sandwiched around a foam core made of expanded polystyrene (EPS), extruded polystyrene (XPS) or rigid polyurethane foam. Other materials can be used in place of OSB, such as plywood, pressure-treated plywood for below-grade foundation walls, steel, [ 1 ] aluminum, cement board such as Hardiebacker, and even exotic materials like stainless steel, fiber-reinforced plastic, and magnesium oxide. Some SIPs use fiber-cement or plywood sheets for the panels, and agricultural fiber, such as wheat straw, for the core. The third component in SIPs is the spline or connector piece between SIPs.
Dimensional lumber is commonly used but creates thermal bridging and lowers insulation values. To maintain higher insulation values through the spline, manufacturers use insulated lumber, composite splines, mechanical locks, overlapping OSB panels, or other creative methods. Depending on the method selected, other advantages, such as full nailing surfaces or increased structural strength, may become available. SIPs are most often manufactured in a traditional factory. Processing equipment is used to regulate pressures and heat in a uniform and consistent manner. There are two main processing methods, which correspond to the materials used for the SIP core. When manufacturing a panel with a polystyrene core, both pressure and heat are required to ensure the bonding glue has penetrated and set completely. Although a number of variations exist, in general the foam core is first covered with an adhesive and the skin is set in place. The three pieces are set into a large clamping device, and pressure and heat are applied. The three pieces must stay in the clamping device until the glue has cured. When manufacturing a panel with a polyurethane core, pressure and heat are both generated from the expansion of the foam during the foaming process. The skins are set in a large clamping device which functions as a mold. The skins must be held apart from each other to allow the liquid polyurethane materials to flow into the device. Once in the device, the foam begins to rise. The mold/press is generally configured to withstand the heat and the pressures generated from the chemical foaming. The SIP is left in the mold/press to cure slightly and, when removed, will continue to cure for several days. Until recently, both of these processes required a factory setting. However, recent advancements have presented an alternative with SIP processing equipment that allows SIPs to be manufactured on the job site. This is welcome news for builders in developing countries, where the technology may be best suited to reducing greenhouse emissions and improving sustainability in housing but where such equipment has been unavailable. The use of SIPs brings many benefits and some drawbacks compared to a conventional framed building. The costs of SIPs are higher than the materials for a comparable framed building in the United States; however, this may not be true elsewhere. A well-built home using SIPs will have a tighter building envelope and the walls will have higher insulating properties, which leads to fewer drafts and a decrease in operating costs. Also, due to the standardized and all-in-one nature of SIPs, construction time can be shorter than for a frame home, and fewer tradespeople are required. The panels can be used as floor, wall, and roof, with the use of the panels as floors being of particular benefit when used above an uninsulated space below. As a result, the total life-cycle cost of a SIP-constructed building will, in general, be lower than for a conventional framed one, by as much as 40%. Whether the total construction cost (materials and labor) is lower than for conventional framing appears to depend on the circumstances, including local labor conditions and the degree to which the building design is optimized for one or the other technology. An OSB-skinned system structurally outperforms conventional stick framed construction in some cases, primarily in axial load strength. SIPs maintain similar versatility to stick framed houses when incorporating custom designs.
Also, since SIPs work as framing, insulation, and exterior sheathing, and can come precut from the factory for the specific job, the exterior building envelope can be built quite quickly. SIP panels also tend to be lightweight and compact, which aids this offsite construction. The environmental performance of SIPs, moreover, is very good due to their exceptional thermal insulation. They also offer resistance to damp and cold problems, such as compression shrinkage and cold bridging, that cannot be matched by timber and more traditional building materials. [ 2 ] When tested under laboratory conditions, the SIP, included in a wall, foundation, floor, or roof system, is installed in a steady-state (no air infiltration) environment; systems incorporating fiberglass insulation are not installed in steady-state environments, as they require ventilation to remove moisture. With the exception of structural metals, such as steel, all structural materials creep over time. In the case of SIPs, the creep potential of OSB-faced SIPs with EPS or polyurethane foam cores has been studied and creep design recommendations exist. [ 3 ] [ 4 ] The long-term effects of using unconventional facing and core materials require material-specific testing to quantify creep design values. In the United States, SIPs tend to come in sizes from 4 to 24 feet (1.2–7.3 m) in width. Elsewhere, typical product dimensions are 300, 600, or 1,200 mm wide and 2.4, 2.7, and 3 m long, with roof SIPs up to 6 m long. Smaller sections ease transportation and handling, but the use of the largest panel possible will create the best-insulated building. At 15–20 kg/m², longer panels can become difficult to handle without the use of a crane to position them, and this is a consideration that must be taken into account due to cost and site limitations. Also of note is that, when needed for special circumstances, longer spans can often be requested, such as for a long roof span. Typical U.S. height for panels is 8 or 9 feet (2.4 or 2.7 m). Panels come in thicknesses ranging from 4 to 12 inches (100–300 mm), and a rough cost is $4–$6/ft² in the U.S. [ 5 ] In 4Q 2010, new methods of forming radius, sine-curve, arched and tubular SIPs were commercialized. Due to the custom nature and technical difficulty of forming and curing specialty shapes, pricing is typically three or four times that of standard panels per foot. [ 6 ] EPS is the most common of the foams used and has an R-value (thermal resistance) of about 4 °F·ft²·h/Btu (equivalent to about 0.7 K·m²/W) per 25 mm of thickness, which would give the 3.5 inches (89 mm) of foam in a 4.5-inch-thick (110 mm) panel an R-value of 13.8 (caution: extrapolating R-values over thickness may be imprecise due to non-linear thermal properties of most materials). At face value this appears comparable to an R-13 batt of fiberglass, but because a standard stick-framed wall contains significantly more low-R-value wood, which acts as a cold bridge, the thermal performance of the R-13.8 SIP wall will be considerably better. The air-sealing features of SIP homes led the US Environmental Protection Agency's Energy Star program to establish an inspection protocol in lieu of the typically required blower door test to assess the home's air leakage. This serves to speed the process and save the builder/homeowner money. The International Building Code references APA, Plywood Design Specification 4—Design & Fabrication of Plywood Sandwich Panels for the design of SIPs.
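A rough way to quantify the thermal-bridging comparison made above is a parallel-path (area-weighted) calculation, sketched below. The framing fraction, R-values, and resulting numbers are assumed round figures for illustration only, not measured data for any particular wall assembly.

def whole_wall_r(paths):
    # paths: list of (area_fraction, R_value). The heat-flow paths act in
    # parallel, so the whole-wall R-value is 1 / sum(fraction / R).
    u = sum(frac / r for frac, r in paths)
    return 1.0 / u

# Hypothetical stick-framed wall: ~23% of the area is wood framing
# (roughly R-4.4 for 3.5 in of softwood), ~77% is an R-13 insulated cavity.
stud_wall = whole_wall_r([(0.23, 4.4), (0.77, 13.0)])

# Hypothetical SIP wall: ~97% foam at R-13.8, ~3% splines at roughly R-4.4.
sip_wall = whole_wall_r([(0.03, 4.4), (0.97, 13.8)])

print(f"stud wall R ~ {stud_wall:.1f}")  # roughly R-9
print(f"SIP wall  R ~ {sip_wall:.1f}")   # close to R-13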
[ 7 ] This document addresses the basic engineering mechanics of SIPs but does not provide design properties for the panels of any specific manufacturer. In 2007, prescriptive design provisions for OSB-faced SIPs were first introduced into the 2006 International Residential Code. These provisions provide guidance on the use of SIPs as wall panels only. Aside from these non-proprietary standards, the SIP industry has relied heavily on proprietary code evaluation reports. In early 2009, SIPA partnered with ICC NTA, LLC, a third-party product evaluation certification agency, to produce the first industry-wide code report, which is available to all SIPA members who qualify. Unlike previous code reports, the prescriptive provisions provided in the SIPA code report are derived from an engineering design methodology that permits the design professional to consider loading conditions not addressed in the code report. [ 4 ] [ 8 ]
https://en.wikipedia.org/wiki/Structural_insulated_panel
Structural integrity and failure is an aspect of engineering that deals with the ability of a structure to support a designed structural load (weight, force, etc.) without breaking, and includes the study of past structural failures in order to prevent failures in future designs. Structural integrity is the ability of an item—either a structural component or a structure consisting of many components—to hold together under a load, including its own weight, without breaking or deforming excessively. It assures that the construction will perform its designed function during reasonable use, for as long as its intended life span. Items are constructed with structural integrity to prevent catastrophic failure, which can result in injuries, severe damage, death, and/or monetary losses. Structural failure refers to the loss of structural integrity, or the loss of load-carrying structural capacity, in either a structural component or the structure itself. Structural failure is initiated when a material is stressed beyond its strength limit, causing fracture or excessive deformations; one limit state that must be accounted for in structural design is ultimate failure strength. In a well-designed system, a localized failure should not cause immediate or even progressive collapse of the entire structure. Structural integrity is the ability of a structure to withstand an intended load without failing due to fracture, deformation, or fatigue. It is a concept often used in engineering to produce items that will serve their designed purposes and remain functional for a desired service life. To construct an item with structural integrity, an engineer must first consider a material's mechanical properties, such as toughness, strength, weight, hardness, and elasticity, and then determine the size and shape necessary for the material to withstand the desired load for a long life. Since members must neither break nor bend excessively, they must be both stiff and tough. A very stiff material may resist bending, but unless it is sufficiently tough, it may have to be very large to support a load without breaking. On the other hand, a highly elastic material will bend under a load even if its high toughness prevents fracture. Furthermore, each component's integrity must correspond to its individual application in any load-bearing structure. Bridge supports need a high yield strength, whereas the bolts that hold them need good shear and tensile strength. Springs need good elasticity, but lathe tooling needs high rigidity. In addition, the entire structure must be able to support its load without its weakest links failing, as this can put more stress on other structural elements and lead to cascading failures. [ 1 ] [ 2 ] The need to build structures with integrity goes back as far as recorded history. Houses needed to be able to support their own weight, plus the weight of the inhabitants. Castles needed to be fortified to withstand assaults from invaders. Tools needed to be strong and tough enough to do their jobs. In ancient times there were no mathematical formulas to predict the integrity of a structure. Builders, blacksmiths, carpenters, and masons relied on a system of trial and error (learning from past failures), experience, and apprenticeship to make safe and sturdy structures. Historically, safety and longevity were ensured by overcompensating, for example, by using 20 tons of concrete when 10 tons would do.
Galileo was one of the first to take the strength of materials into account in 1638, in his treatise Dialogues of Two New Sciences . However, mathematical ways to calculate such material properties did not begin to develop until the 19th century. [ 3 ] The science of fracture mechanics , as it exists today, was not developed until the 1920s, when Alan Arnold Griffith studied the brittle fracture of glass. Starting in the 1940s, the infamous failures of several new technologies made a more scientific method for analyzing structural failures necessary. During World War II, over 200 welded-steel ships broke in half due to brittle fracture, caused by stresses created from the welding process, temperature changes, and by the stress concentrations at the square corners of the bulkheads. In the 1950s, several De Havilland Comets exploded in mid-flight due to stress concentrations at the corners of their squared windows, which caused cracks to form and the pressurized cabins to explode. Boiler explosions , caused by failures in pressurized boiler tanks, were another common problem during this era, and caused severe damage. The growing sizes of bridges and buildings led to even greater catastrophes and loss of life. This need to build constructions with structural integrity led to great advances in the fields of material sciences and fracture mechanics. [ 4 ] [ 5 ] Structural failure can occur from many types of problems, most of which are unique to different industries and structural types. However, most can be traced to one of five main causes. The Dee Bridge was designed by Robert Stephenson , using cast iron girders reinforced with wrought iron struts. On 24 May 1847, it collapsed as a train passed over it, killing five people. Its collapse was the subject of one of the first formal inquiries into a structural failure. This inquiry concluded that the design of the structure was fundamentally flawed, as the wrought iron did not reinforce the cast iron, and that the casting had failed due to repeated flexing. [ 6 ] The Dee bridge disaster was followed by a number of cast iron bridge collapses, including the collapse of the first Tay Rail Bridge on 28 December 1879. Like the Dee bridge, the Tay collapsed when a train passed over it, killing 75 people. The bridge failed because it was constructed from poorly made cast iron, and because designer Thomas Bouch failed to consider wind loading on it. Its collapse resulted in cast iron being replaced by steel construction, and a complete redesign in 1890 of the Forth Railway Bridge , which became the first bridge in the world entirely made of steel. [ 7 ] The 1940 collapse of the original Tacoma Narrows Bridge is sometimes characterized in physics textbooks as a classic example of resonance, although this description is misleading. The catastrophic vibrations that destroyed the bridge were not due to simple mechanical resonance, but to a more complicated oscillation between the bridge and winds passing through it, known as aeroelastic flutter . Robert H. Scanlan , a leading contributor to the understanding of bridge aerodynamics, wrote an article about this misunderstanding. [ 8 ] This collapse, and the research that followed, led to an increased understanding of wind/structure interactions. Several bridges were altered following the collapse to prevent a similar event occurring again. The only fatality was a dog. 
[ 7 ] The I-35W Mississippi River bridge (officially known simply as Bridge 9340) was an eight-lane steel truss arch bridge that carried Interstate 35W across the Mississippi River in Minneapolis , Minnesota, United States. The bridge was completed in 1967, and its maintenance was performed by the Minnesota Department of Transportation . The bridge was Minnesota's fifth–busiest, [ 9 ] [ 10 ] carrying 140,000 vehicles daily. [ 11 ] The bridge catastrophically failed during the evening rush hour on 1 August 2007, collapsing to the river and riverbanks beneath. Thirteen people were killed and 145 were injured. Following the collapse, the Federal Highway Administration advised states to inspect the 700 U.S. bridges of similar construction [ 12 ] after a possible design flaw in the bridge was discovered, related to large steel sheets called gusset plates which were used to connect girders together in the truss structure. [ 13 ] [ 14 ] Officials expressed concern about many other bridges in the United States sharing the same design and raised questions as to why such a flaw would not have been discovered in over 40 years of inspections. [ 14 ] On 4 April 2013, a building collapsed on tribal land in Mumbra , a suburb of Thane in Maharashtra , India. [ 15 ] [ 16 ] It has been called the worst building collapse in the area [ 17 ] [ nb 1 ] : 74 people died, including 18 children, 23 women, and 33 men, while more than 100 people survived. [ 20 ] [ 21 ] [ 22 ] The building was under construction and did not have an occupancy certificate for its 100 to 150 low- to middle-income residents [ 23 ] ; its only occupants were the site construction workers and their families. The building was reported to have been illegally constructed because standard practices were not followed for safe, lawful construction, land acquisition and resident occupancy. By 11 April, a total of 15 suspects were arrested including builders , engineers, municipal officials, and other responsible parties. Governmental records indicate that there were two orders to manage the number of illegal buildings in the area: a 2005 Maharashtra state order to use remote sensing and a 2010 Bombay High Court order. Complaints were also made to state and municipal officials. On 9 April, the Thane Municipal Corporation began a campaign to demolish illegal buildings in the area, focusing on "dangerous" buildings, and set up a call center to accept and track the resolutions of complaints about illegal buildings. The forest department, meanwhile, promised to address encroachment of forest land in the Thane District. On 24 April 2013, Rana Plaza , an eight-storey commercial building, collapsed in Savar , a sub-district in the Greater Dhaka Area , the capital of Bangladesh . The search for the dead ended on 13 May with the death toll of 1,134. [ 24 ] Approximately 2,515 injured people were rescued from the building alive. [ 25 ] [ 26 ] It is considered to be the deadliest garment-factory accident in history, as well as the deadliest accidental structural failure in modern human history. [ 23 ] [ 27 ] The building contained clothing factories, a bank, apartments, and several other shops. The shops and the bank on the lower floors immediately closed after cracks were discovered in the building. [ 28 ] [ 29 ] [ 30 ] Warnings to avoid using the building after cracks appeared the day before had been ignored. Garment workers were ordered to return the following day and the building collapsed during the morning rush-hour. 
[ 31 ] On 29 June 1995, the five-story Sampoong Department Store in the Seocho District of Seoul, South Korea, collapsed, resulting in the deaths of 502 people, with another 1,445 being trapped. In April 1995, cracks began to appear in the ceiling of the fifth floor of the store's south wing due to the presence of an air-conditioning unit on the weakened roof of the poorly built structure. On the morning of 29 June, as the number of cracks in the ceiling increased dramatically, store managers closed the top floor and shut off the air conditioning, but failed to shut the building down or issue formal evacuation orders, even as the executives themselves left the premises as a precaution. Five hours before the collapse, the first of several loud bangs was heard emanating from the top floors, as the vibration of the air conditioning caused the cracks in the slabs to widen further. Amid customer reports of vibration in the building, the air conditioning was turned off, but the cracks in the floors had already grown to 10 cm wide. At about 5:00 p.m. local time, the fifth-floor ceiling began to sink, and at 5:57 p.m., the roof gave way, sending the air-conditioning unit crashing through into the already-overloaded fifth floor. On 16 May 1968, the 22-story residential tower Ronan Point in the London Borough of Newham partially collapsed when a relatively small gas explosion on the 18th floor caused a structural wall panel to be blown away from the building. The tower was constructed of precast concrete, and the failure of the single panel caused one entire corner of the building to collapse. The panel was able to be blown out because there was insufficient reinforcement steel passing between the panels. This also meant that the loads carried by the panel could not be redistributed to other adjacent panels, because there was no route for the forces to follow. As a result of the collapse, building regulations were overhauled to prevent disproportionate collapse, and the understanding of precast concrete detailing was greatly advanced. Many similar buildings were altered or demolished as a result of the collapse. [ 32 ] On 19 April 1995, the nine-story concrete-framed Alfred P. Murrah Federal Building in Oklahoma City was struck by a truck bomb, causing a partial collapse that resulted in the deaths of 168 people. The bomb, though large, caused a significantly disproportionate collapse of the structure. The bomb blew all the glass off the front of the building and completely shattered a ground floor reinforced concrete column (see brisance). At the second story level a wider column spacing existed, and loads from upper story columns were transferred into fewer columns below by girders at the second floor level. The removal of one of the lower story columns caused neighbouring columns to fail due to the extra load, eventually leading to the complete collapse of the central portion of the building. The bombing was one of the first to highlight the extreme forces that blast loading from terrorism can exert on buildings, and led to increased consideration of terrorism in the structural design of buildings. [ 33 ] The Versailles wedding hall (Hebrew: אולמי ורסאי), located in Talpiot, Jerusalem, is the site of the worst civil disaster in Israel's history. At 22:43 on Thursday night, 24 May 2001, during the wedding of Keren and Asaf Dror, a large portion of the third floor of the four-story building collapsed, killing 23 people. The bride and the groom survived.
In the September 11 attacks, two commercial airliners were deliberately crashed into the Twin Towers of the World Trade Center in New York City. The impact, explosion, and resulting fires caused both towers to collapse within less than two hours. The impacts severed exterior columns and damaged core columns, redistributing the loads that these columns had carried. This redistribution of loads was greatly influenced by the hat trusses at the top of each building. [ 34 ] The impacts dislodged some of the fireproofing from the steel, increasing its exposure to the heat of the fires. Temperatures became high enough to weaken the core columns to the point of creep and plastic deformation under the weight of higher floors. The heat of the fires also weakened the perimeter columns and floors, causing the floors to sag and exerting an inward force on the exterior walls of the building. WTC Building 7, a 47-story skyscraper, also collapsed later that day, falling within seconds due to a combination of a large fire inside the building and heavy structural damage from the collapse of the North Tower. [ 35 ] [ 36 ] On 24 June 2021, Champlain Towers South, a 12-story condominium building in Surfside, Florida, partially collapsed, causing dozens of injuries and 98 deaths. [ 37 ] The collapse was captured on video. [ 38 ] One person was rescued from the rubble, [ 39 ] and about 35 people were rescued on 24 June from the uncollapsed portion of the building. Long-term degradation of reinforced concrete support structures in the underground parking garage, due to water penetration and corrosion of the reinforcing steel, has been considered as a factor in, or the cause of, the collapse. The issues had been reported in 2018 and noted as "much worse" in April 2021. A $15 million program of remedial works had been approved at the time of the collapse. On 24 January 2024, the spire of a Gothic-revival stone church collapsed, bringing down the roof and irretrievably damaging the structure. [ 40 ] Repeat structural failures on the same type of aircraft occurred in 1954, when two de Havilland Comet C1 jet airliners crashed due to decompression caused by metal fatigue, and in 1963–64, when the vertical stabilizer on four Boeing B-52 bombers broke off in mid-air. On 8 August 1991, at 16:00 UTC, the Warsaw radio mast, the tallest man-made object ever built before the erection of the Burj Khalifa, collapsed as a consequence of an error made while exchanging the guy-wires on the highest stock. The mast first bent and then snapped at roughly half its height. As it fell, it destroyed a small mobile crane belonging to Mostostal Zabrze. As all workers had left the mast before the exchange procedures, there were no fatalities, in contrast to the similar collapse of the WLBT Tower in 1997. On 17 July 1981, two suspended walkways through the lobby of the Hyatt Regency in Kansas City, Missouri, collapsed, killing 114 and injuring more than 200 people [ 41 ] at a tea dance. The collapse was due to a late change in design, altering the method in which the rods supporting the walkways were connected to them and inadvertently doubling the forces on the connection. The failure highlighted the need for good communication between design engineers and contractors, and for rigorous checks on designs and especially on contractor-proposed design changes. The failure is a standard case study in engineering courses around the world, and is used to teach the importance of ethics in engineering. [ 42 ] [ 43 ]
https://en.wikipedia.org/wiki/Structural_integrity_and_failure
In chemistry, a structural isomer (or constitutional isomer in the IUPAC nomenclature [ 1 ]) of a compound is a compound that contains the same number and type of atoms, but with a different connectivity (i.e. arrangement of bonds) between them. [ 2 ] [ 3 ] The term metamer was formerly used for the same concept. [ 4 ] For example, butanol H 3 C−(CH 2 ) 3 −OH , methyl propyl ether H 3 C−(CH 2 ) 2 −O−CH 3 , and diethyl ether (H 3 CCH 2 −) 2 O have the same molecular formula C 4 H 10 O but are three distinct structural isomers. The concept applies also to polyatomic ions with the same total charge. A classical example is the cyanate ion O=C=N − and the fulminate ion C − ≡N + −O − . It is also extended to ionic compounds, so that (for example) ammonium cyanate [NH 4 ] + [O=C=N] − and urea (H 2 N−) 2 C=O are considered structural isomers, [ 4 ] and so are methylammonium formate [H 3 C−NH 3 ] + [HCO 2 ] − and ammonium acetate [NH 4 ] + [H 3 C−CO 2 ] − . Structural isomerism is the most radical type of isomerism. It is opposed to stereoisomerism, in which the atoms and bonding scheme are the same, but only the relative spatial arrangement of the atoms is different. [ 5 ] [ 6 ] Examples of the latter are the enantiomers, whose molecules are mirror images of each other, and the cis and trans versions of 2-butene. Among the structural isomers, one can distinguish several classes, including skeletal isomers, positional isomers (or regioisomers), functional isomers, tautomers, [ 7 ] and structural isotopomers. [ 8 ] A skeletal isomer of a compound is a structural isomer that differs from it in the atoms and bonds that are considered to comprise the "skeleton" of the molecule. For organic compounds, such as alkanes, that usually means the carbon atoms and the bonds between them. For example, there are three skeletal isomers of pentane: n -pentane (often called simply "pentane"), isopentane (2-methylbutane) and neopentane (dimethylpropane). [ 9 ] If the skeleton is acyclic, as in the above example, one may use the term chain isomerism. Position isomers (also positional isomers or regioisomers) are structural isomers that can be viewed as differing only in the position of a functional group, substituent, or some other feature on the same "parent" structure. [ 10 ] For example, replacing one of the 12 hydrogen atoms –H by a hydroxyl group –OH on the n -pentane parent molecule can give any of three different position isomers: 1-pentanol, 2-pentanol, or 3-pentanol. Another example of regioisomers is the pair α-linolenic and γ-linolenic acids, both octadecatrienoic acids, each of which has three double bonds, but in different positions along the chain. Functional isomers are structural isomers which have different functional groups, resulting in significantly different chemical and physical properties. [ 11 ] An example is the pair propanal H 3 C–CH 2 –C(=O)–H and acetone H 3 C–C(=O)–CH 3 : the first has a –C(=O)H functional group, which makes it an aldehyde, whereas the second has a C–C(=O)–C group, which makes it a ketone. Another example is the pair ethanol H 3 C–CH 2 –OH (an alcohol) and dimethyl ether H 3 C–O–CH 3 (an ether). In contrast, 1-propanol and 2-propanol are structural isomers, but not functional isomers, since they have the same significant functional group (the hydroxyl –OH) and are both alcohols. Besides the different chemistry, functional isomers typically have very different infrared spectra.
The infrared spectrum is largely determined by the vibration modes of the molecule, and functional groups like hydroxyl and ester have very different vibration modes. Thus 1-propanol and 2-propanol have relatively similar infrared spectra because of the hydroxyl group, and both are fairly different from that of methyl ethyl ether. In chemistry, one usually ignores distinctions between isotopes of the same element. However, in some situations (for instance in Raman, NMR, or microwave spectroscopy) one may treat different isotopes of the same element as different elements. In the latter case, two molecules with the same number of atoms of each isotope but distinct bonding schemes are said to be structural isotopomers. Thus, for example, ethene would have no structural isomers under the first interpretation; but replacing two of the hydrogen atoms ( 1 H) by deuterium atoms ( 2 H) may yield either of two structural isotopomers (1,1-dideuteroethene and 1,2-dideuteroethene), if both carbon atoms are the same isotope. If, in addition, the two carbons are different isotopes (say, 12 C and 13 C), there would be three distinct structural isotopomers, since 1- 13 C-1,1-dideuteroethene would be different from 1- 13 C-2,2-dideuteroethene. And, in both cases, the 1,2-dideutero structural isotopomer would occur as two stereoisotopomers, cis and trans. Two molecules (including polyatomic ions) A and B have the same structure if each atom of A can be paired with an atom of B of the same element, in a one-to-one way, so that for every bond in A there is a bond in B, of the same type, between corresponding atoms; and vice versa. [ 3 ] This requirement applies also to complex bonds that involve three or more atoms, such as the delocalized bonding in the benzene molecule and other aromatic compounds. Depending on the context, one may require that each atom be paired with an atom of the same isotope, not just of the same element. Two molecules can then be said to be structural isomers (or, if isotopes matter, structural isotopomers) if they have the same molecular formula but do not have the same structure. Structural symmetry of a molecule can be defined mathematically as a permutation of the atoms that exchanges at least two atoms but does not change the molecule's structure. Two atoms can then be said to be structurally equivalent if there is a structural symmetry that takes one to the other. [ 12 ] Thus, for example, all four hydrogen atoms of methane are structurally equivalent, because any permutation of them will preserve all the bonds of the molecule. Likewise, all six hydrogens of ethane ( C 2 H 6 ) are structurally equivalent to each other, as are the two carbons, because any hydrogen can be switched with any other, either by a permutation that swaps just those two atoms, or by a permutation that swaps the two carbons and each hydrogen in one methyl group with a different hydrogen on the other methyl. Either operation preserves the structure of the molecule. That is the case also for the hydrogen atoms in cyclopentane, allene, 2-butyne, hexamethylenetetramine, prismane, cubane, dodecahedrane, etc. On the other hand, the hydrogen atoms of propane are not all structurally equivalent. The six hydrogens attached to the first and third carbons are equivalent, as in ethane, and the two attached to the middle carbon are equivalent to each other; but there is no equivalence between these two equivalence classes.
Structural equivalences between atoms of a parent molecule reduce the number of positional isomers that can be obtained by replacing those atoms with a different element or group. Thus, for example, the structural equivalence between the six hydrogens of ethane C 2 H 6 means that there is just one structural isomer of ethanol C 2 H 5 OH , not 6. The eight hydrogens of propane C 3 H 8 are partitioned into two structural equivalence classes (the six on the methyl groups, and the two on the central carbon); therefore there are only two positional isomers of propanol ( 1-propanol and 2-propanol ). Likewise there are only two positional isomers of butanol, and three of pentanol or hexanol. Once a substitution is made on a parent molecule, its structural symmetry is usually reduced, meaning that atoms that were formerly equivalent may no longer be so. Thus substitution of two or more equivalent atoms by the same element may generate more than one positional isomer. The classical example is the derivatives of benzene. Its six hydrogens are all structurally equivalent, and so are the six carbons, because the structure is not changed if the atoms are permuted in ways that correspond to flipping the molecule over or rotating it by multiples of 60 degrees. Therefore, replacing any hydrogen by chlorine yields only one chlorobenzene. However, with that replacement, the atom permutations that moved that hydrogen are no longer valid. Only one nontrivial permutation remains, which corresponds to flipping the molecule over while keeping the chlorine fixed. The five remaining hydrogens then fall into three different equivalence classes: the one opposite to the chlorine is a class by itself (called the para position), the two closest to the chlorine form another class ( ortho ), and the remaining two are the third class ( meta ). Thus a second substitution of hydrogen by chlorine can yield three positional isomers: 1,2- or ortho -, 1,3- or meta -, and 1,4- or para -dichlorobenzene. For the same reason, there is only one phenol (hydroxybenzene), but three benzenediols; and one toluene (methylbenzene), but three cresols (methylphenols) and three xylenes (dimethylbenzenes). On the other hand, the second replacement (by the same substituent) may preserve or even increase the symmetry of the molecule, and thus may preserve or reduce the number of equivalence classes for the next replacement. Thus, the four remaining hydrogens in meta -dichlorobenzene still fall into three classes, while those of ortho - fall into two, and those of para - are all equivalent again. Still, some of these 3 + 2 + 1 = 6 substitutions end up yielding the same structure, so there are only three structurally distinct trichlorobenzenes: 1,2,3-, 1,2,4-, and 1,3,5-. If the substituents at each step are different, there will usually be more structural isomers. Xylenol, which is benzene with one hydroxyl substituent and two methyl substituents, has a total of 6 isomers: 2,3-, 2,4-, 2,5-, 2,6-, 3,4-, and 3,5-xylenol. Enumerating or counting structural isomers in general is a difficult problem, since one must take into account several bond types (including delocalized ones), cyclic structures, structures that cannot possibly be realized due to valence or geometric constraints, and non-separable tautomers. For example, there are nine structural isomers with molecular formula C 3 H 6 O having different bond connectivities. Seven of them are air-stable at room temperature, and these are given in the table below.
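The counting argument above can be mechanized by labelling benzene's ring positions 0–5, treating its structural symmetries as the rotations and reflections of a hexagon, and counting orbits of substitution patterns. The sketch below reproduces the counts mentioned in the text (three dichlorobenzenes and three trichlorobenzenes); it handles only a single ring with one kind of substituent and is an illustration, not a general isomer enumerator.

from itertools import combinations

# The 12 structural symmetries of the benzene ring acting on positions 0..5:
# 6 rotations and 6 reflections of a regular hexagon.
ROTATIONS = [lambda i, k=k: (i + k) % 6 for k in range(6)]
REFLECTIONS = [lambda i, k=k: (k - i) % 6 for k in range(6)]
SYMMETRIES = ROTATIONS + REFLECTIONS

def canonical(pattern):
    # Canonical representative of a set of substituted positions under the
    # ring symmetries, so symmetry-equivalent patterns compare equal.
    images = [frozenset(g(i) for i in pattern) for g in SYMMETRIES]
    return min(tuple(sorted(img)) for img in images)

def count_isomers(n_substituents):
    # Number of distinct ways to place n identical substituents on the ring.
    reps = {canonical(frozenset(c)) for c in combinations(range(6), n_substituents)}
    return len(reps)

print(count_isomers(2))  # 3: ortho, meta, para (e.g. the dichlorobenzenes)
print(count_isomers(3))  # 3: 1,2,3-, 1,2,4-, and 1,3,5- (the trichlorobenzenes)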
Two structural isomers are the enol tautomers of the carbonyl isomers (propionaldehyde and acetone), but these are not stable. [ 13 ]
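As a rough illustration of the orbit counting described above, the following sketch (in Python, with function names of our own choosing) counts the positional isomers of benzene substituted with identical groups by treating the ring's rotations and reflections as structural symmetries; it is a toy model that considers only the connectivity of the six ring positions.

```python
from itertools import combinations

# Vertices of the benzene ring, numbered 0..5 around the cycle.
N = 6

def ring_symmetries(n=N):
    """All rotations and reflections of an n-cycle (the dihedral group D_n)."""
    syms = []
    for k in range(n):
        syms.append(tuple((i + k) % n for i in range(n)))   # rotation by k positions
        syms.append(tuple((k - i) % n for i in range(n)))   # a reflection
    return syms

def count_positional_isomers(n_substituents, syms=None):
    """Count orbits of n-element position sets under the ring symmetries,
    i.e. the number of distinct positional isomers for identical substituents."""
    syms = syms or ring_symmetries()
    seen, orbits = set(), 0
    for positions in combinations(range(N), n_substituents):
        canonical = min(tuple(sorted(s[i] for i in positions)) for s in syms)
        if canonical not in seen:
            seen.add(canonical)
            orbits += 1
    return orbits

print(count_positional_isomers(1))  # 1  (only one chlorobenzene)
print(count_positional_isomers(2))  # 3  (ortho-, meta-, para-dichlorobenzene)
print(count_positional_isomers(3))  # 3  (1,2,3-, 1,2,4-, 1,3,5-trichlorobenzene)
```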
https://en.wikipedia.org/wiki/Structural_isomer
A structural load or structural action is a mechanical load (more generally a force ) applied to structural elements . [ 1 ] [ 2 ] A load causes stress , deformation , displacement or acceleration in a structure . Structural analysis , a discipline in engineering , analyzes the effects of loads on structures and structural elements. Excess load may cause structural failure , so this should be considered and controlled during the design of a structure. Particular mechanical structures—such as aircraft , satellites , rockets , space stations , ships , and submarines —are subject to their own particular structural loads and actions. [ 3 ] Engineers often evaluate structural loads based upon published regulations , contracts , or specifications . Accepted technical standards are used for acceptance testing and inspection . In civil engineering , specified loads are the best estimate of the actual loads a structure is expected to carry. These loads come in many different forms, such as people, equipment, vehicles, wind, rain, snow, earthquakes, the building materials themselves, etc. Specified loads also known as characteristic loads in many cases. Buildings will be subject to loads from various sources. The principal ones can be classified as live loads (loads which are not always present in the structure), dead loads (loads which are permanent and immovable excepting redesign or renovation) and wind load, as described below. In some cases structures may be subject to other loads, such as those due to earthquakes or pressures from retained material. The expected maximum magnitude of each is referred to as the characteristic load. Dead loads are static forces that are relatively constant for an extended time. They can be in tension or compression . The term can refer to a laboratory test method or to the normal usage of a material or structure. Live loads are usually variable or moving loads . These can have a significant dynamic element and may involve considerations such as impact , momentum , vibration , slosh dynamics of fluids, etc. An impact load is one whose time of application on a material is less than one-third of the natural period of vibration of that material. Cyclic loads on a structure can lead to fatigue damage, cumulative damage, or failure. These loads can be repeated loadings on a structure or can be due to vibration . Imposed loads are those associated with occupation and use of the building; their magnitude is less clearly defined and is generally related to the use of the building. Structural loads are an important consideration in the design of buildings. Building codes require that structures be designed and built to safely resist all actions that they are likely to face during their service life, while remaining fit for use. [ 4 ] Minimum loads or actions are specified in these building codes for types of structures, geographic locations, usage and building materials . [ 5 ] Structural loads are split into categories by their originating cause. In terms of the actual load on a structure, there is no difference between dead or live loading, but the split occurs for use in safety calculations or ease of analysis on complex models. To meet the requirement that design strength be higher than maximum loads, building codes prescribe that, for structural design, loads are increased by load factors. These load factors are, roughly, a ratio of the theoretical design strength to the maximum load expected in service. 
They are developed to help achieve the desired level of reliability of a structure [ 6 ] based on probabilistic studies that take into account the load's originating cause, recurrence, distribution, and static or dynamic nature. [ 7 ] The dead load includes loads that are relatively constant over time, including the weight of the structure itself, and immovable fixtures such as walls, plasterboard or carpet . The roof is also a dead load. Dead loads are also known as permanent or static loads . Building materials are not dead loads until constructed in permanent position. [ 8 ] [ 9 ] [ 10 ] IS875(part 1)-1987 give unit weight of building materials, parts, components. Live loads, or imposed loads, are temporary, of short duration, or a moving load . These dynamic loads may involve considerations such as impact , momentum , vibration , slosh dynamics of fluids and material fatigue . Live loads, sometimes also referred to as probabilistic loads, include all the forces that are variable within the object's normal operation cycle not including construction or environmental loads. Roof and floor live loads are produced during maintenance by workers, equipment and materials, and during the life of the structure by movable objects, such as planters and people. Bridge live loads are produced by vehicles traveling over the deck of the bridge. Environmental loads are structural loads caused by natural forces such as wind, rain, snow, earthquake or extreme temperatures. Engineers must also be aware of other actions that may affect a structure, such as: A load combination results when more than one load type acts on the structure. Building codes usually specify a variety of load combinations together with load factors (weightings) for each load type in order to ensure the safety of the structure under different maximum expected loading scenarios. For example, in designing a staircase , a dead load factor may be 1.2 times the weight of the structure, and a live load factor may be 1.6 times the maximum expected live load. These two "factored loads" are combined (added) to determine the "required strength" of the staircase. The size of the load factor is based on the probability of exceeding any specified design load. Dead loads have small load factors, such as 1.2, because weight is mostly known and accounted for, such as structural members, architectural elements and finishes, large pieces of mechanical, electrical and plumbing (MEP) equipment, and for buildings, it's common to include a Super Imposed Dead Load (SIDL) of around 5 pounds per square foot (psf) accounting for miscellaneous weight such as bolts and other fasteners, cabling, and various fixtures or small architectural elements. Live loads, on the other hand, can be furniture, moveable equipment, or the people themselves, and may increase beyond normal or expected amounts in some situations, so a larger factor of 1.6 attempts to quantify this extra variability. Snow will also use a maximum factor of 1.6, while lateral loads (earthquakes and wind) are defined such that a 1.0 load factor is practical. Multiple loads may be added together in different ways, such as 1.2*Dead + 1.0*Live + 1.0*Earthquake + 0.2*Snow, or 1.2*Dead + 1.6(Snow, Live(roof), OR Rain) + (1.0*Live OR 0.5*Wind). For aircraft, loading is divided into two major categories: limit loads and ultimate loads. [ 11 ] Limit loads are the maximum loads a component or structure may carry safely. 
Ultimate loads are the limit loads times a factor of 1.5 or the point beyond which the component or structure will fail. [ 11 ] Gust loads are determined statistically and are provided by an agency such as the Federal Aviation Administration . Crash loads are loosely bounded by the ability of structures to survive the deceleration of a major ground impact . [ 12 ] Other loads that may be critical are pressure loads (for pressurized, high-altitude aircraft) and ground loads. Loads on the ground can be from adverse braking or maneuvering during taxiing . Aircraft are constantly subjected to cyclic loading. These cyclic loads can cause metal fatigue . [ 13 ]
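The factored load combinations mentioned above can be illustrated with a short sketch; the factors below loosely restate the examples given in the text (1.2 for dead load, 1.6 for live load, and so on) and are not drawn from any particular building code, and the function names are invented for this example.

```python
# Illustrative LRFD-style load combinations, loosely following the examples in
# the text; not a substitute for an applicable building code.
LOAD_COMBINATIONS = [
    {"dead": 1.2, "live": 1.6, "snow": 0.5},
    {"dead": 1.2, "live": 1.0, "earthquake": 1.0, "snow": 0.2},
    {"dead": 1.2, "snow": 1.6, "wind": 0.5},
    {"dead": 1.4},
]

def factored_load(service_loads, combination):
    """Sum factor * load over the load types present in one combination."""
    return sum(factor * service_loads.get(kind, 0.0)
               for kind, factor in combination.items())

def required_strength(service_loads, combinations=LOAD_COMBINATIONS):
    """The governing (largest) factored load over all combinations."""
    return max(factored_load(service_loads, c) for c in combinations)

# Example: service loads on a staircase, in kN.
loads = {"dead": 12.0, "live": 20.0, "snow": 0.0}
print(required_strength(loads))   # 1.2*12 + 1.6*20 = 46.4 kN governs
```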
https://en.wikipedia.org/wiki/Structural_load
Structural engineering depends on the knowledge of materials and their properties, in order to understand how different materials resist and support loads. Common structural materials are: Wrought iron is the simplest form of iron, and is almost pure iron (typically less than 0.15% carbon). It usually contains some slag . Its uses are almost entirely obsolete, and it is no longer commercially produced. Wrought iron is very poor in fires. It is ductile, malleable and tough. It does not corrode as easily as steel. Cast iron is a brittle form of iron which is weaker in tension than in compression. It has a relatively low melting point, good fluidity, castability, excellent machinability and wear resistance. Though almost entirely replaced by steel in building structures, cast irons have become an engineering material with a wide range of applications, including pipes, machine and car parts. Cast iron retains high strength in fires, despite its low melting point. It is usually around 95% iron, with between 2.1% and 4% carbon and between 1% and 3% silicon. It does not corrode as easily as steel. Steel is an iron alloy with controlled level of carbon (between 0.0 and 1.7% carbon). Steel is used extremely widely in all types of structures, due to its relatively low cost, high strength-to-weight ratio and speed of construction. Steel is a ductile material, which will behave elastically until it reaches yield (point 2 on the stress–strain curve), when it becomes plastic and will fail in a ductile manner (large strains, or extensions, before fracture at point 3 on the curve). Steel is equally strong in tension and compression. Steel is weak in fires, and must be protected in most buildings. Despite its high strength to weight ratio, steel buildings have as much thermal mass as similar concrete buildings. The elastic modulus of steel is approximately 205 GPa . Steel is very prone to corrosion ( rust ). Stainless steel is an iron-carbon alloy with a minimum of 10.5% chromium content. There are different types of stainless steel, containing different proportions of iron, carbon, molybdenum , nickel . It has similar structural properties to steel, although its strength varies significantly. It is rarely used for primary structure, and more for architectural finishes and building cladding. It is highly resistant to corrosion and staining. Concrete is used extremely widely in building and civil engineering structures, due to its low cost, flexibility, durability, and high strength. It also has high resistance to fire. Concrete is a non-linear, non-elastic and brittle material. It is strong in compression and very weak in tension. It behaves non-linearly at all times. Because it has essentially zero strength in tension, it is almost always used as reinforced concrete , a composite material. It is a mixture of sand , aggregate, cement and water. It is placed in a mould, or form, as a liquid, and then it sets (goes off), due to a chemical reaction between the water and cement. The hardening of the concrete is called hydration. The reaction is exothermic (gives off heat). Concrete increases in strength continually from the day it is cast. Assuming it is not cast under water or in constantly 100% relative humidity, it shrinks over time as it dries out, and it deforms over time due to a phenomenon called creep . Its strength depends highly on how it is mixed, poured, cast, compacted, cured (kept wet while setting), and whether or not any admixtures were used in the mix. 
It can be cast into any shape that a form can be made for. Its colour, quality, and finish depend upon the complexity of the structure, the material used for the form, and the skill of the worker. The elastic modulus of concrete can vary widely and depends on the concrete mix, age, and quality, as well as on the type and duration of loading applied to it. It is usually taken as approximately 25 GPa for long-term loads once it has attained its full strength (usually considered to be at 28 days after casting). It is taken as approximately 38 GPa for very short-term loading, such as footfalls. Concrete has very favourable properties in fire – it is not adversely affected by fire until it reaches very high temperatures. It also has very high mass, so it is good for providing sound insulation and heat retention (leading to lower energy requirements for the heating of concrete buildings). This is offset by the fact that producing and transporting concrete is very energy intensive. To study the material behavior plenty of numerical models were developed, e.g. the microplane model for constitutive laws of materials . Reinforced concrete is concrete in which steel reinforcement bars ("rebars"), plates or fibers have been incorporated to strengthen a material that would otherwise be brittle. In industrialised countries, nearly all concrete used in construction is reinforced concrete. Due to its weakness in tension capacity, concrete will fail suddenly and in brittle manner under flexural (bending) or tensile force unless adequately reinforced with steel. Prestressed concrete is a method for overcoming the concrete's natural weakness in tension . [ 1 ] [ 2 ] It can be used to produce beams , floors or bridges with a longer span than is practical with ordinary reinforced concrete . Prestressing tendons (generally of high tensile steel cable or rods) are used to provide a clamping load which produces a compressive stress that offsets the tensile stress that the concrete compression member would otherwise experience due to a bending load. Aluminium is a soft, lightweight, malleable metal. The yield strength of pure aluminium is 7–11 MPa, while aluminium alloys have yield strengths ranging from 200 MPa to 600 MPa. Aluminium has about one-third the density and stiffness of steel. It is ductile, and easily machined, cast, and extruded. Corrosion resistance is excellent due to a thin surface layer of aluminium oxide that forms when the metal is exposed to air, effectively preventing further oxidation. The strongest aluminium alloys are less corrosion resistant due to galvanic reactions with alloyed copper. Aluminium is used in some building structures (mainly in facades) and very widely in aircraft engineering because of its good strength to weight ratio. It is a relatively expensive material. In aircraft it is gradually being replaced by carbon composite materials. Composite materials are used increasingly in vehicles and aircraft structures, and to some extent in other structures. They are increasingly used in bridges, especially for conservation of old structures such as Coalport cast iron bridge built in 1818. Composites are often anisotropic (they have different material properties in different directions) as they can be laminar materials. They most often behave non-linearly and will fail in a brittle manner when overloaded. They provide extremely good strength to weight ratios, but are also very expensive. 
The manufacturing processes, which are often extrusion, do not currently provide the economical flexibility that concrete or steel provide. The most commonly used in structural applications are glass-reinforced plastics . Masonry has been used in structures for thousands of years, and can take the form of stone, brick or blockwork. Masonry is very strong in compression but cannot carry tension (because the mortar between bricks or blocks is unable to carry tension). Because it cannot carry structural tension, it also cannot carry bending, so masonry walls become unstable at relatively small heights. High masonry structures require stabilisation against lateral loads from buttresses (as with the flying buttresses seen in many European medieval churches) or from windposts . Historically masonry was constructed with no mortar or with lime mortar. In modern times cement based mortars are used. The mortar glues the blocks together, and also smooths out the interface between the blocks, avoiding localised point loads that might have led to cracking. Since the widespread use of concrete, stone is rarely used as a primary structural material, often only appearing as a cladding, because of its cost and the high skills needed to produce it. Brick and concrete blockwork have taken its place. Masonry, like concrete, has good sound insulation properties and high thermal mass, but is generally less energy intensive to produce. It is just as energy intensive as concrete to transport. Timber is the oldest of structural materials, and though mainly supplanted by steel, masonry and concrete, it is still used in a significant number of buildings. The properties of timber are non-linear and very variable, depending on the quality, treatment of wood, and type of wood supplied. The design of wooden structures is based strongly on empirical evidence. Wood is strong in tension and compression but can be weak in bending due to its fibrous structure. Wood is relatively good in a fire as it chars, which provides the wood in the centre of the element with some protection and allows the structure to retain some strength for a reasonable length of time.
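As a rough illustration of the strength-to-weight and stiffness comparisons discussed above, the sketch below tabulates generic textbook values (chosen for illustration only, not design values) and computes specific stiffness and specific strength for a few of the materials described.

```python
# Approximate, typical room-temperature values for illustration only
# (density in kg/m^3, Young's modulus in GPa, representative strength in MPa).
materials = {
    #                   density  E_GPa  strength_MPa
    "structural steel": (7850,   205,   355),   # S355 yield strength
    "aluminium alloy":  (2700,    70,   300),   # mid-range alloy yield
    "concrete (C30)":   (2400,    25,    30),   # compressive strength
    "softwood timber":  ( 500,    11,    20),   # bending, parallel to grain
}

for name, (rho, e_gpa, strength_mpa) in materials.items():
    specific_stiffness = e_gpa * 1e9 / rho        # Pa per (kg/m^3)
    specific_strength = strength_mpa * 1e6 / rho  # Pa per (kg/m^3)
    print(f"{name:18s}  E/rho = {specific_stiffness/1e6:6.1f}   "
          f"strength/rho = {specific_strength/1e3:6.1f}")
```

With these illustrative numbers, steel and aluminium come out with comparable specific stiffness, which is one reason aluminium competes with steel where weight governs.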
https://en.wikipedia.org/wiki/Structural_material
Structural mechanics or mechanics of structures is the computation of deformations, deflections, and internal forces or stresses (stress equivalents) within structures, either for design or for performance evaluation of existing structures. [ 1 ] It is one subset of structural analysis . Structural mechanics analysis needs input data such as structural loads , the structure's geometric representation and support conditions, and the materials' properties. Output quantities may include support reactions, stresses and displacements . Advanced structural mechanics may include the effects of stability and non-linear behaviors. Mechanics of structures is a field of study within applied mechanics that investigates the behavior of structures under mechanical loads, such as bending of a beam, buckling of a column, torsion of a shaft, deflection of a thin shell, and vibration of a bridge. There are three approaches to the analysis: the energy methods ; the flexibility method or the direct stiffness method , the latter of which later developed into the finite element method ; and the plastic analysis approach.
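A minimal sketch of the direct stiffness method mentioned above, assuming a one-dimensional model of two axially loaded bar elements: the element stiffnesses EA/L are assembled into a global stiffness matrix, the support condition is imposed, and the free displacements and the support reaction are recovered. The function and variable names are illustrative.

```python
import numpy as np

def assemble_bar_stiffness(nodes, elements):
    """Assemble the global stiffness matrix K for 1-D axial bar elements.
    elements: list of (node_i, node_j, EA, length)."""
    K = np.zeros((len(nodes), len(nodes)))
    for i, j, ea, length in elements:
        k = ea / length
        K[i, i] += k; K[j, j] += k
        K[i, j] -= k; K[j, i] -= k
    return K

# Two steel bars in series, fixed at node 0, axial force applied at node 2.
nodes = [0, 1, 2]
elements = [(0, 1, 210e9 * 1e-3, 2.0),   # E*A = 210 GPa * 1000 mm^2, L = 2 m
            (1, 2, 210e9 * 1e-3, 2.0)]
K = assemble_bar_stiffness(nodes, elements)

loads = np.array([0.0, 0.0, 50e3])        # 50 kN pulling on the free end
fixed = [0]                               # support condition: node 0 fixed
free = [n for n in nodes if n not in fixed]

# Solve K_ff u_f = f_f for the free displacements, then recover reactions.
u = np.zeros(len(nodes))
u[free] = np.linalg.solve(K[np.ix_(free, free)], loads[free])
reactions = K @ u - loads

print(u)          # displacements in metres at nodes 0, 1, 2
print(reactions)  # support reaction at node 0 = -50 kN (equilibrium)
```

The same assembly-and-solve pattern, extended to more element types and degrees of freedom, is essentially what general finite element programs do.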
https://en.wikipedia.org/wiki/Structural_mechanics
A structural pipe fitting , also known as a slip on pipe fitting , clamp or pipe clamp is used to build structures such as handrails, guardrails, and other types of pipe or tubular structure. They can also be used to build furniture and theatrical riggings. The fittings slip on the pipe and are usually locked down with a set screw . The set screw can then be tightened with a simple hex wrench . Because of the modular design of standard fittings, assembly is easy, only simple hand tools are required, and risks from welding a structure are eliminated. Other advantages of using structural pipe fittings are easy installation and reconfigurable design. [ 1 ] Since there are no permanent welds in the structure, the set screws of the fittings can simply be loosened, [ 2 ] allowing them to be repositioned. The project can be disassembled and stored if needed, or even taken apart with fittings and pipe recycled into a new project. Fittings used for strong structures are galvanised malleable iron castings, and come in many styles such as elbows, tees, crosses, reducers and flanges. The fittings are not threaded; they simply lock onto the pipe with the supplied hex set screws.
https://en.wikipedia.org/wiki/Structural_pipe_fitting
In mathematical logic , structural proof theory is the subdiscipline of proof theory that studies proof calculi that support a notion of analytic proof , a kind of proof whose semantic properties are exposed. When all the theorems of a logic formalised in a structural proof theory have analytic proofs, then the proof theory can be used to demonstrate such things as consistency , provide decision procedures , and allow mathematical or computational witnesses to be extracted as counterparts to theorems, the kind of task that is more often given to model theory . [ 1 ] The notion of analytic proof was introduced into proof theory by Gerhard Gentzen for the sequent calculus ; the analytic proofs are those that are cut-free . His natural deduction calculus also supports a notion of analytic proof, as was shown by Dag Prawitz ; the definition is slightly more complex—the analytic proofs are the normal forms , which are related to the notion of normal form in term rewriting . The term structure in structural proof theory comes from a technical notion introduced in the sequent calculus: the sequent calculus represents the judgement made at any stage of an inference using special, extra-logical operators called structural operators: in A 1 , … , A m ⊢ B 1 , … , B n {\displaystyle A_{1},\dots ,A_{m}\vdash B_{1},\dots ,B_{n}} , the commas to the left of the turnstile are operators normally interpreted as conjunctions, those to the right as disjunctions, whilst the turnstile symbol itself is interpreted as an implication. However, it is important to note that there is a fundamental difference in behaviour between these operators and the logical connectives they are interpreted by in the sequent calculus: the structural operators are used in every rule of the calculus, and are not considered when asking whether the subformula property applies. Furthermore, the logical rules go one way only: logical structure is introduced by logical rules, and cannot be eliminated once created, while structural operators can be introduced and eliminated in the course of a derivation. The idea of looking at the syntactic features of sequents as special, non-logical operators is not old, and was forced by innovations in proof theory: when the structural operators are as simple as in Getzen's original sequent calculus there is little need to analyse them, but proof calculi of deep inference such as display logic (introduced by Nuel Belnap in 1982) [ 2 ] support structural operators as complex as the logical connectives, and demand sophisticated treatment. The hypersequent framework extends the ordinary sequent structure to a multiset of sequents, using an additional structural connective | (called the hypersequent bar ) to separate different sequents. It has been used to provide analytic calculi for, e.g., modal , intermediate and substructural logics [ 3 ] [ 4 ] [ 5 ] A hypersequent is a structure Γ 1 ⊢ Δ 1 ∣ ⋯ ∣ Γ n ⊢ Δ n {\displaystyle \Gamma _{1}\vdash \Delta _{1}\mid \dots \mid \Gamma _{n}\vdash \Delta _{n}} where each Γ i ⊢ Δ i {\displaystyle \Gamma _{i}\vdash \Delta _{i}} is an ordinary sequent, called a component of the hypersequent. As for sequents, hypersequents can be based on sets, multisets, or sequences, and the components can be single-conclusion or multi-conclusion sequents . The formula interpretation of the hypersequents depends on the logic under consideration, but is nearly always some form of disjunction. 
The most common interpretations are as a simple disjunction ( ⋀ Γ 1 → ⋁ Δ 1 ) ∨ ⋯ ∨ ( ⋀ Γ n → ⋁ Δ n ) {\displaystyle (\bigwedge \Gamma _{1}\rightarrow \bigvee \Delta _{1})\lor \dots \lor (\bigwedge \Gamma _{n}\rightarrow \bigvee \Delta _{n})} for intermediate logics, or as a disjunction of boxes ◻ ( ⋀ Γ 1 → ⋁ Δ 1 ) ∨ ⋯ ∨ ◻ ( ⋀ Γ n → ⋁ Δ n ) {\displaystyle \Box (\bigwedge \Gamma _{1}\rightarrow \bigvee \Delta _{1})\lor \dots \lor \Box (\bigwedge \Gamma _{n}\rightarrow \bigvee \Delta _{n})} for modal logics. In line with the disjunctive interpretation of the hypersequent bar, essentially all hypersequent calculi include the external structural rules , in particular the external weakening rule Γ 1 ⊢ Δ 1 ∣ ⋯ ∣ Γ n ⊢ Δ n Γ 1 ⊢ Δ 1 ∣ ⋯ ∣ Γ n ⊢ Δ n ∣ Σ ⊢ Π {\displaystyle {\frac {\Gamma _{1}\vdash \Delta _{1}\mid \dots \mid \Gamma _{n}\vdash \Delta _{n}}{\Gamma _{1}\vdash \Delta _{1}\mid \dots \mid \Gamma _{n}\vdash \Delta _{n}\mid \Sigma \vdash \Pi }}} and the external contraction rule Γ 1 ⊢ Δ 1 ∣ ⋯ ∣ Γ n ⊢ Δ n ∣ Γ n ⊢ Δ n Γ 1 ⊢ Δ 1 ∣ ⋯ ∣ Γ n ⊢ Δ n {\displaystyle {\frac {\Gamma _{1}\vdash \Delta _{1}\mid \dots \mid \Gamma _{n}\vdash \Delta _{n}\mid \Gamma _{n}\vdash \Delta _{n}}{\Gamma _{1}\vdash \Delta _{1}\mid \dots \mid \Gamma _{n}\vdash \Delta _{n}}}} The additional expressivity of the hypersequent framework is provided by rules manipulating the hypersequent structure. An important example is provided by the modalised splitting rule [ 4 ] Γ 1 ⊢ Δ 1 ∣ ⋯ ∣ Γ n ⊢ Δ n ∣ ◻ Σ , Ω ⊢ ◻ Π , Θ Γ 1 ⊢ Δ 1 ∣ ⋯ ∣ Γ n ⊢ Δ n ∣ ◻ Σ ⊢ ◻ Π ∣ Ω ⊢ Θ {\displaystyle {\frac {\Gamma _{1}\vdash \Delta _{1}\mid \dots \mid \Gamma _{n}\vdash \Delta _{n}\mid \Box \Sigma ,\Omega \vdash \Box \Pi ,\Theta }{\Gamma _{1}\vdash \Delta _{1}\mid \dots \mid \Gamma _{n}\vdash \Delta _{n}\mid \Box \Sigma \vdash \Box \Pi \mid \Omega \vdash \Theta }}} for modal logic S5 , where ◻ Σ {\displaystyle \Box \Sigma } means that every formula in ◻ Σ {\displaystyle \Box \Sigma } is of the form ◻ A {\displaystyle \Box A} . Another example is given by the communication rule for the intermediate logic LC [ 4 ] Γ 1 ⊢ Δ 1 ∣ ⋯ ∣ Γ n ⊢ Δ n ∣ Ω ⊢ A Σ 1 ⊢ Π 1 ∣ ⋯ ∣ Σ m ⊢ Π m ∣ Θ ⊢ B Γ 1 ⊢ Δ 1 ∣ ⋯ ∣ Γ n ⊢ Δ n ∣ Σ 1 ⊢ Π 1 ∣ ⋯ ∣ Σ m ⊢ Π m ∣ Ω ⊢ B ∣ Θ ⊢ A {\displaystyle {\frac {\Gamma _{1}\vdash \Delta _{1}\mid \dots \mid \Gamma _{n}\vdash \Delta _{n}\mid \Omega \vdash A\qquad \Sigma _{1}\vdash \Pi _{1}\mid \dots \mid \Sigma _{m}\vdash \Pi _{m}\mid \Theta \vdash B}{\Gamma _{1}\vdash \Delta _{1}\mid \dots \mid \Gamma _{n}\vdash \Delta _{n}\mid \Sigma _{1}\vdash \Pi _{1}\mid \dots \mid \Sigma _{m}\vdash \Pi _{m}\mid \Omega \vdash B\mid \Theta \vdash A}}} Note that in the communication rule the components are single-conclusion sequents. The nested sequent calculus is a formalisation that resembles a 2-sided calculus of structures.
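As a small illustration of the formula interpretation described above, the following sketch (with an ad hoc representation of sequents as pairs of string lists, not part of any standard library) translates a hypersequent into its disjunctive reading, optionally prefixing each disjunct with a box as in the modal interpretation.

```python
# A sequent is a pair (Gamma, Delta) of lists of formulas (given as strings);
# a hypersequent is a list of such components.
def sequent_formula(gamma, delta):
    ante = " & ".join(gamma) if gamma else "T"   # empty antecedent read as truth
    succ = " | ".join(delta) if delta else "F"   # empty succedent read as falsity
    return f"(({ante}) -> ({succ}))"

def hypersequent_formula(hypersequent, modal=False):
    """Disjunctive reading of a hypersequent; with modal=True each disjunct
    is prefixed with a box, as in the interpretation used for modal logics."""
    parts = [sequent_formula(g, d) for g, d in hypersequent]
    if modal:
        parts = [f"[]{p}" for p in parts]
    return " v ".join(parts)

h = [(["A", "B"], ["C"]), (["D"], ["E", "F"])]
print(hypersequent_formula(h))
# ((A & B) -> (C)) v ((D) -> (E | F))
print(hypersequent_formula(h, modal=True))
# []((A & B) -> (C)) v []((D) -> (E | F))
```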
https://en.wikipedia.org/wiki/Structural_proof_theory
Structural reliability is about applying reliability engineering theories to buildings and, more generally, to structural analysis. [ 1 ] [ 2 ] Reliability is also used as a probabilistic measure of structural safety. The reliability of a structure is defined as the complement of the probability of failure ( Reliability = 1 − Probability of failure ). Failure occurs when the total applied load is larger than the total resistance of the structure. Structural reliability has become known as a design philosophy in the twenty-first century, and it might replace traditional deterministic ways of design [ 3 ] and maintenance. [ 2 ] In structural reliability studies, both loads and resistances are modeled as probabilistic variables. Using this approach the probability of failure of a structure is calculated. When load and resistance are explicit and have their own independent distributions, the probability of failure can be formulated as follows: [ 1 ] [ 2 ]
{\displaystyle P_{f}=\int _{-\infty }^{\infty }F_{R}(s)\,f_{S}(s)\,ds\qquad (1)}
where P f {\displaystyle P_{f}} is the probability of failure, F R ( s ) {\displaystyle F_{R}(s)} is the cumulative distribution function of the resistance (R), and f S ( s ) {\displaystyle f_{S}(s)} is the probability density of the load (S). However, in most cases the distributions of loads and resistances are not independent, and the probability of failure is then defined via the following more general formula:
{\displaystyle P_{f}=\int _{G(x)\leq 0}f_{X}(x)\,dx\qquad (2)}
where X is the vector of the basic variables, f X {\displaystyle f_{X}} is their joint probability density, and G(X), called the limit state function, defines the failure domain; its zero level set may be a line, a surface, or a higher-dimensional boundary over which the integration limit is taken. In some cases, when load and resistance are explicitly expressed (as in equation ( 1 ) above) and their distributions are normal, the integral of equation ( 1 ) has a closed-form solution:
{\displaystyle P_{f}=\Phi \left(-{\frac {\mu _{R}-\mu _{S}}{\sqrt {\sigma _{R}^{2}+\sigma _{S}^{2}}}}\right)}
where Φ is the standard normal cumulative distribution function and μ and σ denote the means and standard deviations of the resistance and the load. In most cases load and resistance are not normally distributed; therefore, solving the integrals of equations ( 1 ) and ( 2 ) analytically is impossible. Using Monte Carlo simulation is an approach that can be used in such cases. [ 1 ] [ 4 ]
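A minimal sketch of the Monte Carlo approach mentioned above, assuming independent, normally distributed load and resistance with illustrative parameters, and comparing the estimate with the closed-form solution of equation (1):

```python
import math
import numpy as np

def std_normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

rng = np.random.default_rng(0)

# Illustrative normal resistance R and load S (units arbitrary).
mu_R, sigma_R = 50.0, 5.0
mu_S, sigma_S = 35.0, 7.0

n = 1_000_000
R = rng.normal(mu_R, sigma_R, n)
S = rng.normal(mu_S, sigma_S, n)

pf_mc = np.mean(R <= S)                          # failure: resistance not larger than load
beta = (mu_R - mu_S) / math.hypot(sigma_R, sigma_S)
pf_exact = std_normal_cdf(-beta)                 # closed form for independent normals

print(f"Monte Carlo estimate: {pf_mc:.4f}")
print(f"Closed-form solution: {pf_exact:.4f}  (reliability index beta = {beta:.2f})")
```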
https://en.wikipedia.org/wiki/Structural_reliability
In construction , structural repairs is a technical term describing maintenance of a property structure in order to bring it up to local health and safety standards. It is contrasted to renovations or non-structural repairs. Unlike renovations, structural repairs add relatively little value to a property. [ 1 ] [ 2 ] [ 3 ] Leases often include provisions that define what types of changes amount to structural repairs and assign responsibility to either the tenant or the landlord . [ 4 ]
https://en.wikipedia.org/wiki/Structural_repairs
In discrete geometry and mechanics , structural rigidity is a combinatorial theory for predicting the flexibility of ensembles formed by rigid bodies connected by flexible linkages or hinges . Rigidity is the property of a structure that it does not bend or flex under an applied force. The opposite of rigidity is flexibility . In structural rigidity theory, structures are formed by collections of objects that are themselves rigid bodies, often assumed to take simple geometric forms such as straight rods (line segments), with pairs of objects connected by flexible hinges. A structure is rigid if it cannot flex; that is, if there is no continuous motion of the structure that preserves the shape of its rigid components and the pattern of their connections at the hinges. There are two essentially different kinds of rigidity. Finite or macroscopic rigidity means that the structure will not flex, fold, or bend by a positive amount. Infinitesimal rigidity means that the structure will not flex by even an amount that is too small to be detected even in theory. (Technically, that means certain differential equations have no nonzero solutions.) The importance of finite rigidity is obvious, but infinitesimal rigidity is also crucial because infinitesimal flexibility in theory corresponds to real-world minuscule flexing, and consequent deterioration of the structure. A rigid graph is an embedding of a graph in a Euclidean space which is structurally rigid. [ 1 ] That is, a graph is rigid if the structure formed by replacing the edges by rigid rods and the vertices by flexible hinges is rigid. A graph that is not rigid is called flexible . More formally, a graph embedding is flexible if the vertices can be moved continuously, preserving the distances between adjacent vertices, with the result that the distances between some nonadjacent vertices are altered. [ 2 ] The latter condition rules out Euclidean congruences such as simple translation and rotation. It is also possible to consider rigidity problems for graphs in which some edges represent compression elements (able to stretch to a longer length, but not to shrink to a shorter length) while other edges represent tension elements (able to shrink but not stretch). A rigid graph with edges of these types forms a mathematical model of a tensegrity structure. The fundamental problem is how to predict the rigidity of a structure by theoretical analysis, without having to build it. Key results in this area include the following: However, in many other simple situations it is not yet always known how to analyze the rigidity of a structure mathematically despite the existence of considerable mathematical theory. One of the founders of the mathematical theory of structural rigidity was the physicist James Clerk Maxwell . The late twentieth century saw an efflorescence of the mathematical theory of rigidity, which continues in the twenty-first century. "[A] theory of the equilibrium and deflections of frameworks subjected to the action of forces is acting on the hardnes of quality... in cases in which the framework ... is strengthened by additional connecting pieces ... in cases of three dimensions, by the regular method of equations of forces, every point would have three equations to determine its equilibrium, so as to give 3 s equations between e unknown quantities, if s be the number of points and e the number of connexions[sic]. 
There are, however, six equations of equilibrium of the system which must be fulfilled necessarily by the forces, on account of the equality of action and reaction in each piece. Hence if e = 3 s − 6, the effect of any external force will be definite in producing tensions or pressures in the different pieces; but if e > 3 s − 6, these forces will be indeterminate...." [ 5 ]
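Maxwell's count quoted above can be turned into a quick necessary (but not sufficient) check. The following sketch simply compares the number of bars with 3s − 6 in three dimensions (and with the analogous planar count 2s − 3), using invented function names and a few illustrative frameworks.

```python
def maxwell_count(num_joints, num_bars, dimension=3):
    """Compare the number of bars with Maxwell's count 3s - 6 (or 2s - 3 in 2-D).
    Fewer bars than the count means the framework is certainly flexible;
    meeting the count is necessary, but not sufficient, for rigidity."""
    needed = 3 * num_joints - 6 if dimension == 3 else 2 * num_joints - 3
    if num_bars < needed:
        return f"flexible: {num_bars} bars, at least {needed} needed"
    return f"passes the count ({num_bars} bars, {needed} needed) - may be rigid"

print(maxwell_count(4, 6))               # tetrahedron: 6 = 3*4 - 6
print(maxwell_count(8, 12))              # unbraced cube frame: 12 < 3*8 - 6 = 18
print(maxwell_count(4, 5, dimension=2))  # quadrilateral with one diagonal: 5 = 2*4 - 3
```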
https://en.wikipedia.org/wiki/Structural_rigidity
Robustness is the ability of a structure to withstand events like fire, explosions, impact or the consequences of human error , without being damaged to an extent disproportionate to the original cause – as defined in EN 1991-1-7 of the Accidental Actions Eurocode . [ 1 ] A structure designed and constructed to be robust should not suffer from disproportionate collapse (progressive collapse) under accidental loading. [ 2 ] Buildings of some kinds, especially large-panel systems and precast concrete buildings, are disproportionately more susceptible to collapse; others, such as in situ cast concrete structures , are disproportionately less susceptible. The method employed in making a structure robust will typically depend on and be tailored to the kind of structure it is, as in steel framed building structural robustness is typically achieved through appropriately designing the system of connections between the frame's constituents. [ 2 ] Three alternative measures are used, sometimes jointly, to achieve structural robustness and reduce the risk of disproportionate collapse. [ 3 ] [ 4 ] These are: The requirements for structures in consequence classes 2 and 3 can be found in EN 1991-1-7 Eurocode 1 - Actions on structures - Part 1-7: General actions - Accidental actions. Additional requirements and requirements for structures in consequence class 1 can be found in the material specific Eurocode parts, EN 1992 for concrete structures, EN 1993 for steel structures and so on. In EN 1991-1-7 buildings are categorised in consequences classes, considering the building type, occupancy and size. [ 1 ] Consequence class 1 , low consequences of failure: Consequence class 2a , lower risk group - medium consequences of failure: Consequence class 2b , upper risk group - medium consequences of failure: Consequence class 3 , high consequences of failure: For buildings intended for more than one type of use, the consequences class should be that of the most onerous type. [ 2 ] Buildings in consequence class 1 should be designed and constructed in accordance with EN 1990 - 1999 for satisfying stability in normal use, no specific consideration (required by EN 1991-1-7) is necessary with regard to accidental actions from unidentified causes. Buildings in consequence class 2a (in addition to what is recommended for consequence class 1) should be provided with effective horizontal ties, or effective anchorage of suspended floors to walls. Buildings in consequence class 2b (in addition to what is recommended for consequence class 1 and 2a) should be provided with effective vertical ties in all supporting columns and walls, or alternatively the building should be checked to ensure that upon the notional removal of each supporting column and each beam supporting a column, or any nominal section of load-bearing wall (one at a time in each storey of the building) the building remains stable and the local damage does not exceed a certain limit. Where the removal of such columns and sections of walls would result in an extent of damage in excess of the agreed limit, such elements should be designed as a key element. For buildings in consequence class 3 a systematic risk assessment of the building taking into account both foreseeable and unforeseeable hazards, is required.
https://en.wikipedia.org/wiki/Structural_robustness
In the logical discipline of proof theory , a structural rule is an inference rule of a sequent calculus that does not refer to any logical connective but instead operates on the sequents directly. [ 1 ] [ 2 ] Structural rules often mimic the intended meta-theoretic properties of the logic. Logics that deny one or more of the structural rules are classified as substructural logics . Three common structural rules are: [ 3 ] weakening, which allows an extra hypothesis or conclusion to be added to a sequent; contraction, which allows two equal formulas on the same side of a sequent to be merged into one; and exchange, which allows the formulas on either side of a sequent to be permuted. A logic without any of the above structural rules would interpret the sides of a sequent as pure sequences ; with exchange, they can be considered to be multisets ; and with both contraction and exchange they can be considered to be sets . These are not the only possible structural rules. A famous structural rule is known as cut . [ 1 ] Considerable effort is spent by proof theorists in showing that cut rules are superfluous in various logics. More precisely, what is shown is that cut is only (in a sense) a tool for abbreviating proofs, and does not add to the theorems that can be proved. The successful 'removal' of cut rules, known as cut elimination , is directly related to the philosophy of computation as normalization (see Curry–Howard correspondence ); it often gives a good indication of the complexity of deciding a given logic.
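A minimal sketch of the three structural rules listed above, acting on sequents represented as pairs of formula lists (the representation and function names are ours, not part of any standard library):

```python
# A sequent is a pair (antecedent, succedent), each a list of formulas (strings).
def weakening(sequent, formula, side="left"):
    """Add an extra formula to one side of the sequent."""
    ante, succ = sequent
    return (ante + [formula], succ) if side == "left" else (ante, succ + [formula])

def contraction(sequent, formula, side="left"):
    """Merge two occurrences of a formula on one side into one."""
    ante, succ = sequent
    target = list(ante if side == "left" else succ)
    assert target.count(formula) >= 2, "contraction needs two occurrences"
    target.remove(formula)
    return (target, succ) if side == "left" else (ante, target)

def exchange(sequent, i, j, side="left"):
    """Swap the formulas at positions i and j on one side."""
    ante, succ = sequent
    target = list(ante if side == "left" else succ)
    target[i], target[j] = target[j], target[i]
    return (target, succ) if side == "left" else (ante, target)

s = (["A", "A", "B"], ["C"])
print(contraction(s, "A"))              # (['A', 'B'], ['C'])
print(weakening(s, "D", side="right"))  # (['A', 'A', 'B'], ['C', 'D'])
print(exchange(s, 0, 2))                # (['B', 'A', 'A'], ['C'])
```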
https://en.wikipedia.org/wiki/Structural_rule
In mathematics , structural stability is a fundamental property of a dynamical system which means that the qualitative behavior of the trajectories is unaffected by small perturbations (to be exact C 1 -small perturbations). Examples of such qualitative properties are numbers of fixed points and periodic orbits (but not their periods). Unlike Lyapunov stability , which considers perturbations of initial conditions for a fixed system, structural stability deals with perturbations of the system itself. Variants of this notion apply to systems of ordinary differential equations , vector fields on smooth manifolds and flows generated by them, and diffeomorphisms . Structurally stable systems were introduced by Aleksandr Andronov and Lev Pontryagin in 1937 under the name "systèmes grossiers", or rough systems . They announced a characterization of rough systems in the plane, the Andronov–Pontryagin criterion . In this case, structurally stable systems are typical , they form an open dense set in the space of all systems endowed with appropriate topology. In higher dimensions, this is no longer true, indicating that typical dynamics can be very complex (cf. strange attractor ). An important class of structurally stable systems in arbitrary dimensions is given by Anosov diffeomorphisms and flows. During the late 1950s and the early 1960s, Maurício Peixoto and Marília Chaves Peixoto , motivated by the work of Andronov and Pontryagin, developed and proved Peixoto's theorem , the first global characterization of structural stability. [ 1 ] Let G be an open domain in R n with compact closure and smooth ( n −1)-dimensional boundary . Consider the space X 1 ( G ) consisting of restrictions to G of C 1 vector fields on R n that are transversal to the boundary of G and are inward oriented. This space is endowed with the C 1 metric in the usual fashion. A vector field F ∈ X 1 ( G ) is weakly structurally stable if for any sufficiently small perturbation F 1 , the corresponding flows are topologically equivalent on G : there exists a homeomorphism h : G → G which transforms the oriented trajectories of F into the oriented trajectories of F 1 . If, moreover, for any ε > 0 the homeomorphism h may be chosen to be C 0 ε -close to the identity map when F 1 belongs to a suitable neighborhood of F depending on ε , then F is called (strongly) structurally stable . These definitions extend in a straightforward way to the case of n -dimensional compact smooth manifolds with boundary. Andronov and Pontryagin originally considered the strong property. Analogous definitions can be given for diffeomorphisms in place of vector fields and flows: in this setting, the homeomorphism h must be a topological conjugacy . It is important to note that topological equivalence is realized with a loss of smoothness: the map h cannot, in general, be a diffeomorphism. Moreover, although topological equivalence respects the oriented trajectories, unlike topological conjugacy, it is not time-compatible. Thus, the relevant notion of topological equivalence is a considerable weakening of the naïve C 1 conjugacy of vector fields. Without these restrictions, no continuous time system with fixed points or periodic orbits could have been structurally stable. Weakly structurally stable systems form an open set in X 1 ( G ), but it is unknown whether the same property holds in the strong case. 
Necessary and sufficient conditions for the structural stability of C 1 vector fields on the unit disk D that are transversal to the boundary and on the two-sphere S 2 have been determined in the foundational paper of Andronov and Pontryagin. According to the Andronov–Pontryagin criterion , such fields are structurally stable if and only if they have only finitely many singular points ( equilibrium states ) and periodic trajectories ( limit cycles ), which are all non-degenerate (hyperbolic), and do not have saddle-to-saddle connections. Furthermore, the non-wandering set of the system is precisely the union of singular points and periodic orbits. In particular, structurally stable vector fields in two dimensions cannot have homoclinic trajectories, which enormously complicate the dynamics, as discovered by Henri Poincaré . Structural stability of non-singular smooth vector fields on the torus can be investigated using the theory developed by Poincaré and Arnaud Denjoy . Using the Poincaré recurrence map , the question is reduced to determining structural stability of diffeomorphisms of the circle . As a consequence of the Denjoy theorem , an orientation preserving C 2 diffeomorphism ƒ of the circle is structurally stable if and only if its rotation number is rational, ρ ( ƒ ) = p / q , and the periodic trajectories, which all have period q , are non-degenerate: the Jacobian of ƒ q at the periodic points is different from 1, see circle map . Dmitri Anosov discovered that hyperbolic automorphisms of the torus, such as the Arnold's cat map , are structurally stable. He then generalized this statement to a wider class of systems, which have since been called Anosov diffeomorphisms and Anosov flows. One celebrated example of Anosov flow is given by the geodesic flow on a surface of constant negative curvature, cf Hadamard billiards . Structural stability of the system provides a justification for applying the qualitative theory of dynamical systems to analysis of concrete physical systems. The idea of such qualitative analysis goes back to the work of Henri Poincaré on the three-body problem in celestial mechanics . Around the same time, Aleksandr Lyapunov rigorously investigated stability of small perturbations of an individual system. In practice, the evolution law of the system (i.e. the differential equations) is never known exactly, due to the presence of various small interactions. It is, therefore, crucial to know that basic features of the dynamics are the same for any small perturbation of the "model" system, whose evolution is governed by a certain known physical law. Qualitative analysis was further developed by George Birkhoff in the 1920s, but was first formalized with introduction of the concept of rough system by Andronov and Pontryagin in 1937. This was immediately applied to analysis of physical systems with oscillations by Andronov, Witt, and Khaikin. The term "structural stability" is due to Solomon Lefschetz , who oversaw translation of their monograph into English. Ideas of structural stability were taken up by Stephen Smale and his school in the 1960s in the context of hyperbolic dynamics. Earlier, Marston Morse and Hassler Whitney initiated and René Thom developed a parallel theory of stability for differentiable maps, which forms a key part of singularity theory . Thom envisaged applications of this theory to biological systems. Both Smale and Thom worked in direct contact with Maurício Peixoto, who developed Peixoto's theorem in the late 1950s. 
When Smale started to develop the theory of hyperbolic dynamical systems, he hoped that structurally stable systems would be "typical". This would have been consistent with the situation in low dimensions: dimension two for flows and dimension one for diffeomorphisms. However, he soon found examples of vector fields on higher-dimensional manifolds that cannot be made structurally stable by an arbitrarily small perturbation (such examples have been later constructed on manifolds of dimension three). This means that in higher dimensions, structurally stable systems are not dense . In addition, a structurally stable system may have transversal homoclinic trajectories of hyperbolic saddle closed orbits and infinitely many periodic orbits, even though the phase space is compact. The closest higher-dimensional analogue of structurally stable systems considered by Andronov and Pontryagin is given by the Morse–Smale systems .
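The rotation-number criterion mentioned in the discussion of circle diffeomorphisms above can be illustrated numerically. The sketch below estimates ρ(f) for a lift of the standard (Arnold) circle map; the parameter values are chosen only for illustration (the first choice has rational rotation number 1/2, the second is a pure rotation).

```python
import math

def rotation_number(lift, x0=0.0, iterations=100_000):
    """Estimate rho(f) = lim (F^n(x) - x) / n for a lift F of a circle map."""
    x = x0
    for _ in range(iterations):
        x = lift(x)
    return (x - x0) / iterations

# Lift of the standard (Arnold) circle map: x -> x + omega - (K / 2*pi) * sin(2*pi*x).
def standard_map_lift(omega, K):
    return lambda x: x + omega - (K / (2 * math.pi)) * math.sin(2 * math.pi * x)

print(rotation_number(standard_map_lift(0.5, 1.0)))    # 0.5: x = 0 lies on a period-2 orbit
print(rotation_number(standard_map_lift(0.618, 0.0)))  # 0.618: pure rotation, rho = omega
```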
https://en.wikipedia.org/wiki/Structural_stability
Structural steel is steel used for making construction materials in a variety of shapes. Many structural steel shapes take the form of an elongated beam having a profile of a specific cross section . Structural steel shapes, sizes, chemical composition, mechanical properties such as strengths, storage practices, etc., are regulated by standards in most industrialized countries. Structural steel shapes, such as I-beams , have high second moments of area , so can support a high load without excessive sagging . [ 1 ] The shapes available are described in published standards worldwide, and specialist, proprietary cross sections are also available. [ citation needed ] Sections can be hot or cold rolled , or fabricated by welding together flat or bent plates. [ 3 ] The terms angle iron , channel iron , and sheet iron have been in common use since before wrought iron was replaced by steel for commercial purposes and are still sometimes used informally. In technical writing angle stock , channel stock , and sheet are used instead of those misnomers . [ citation needed ] Most steels used in Europe are specified to comply with EN 10025 . However, some national standards remain in force. [ 4 ] Example grades are S275J2 or S355K2W where S denotes structural steel; 275 or 355 denotes the yield strength in newtons per square millimetre or the equivalent megapascals ; J2 or K2 denotes the material's toughness by Charpy impact test values, and the W denotes weathering steel . Further letters can be used to designate fine grain steel (N or NL); quenched and tempered steel (Q or QL); and thermomechanically rolled steel (M or ML). [ citation needed ] Common yield strengths available are 195, 235, 275, 355, 420, and 460, although some grades are more commonly used than others. In the UK, almost all structural steel is S275 and S355. Higher grades such as 500, 550, 620, 690, 890 and 960 available in quenched and tempered material although grades above 690 receive little if any use in construction at present. [ citation needed ] Euronorms define the shape of standard structural profiles: Steels used for building construction in the US use standard alloys identified and specified by ASTM International . These steels have an alloy identification beginning with A and then two, three, or four numbers. The four-number AISI steel grades commonly used for mechanical engineering, machines, and vehicles are a completely different specification series. The standard commonly used structural steels are: [ 5 ] The concept of CE marking for all construction products and steel products is introduced by the Construction Products Directive (CPD) . The CPD is a European Directive that ensures the free movement of all construction products within the European Union. Because steel components are "safety critical", CE Marking is not allowed unless the Factory Production Control (FPC) system under which they are produced has been assessed by a suitable certification body that has been approved to the European Commission. [ 6 ] In the case of steel products such as sections, bolts and fabricated steelwork the CE Marking demonstrates that the product complies with the relevant harmonized standard. [ 7 ] For steel structures the main harmonized standards are: The standard that covers CE Marking of structural steelwork is EN 1090 -1. The standard has come into force in late 2010. After a transition period of two years, CE Marking will become mandatory in most European Countries sometime early in 2012. 
[ 8 ] The official end date of the transition period is July 1, 2014 Steel is sold by weight so the design must be as light as possible whilst being structurally safe. Utilizing multiple, identical steel members can be cheaper than unique components. [ 9 ] Reinforced concrete and structural steel can be sustainable [ 10 ] if used properly. Over 80% of structural steel members are fabricated from recycled metals, called A992 steel. This member material is cheaper and has a higher strength to weight ratio than previously used steel members (A36 grade). [ 11 ] Special considerations must be taken into account with structural steel to ensure it is not under a dangerous fire hazard condition. [ 12 ] Structural steel cannot be exposed to the environment without suitable protection, because any moisture, or contact with water, will cause oxidisation to occur, compromising the structural integrity of the building and endangering occupants and neighbors. [ 12 ] Having high strength, stiffness, toughness, and ductile properties, structural steel is one of the most commonly used materials in commercial and industrial building construction. [ 13 ] Structural steel can be developed into nearly any shape, which are either bolted or welded together in construction. Structural steel can be erected as soon as the materials are delivered on site, whereas concrete must be cured at least 1–2 weeks after pouring before construction can continue, making steel a schedule-friendly construction material. [ 12 ] Steel is inherently a noncombustible material. However, when heated to temperatures seen in a fire, the strength and stiffness of the material is significantly reduced. The International Building Code requires steel be enveloped in sufficient fire-resistant materials, increasing overall cost of steel structure buildings. [ 13 ] Steel, when in contact with water, can corrode, creating a potentially dangerous structure. Measures must be taken in structural steel construction to prevent any lifetime corrosion. The steel can be painted, providing water resistance. Also, the fire resistance material used to envelope steel is commonly water resistant. [ 12 ] Steel provides a less suitable surface environment for mold to grow than wood. [ 14 ] Tall structures are constructed using structural steel due to its constructability, as well as its high strength-to-weight ratio. [ 15 ] Steel loses strength when heated sufficiently. The critical temperature of a steel member is the temperature at which it cannot safely support its load [ citation needed ] . Building codes and structural engineering standard practice defines different critical temperatures depending on the structural element type, configuration, orientation, and loading characteristics. The critical temperature is often considered the temperature at which its yield stress has been reduced to 60% of the room temperature yield stress. [ 16 ] In order to determine the fire resistance rating of a steel member, accepted calculations practice can be used, [ 17 ] or a fire test can be performed, the critical temperature of which is set by the standard accepted to the Authority Having Jurisdiction, such as a building code. In Japan, this is below 400 °C. [ 18 ] In China, Europe and North America (e.g., ASTM E-119), this is approximately 1000–1300 °F [ 19 ] (530–810 °C). The time it takes for the steel element that is being tested to reach the temperature set by the test standard determines the duration of the fire-resistance rating . 
Heat transfer to the steel can be slowed by the use of fireproofing materials , thus limiting steel temperature. Common fireproofing methods for structural steel include intumescent , endothermic, and plaster coatings as well as drywall, calcium silicate cladding, and mineral wool insulating blankets. [ 20 ] Structural steel fireproofing materials include intumescent, endothermic and plaster coatings as well as drywall , calcium silicate cladding, and mineral or high temperature insulation wool blankets. Attention is given to connections, as the thermal expansion of structural elements can compromise fire-resistance rated assemblies. Cutting workpieces to length is usually done with a bandsaw . [ citation needed ] A beam drill line drills holes and mills slots into beams, channels and HSS elements. CNC beam drill lines are typically equipped with feed conveyors and position sensors to move the element into position for drilling, plus probing capability to determine the precise location where the hole or slot is to be cut. [ citation needed ] For cutting irregular openings or non-uniform ends on dimensional (non-plate) elements, a cutting torch is typically used. Oxy-fuel torches are the most common technology and range from simple hand-held torches to automated CNC coping machines that move the torch head around the structural element in accordance with cutting instructions programmed into the machine. [ citation needed ] Fabricating flat plate is performed on a plate processing center where the plate is laid flat on a stationary 'table' and different cutting heads traverse the plate from a gantry-style arm or "bridge". The cutting heads can include a punch, drill or torch. [ citation needed ]
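As a small illustration of the EN 10025-style designation scheme described earlier (S for structural steel, the minimum yield strength in N/mm², a Charpy toughness designation, and optional suffixes such as W for weathering steel), the following toy parser decodes a grade string; it is an illustrative sketch, not a substitute for the standard itself.

```python
import re

# Toy parser for EN 10025-style designations such as "S355K2W":
# structural steel, 355 MPa minimum yield strength, K2 toughness, weathering.
PATTERN = re.compile(
    r"^S(?P<yield>\d{3})"                 # minimum yield strength in N/mm^2 (MPa)
    r"(?P<toughness>J0|J2|JR|K2)?"        # Charpy impact-test designation
    r"(?P<extras>(N|NL|M|ML|Q|QL|W)*)$"   # fine grain / TM rolled / Q&T / weathering
)

def parse_grade(designation):
    m = PATTERN.match(designation)
    if not m:
        raise ValueError(f"not a recognised designation: {designation}")
    return {
        "yield_strength_MPa": int(m.group("yield")),
        "toughness": m.group("toughness"),
        "weathering": "W" in (m.group("extras") or ""),
        "extras": m.group("extras") or "",
    }

print(parse_grade("S275J2"))
print(parse_grade("S355K2W"))
```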
https://en.wikipedia.org/wiki/Structural_steel
A structural support is a part of a building or structure that provides the necessary stiffness and strength in order to resist the internal forces (vertical forces of gravity and lateral forces due to wind and earthquakes) and guide them safely to the ground. External loads (actions of other bodies) that act on buildings cause internal forces (forces and couples by the rest of the structure) in building support structures. Supports can be either at the end or at any intermediate point along a structural member or a constituent part of a building and they are referred to as connections, joints or restraints. [ 1 ] Building support structures, no matter what materials are used, have to give accurate and safe results. A structure depends less on the weight and stiffness of a material and more on its geometry for stability. [ 2 ] Whatever the condition is, a specific rigidity is necessary for connection designs. The support connection type has effects on the load bearing capacity of each element, which makes up a structural system. Each support condition influences the behaviour of the elements and therefore, the system. Structures can be either Horizontal- span support systems (floor and roof structures) or Vertical building structure systems (walls, frames, cores, etc.) [ 3 ] Structure is necessary for buildings but architecture, as an idea, does not require structure. Every building has both load-bearing structures and non-load bearing portions. Structural members form systems and transfer the loads that are acting upon the structural systems, through a series of elements to the ground. Building Structure Elements include Line ( beams , columns , cables , frames or arches , space frames , surface elements (walls, slab or shells ) and Freeform. [ 3 ] The structure's functional requirements will narrow the possible forms that one can consider. Other factors such as the availability of materials, foundation conditions, the aesthetic requirements and economic limitations also play important roles in establishing the structural form. [ 4 ] Structural systems or all their members and parts are considered to be in equilibrium if the systems are initially at rest and remain at rest when a system of forces and couples acts on them. [ 5 ] They are not aspects of a model that should be guessed. To be able to analyze a structure, it is necessary to be clear about the forces that can be quite complicated. There are two types of forces, External Forces which are the actions of other bodies on the structure under consideration and Internal Forces which the rest of the structure exert on a member or portion of the structure as forces and couples. [ 6 ] A little deflection or play is required for a structure to protect other surrounding materials from those forces. There are five basic idealized support structure types, categorized by the types of deflection they constrain: roller , pinned , fixed , hanger and simple support . [ 1 ] A roller support allows thermal expansion and contraction of the span and prevents damage on other structural members such as a pinned support. The typical application of roller supports is in large bridges. In civil engineering, roller supports can be seen at one end of a bridge. A roller support cannot prevent translational movements in horizontal or lateral directions and any rotational movement but prevents vertical translations. [ 1 ] [ 5 ] Its reaction force is a single linear force perpendicular to, and away from, the surface (upward or downward). 
This support type is assumed to be capable of resisting normal displacement. It can be realized by rubber bearings , a rocker, or a set of gears allowing a limited amount of lateral movement. A structure on roller skates, for example, remains in place as long as it must only support itself; as soon as a lateral load pushes on it, the structure rolls away in response to the force. A pinned support attaches only the web of a beam to a girder; this is called a shear connection. The support can exert a force on a member acting in any direction and prevents translational movements, or relative displacement of the member ends, in all directions, but it cannot prevent any rotational movements . [ 1 ] Its reaction force is a single linear force of unknown direction, or equivalently horizontal and vertical forces which are the components of that single force. [ 5 ] A pinned support is like a human elbow: it can be extended and flexed (rotation), but the forearm cannot be moved left to right (translation). One benefit of pinned supports is that they carry no internal moment, so only the axial force plays a major role in their design. However, a single pinned support cannot completely restrain a structure; at least two supports are needed to resist a moment. [ 7 ] Trusses are one frequent application of this support. Rigid or fixed supports maintain the angular relationship between the joined elements and provide both force and moment resistance. A fixed support exerts forces acting in any direction and prevents all translational movements (horizontal and vertical) as well as all rotational movements of a member. The reactions of these supports are the horizontal and vertical components of a linear resultant , together with a moment. [ 5 ] It is a rigid type of support or connection. A fixed support is beneficial when only a single support can be used, and it is the type most widely used as the sole support for a cantilever . [ 7 ] Fixed supports are common in beam-to-column connections of moment-resisting steel frames and in beam, column and slab connections in concrete frames. A hanger support exerts only a force and prevents a member from translating away in the direction of the hanger. However, this support cannot prevent translational movement in the other directions or any rotational movement. [ 1 ] [ 5 ] It is one of the simplest structural forms, in which the elements are in pure tension. Structures of this type range from simple guyed or stayed structures to large cable-supported bridge and roof systems. [ 4 ] A simple support is basically where the structural member rests on an external structure, as when two concrete blocks hold a resting plank of wood on their tops. This support is similar to a roller support in the sense that it restrains vertical forces but not horizontal forces. Therefore, it is not widely used in real-life structures unless the engineer can be sure that the member will not translate. [ 7 ]
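To make the roles of the different idealized supports concrete, the following minimal Python sketch computes the reactions of a simply supported beam with a pin at one end and a roller at the other. The function name and the example numbers are assumptions made for illustration; they are not taken from the article.

```python
# A minimal statics sketch (not from the article): reactions of a simply
# supported beam with a pinned support at A and a roller support at B.
# The roller resists only vertical load; the pin resists vertical and
# horizontal load.  Names and example numbers are illustrative only.

def simply_supported_reactions(span, load, load_position, horizontal_load=0.0):
    """Return (A_vertical, A_horizontal, B_vertical) for a point load.

    span            -- distance between supports A and B
    load            -- downward point load
    load_position   -- distance of the load from support A
    horizontal_load -- axial load on the beam (taken by the pin only)
    """
    # Sum of moments about A = 0  ->  B_v * span = load * load_position
    b_vertical = load * load_position / span
    # Sum of vertical forces = 0  ->  A_v + B_v = load
    a_vertical = load - b_vertical
    # The roller cannot resist horizontal force, so the pin takes all of it.
    a_horizontal = horizontal_load
    return a_vertical, a_horizontal, b_vertical

if __name__ == "__main__":
    # 10 m span, 50 kN load placed 4 m from the pinned end, 5 kN axial load.
    print(simply_supported_reactions(10.0, 50.0, 4.0, 5.0))
    # -> (30.0, 5.0, 20.0): the pin carries 30 kN vertical and all 5 kN
    #    horizontal; the roller carries only the 20 kN vertical reaction.
```

The split of the reactions mirrors the descriptions above: the roller supplies only a vertical reaction, while the pin also resists the axial load.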
https://en.wikipedia.org/wiki/Structural_support
Structural synthesis of programs (SSP) is a special form of (automatic) program synthesis that is based on propositional calculus . More precisely, it uses intuitionistic logic for describing the structure of a program in such detail that the program can be automatically composed from pieces like subroutines or even computer commands. It is assumed that these pieces have been implemented correctly, hence no correctness verification of these pieces is needed. SSP is well suited for automatic composition of services [ 1 ] for service-oriented architectures and for synthesis of large simulation programs. [ 2 ] [ 3 ] Automatic program synthesis began in the artificial intelligence field, with software intended for automatic problem solving. The first program synthesizer was developed by Cordell Green in 1969. [ 4 ] At about the same time, mathematicians including R. Constable , Z. Manna , and R. Waldinger explained the possible use of formal logic for automatic program synthesis. Practically applicable program synthesizers appeared considerably later. The idea of structural synthesis of programs was introduced at a conference on algorithms in modern mathematics and computer science [ 5 ] organized by Andrey Ershov and Donald Knuth in 1979. The idea originated from G. Pólya 's well-known book on problem solving. [ 6 ] The method for devising a plan for solving a problem in SSP was presented as a formal system . The inference rules of the system were restructured and justified in logic by G. Mints and E. Tyugu [ 7 ] in 1982. A programming tool PRIZ [ 8 ] that uses SSP was developed in the 1980s. A more recent integrated development environment that supports SSP is CoCoViLa , [ 9 ] a model-based software development platform for implementing domain specific languages and developing large Java programs. Structural synthesis of programs is a method for composing programs from already implemented components (e.g. from computer commands or software object methods) that can be considered as functions. A specification for synthesis is given in intuitionistic propositional logic by writing axioms about the applicability of functions. An axiom about the applicability of a function f is a logical implication of the form X 1 ∧ X 2 ∧ … ∧ X m → Y 1 ∧ Y 2 ∧ … ∧ Y n , where X 1 , X 2 , ... X m are preconditions and Y 1 , Y 2 , ... Y n are postconditions of the application of the function f . In intuitionistic logic, the function f is called a realization of this formula. A precondition can be a proposition stating that input data exists, e.g. X i may have the meaning "variable x i has received a value", but it may also denote some other condition, e.g. that resources needed for using the function f are available, etc. A precondition may also be an implication of the same form as the axiom given above; it is then called a subtask. A subtask denotes a function that must be available as an input when the function f is applied. This function itself must be synthesized in the process of SSP. In this case, a realization of the axiom is a higher-order function , i.e., a function that uses another function as an input. For instance, the formula ( state → nextState ) ∧ initialState → result can specify a higher-order function with two inputs and an output result . The first input is a function that has to be synthesized for computing nextState from state , and the second input is initialState . Higher-order functions give generality to SSP: any control structure needed in a synthesized program can be preprogrammed and then used automatically with a respective specification. 
In particular, the last axiom presented here is a specification of a complex program – a simulation engine for simulating dynamic systems on models where nextState can be computed from state of the system.
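The propositional core of SSP without subtasks can be sketched in a few lines: each applicability axiom pairs its preconditions and postconditions with the function that realizes it, and forward chaining composes a straight-line program. This is only a hedged illustration of the idea, not the PRIZ or CoCoViLa implementation, and all axiom and function names are hypothetical.

```python
# A minimal sketch of the propositional core of SSP without subtasks:
# each axiom says that a realizing function computes some propositions
# (postconditions) from others (preconditions).  Forward chaining then
# composes a straight-line program deriving a goal from the given inputs.
# All axiom and function names below are illustrative assumptions.

from typing import FrozenSet, List, Tuple

Axiom = Tuple[FrozenSet[str], FrozenSet[str], str]  # (pre, post, function name)

def synthesize(axioms: List[Axiom], given: FrozenSet[str], goal: str) -> List[str]:
    """Return a list of function names whose composition derives `goal`."""
    known = set(given)
    plan: List[str] = []
    changed = True
    while goal not in known and changed:
        changed = False
        for pre, post, name in axioms:
            if pre <= known and not post <= known:
                known |= post          # the postconditions become derivable
                plan.append(name)      # record the applied component
                changed = True
    if goal not in known:
        raise ValueError("goal is not derivable from the given inputs")
    return plan

if __name__ == "__main__":
    axioms = [
        (frozenset({"a", "b"}), frozenset({"c"}), "f1"),  # a ∧ b → c
        (frozenset({"c"}),      frozenset({"d"}), "f2"),  # c → d
    ]
    print(synthesize(axioms, frozenset({"a", "b"}), "d"))  # ['f1', 'f2']
```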
https://en.wikipedia.org/wiki/Structural_synthesis_of_programs
The term structural system or structural frame in structural engineering refers to the load -resisting sub-system of a building or object. The structural system transfers loads through interconnected elements or members. Commonly used structures can be classified into five major categories, depending on the type of primary stress that may arise in the members of the structures under major design loads. However, any two or more of the basic structural types may be combined in a single structure, such as a building or a bridge, in order to meet the structure's functional requirements. [ 1 ] The structural system of a high-rise building is designed to cope with vertical gravity loads as well as lateral loads caused by wind or seismic activity. The structural system consists only of the members designed to carry the loads; all other members are referred to as non-structural. A classification for the structural system of a high-rise was introduced in 1969 by Fazlur Khan and was extended to incorporate interior and exterior structures. The primary lateral load-resisting system defines whether a structural system is an interior or an exterior one. [ 2 ] Several structural systems are possible within each of these two groups.
https://en.wikipedia.org/wiki/Structural_system
In polymer chemistry , a structural unit is a building block of a polymer chain. It is the result of a monomer which has been polymerized into a long chain. There may be more than one structural unit in the repeat unit . When different monomers are polymerized, a copolymer is formed; copolymerization is a routine way of developing new properties for new materials. Consider the example of polyethylene terephthalate (PET or "polyester"). The monomers which could be used to create this polymer are ethylene glycol and terephthalic acid : HO-CH 2 -CH 2 -OH and HOOC-C 6 H 4 -COOH. In the polymer, there are two structural units, which are -O-CH 2 -CH 2 -O- and -CO-C 6 H 4 -CO-. The repeat unit is -CH 2 -CH 2 -O-CO-C 6 H 4 -CO-O-. The functionality of a monomeric structural unit is defined as the number of covalent bonds which it forms with other reactants. [ 1 ] A structural unit in a linear polymer chain segment forms two bonds and is therefore bifunctional , as for the PET structural units above. Other values of functionality exist. Unless the macromolecule is cyclic, it will have monovalent structural units at each end of the polymer chain. In branched polymers , there are trifunctional units at each branch point. For example, in the synthesis of PET, a small fraction of the ethylene glycol can be replaced by glycerol , which has three alcohol groups. This trifunctional molecule inserts itself in the polymeric chain and bonds to three carboxylic acid groups, forming a branch point. Finally, the formation of cross-linked polymers involves tetrafunctional structural units. For example, in the synthesis of cross-linked polystyrene , a small fraction of monomeric styrene (or vinylbenzene) is replaced by 1,4- divinylbenzene (or para -divinylbenzene). Each of the two vinyl groups is inserted into a polymeric chain, so that the tetravalent unit is inserted into both chains, linking them together.
https://en.wikipedia.org/wiki/Structural_unit
In computing , a structural vulnerability is an IT system weakness that consists of several so-called component vulnerabilities . This type of weakness generally emerges due to several system architecture flaws. An example of a structural vulnerability is a person working in a critical part of the system with no security training, who doesn’t follow the software patch cycles and who is likely to disclose critical information in a phishing attack. [ 1 ]
https://en.wikipedia.org/wiki/Structural_vulnerability_(computing)
Structuralism is a theory in the philosophy of mathematics that holds that mathematical theories describe structures of mathematical objects . Mathematical objects are exhaustively defined by their place in such structures. Consequently, structuralism maintains that mathematical objects do not possess any intrinsic properties but are defined by their external relations in a system. For instance, structuralism holds that the number 1 is exhaustively defined by being the successor of 0 in the structure of the theory of natural numbers . By generalization of this example, any natural number is defined by its respective place in that theory. Other examples of mathematical objects might include lines and planes in geometry , or elements and operations in abstract algebra . Structuralism is an epistemologically realistic view in that it holds that mathematical statements have an objective truth value . However, its central claim only relates to what kind of entity a mathematical object is, not to what kind of existence mathematical objects or structures have (not, in other words, to their ontology ). The kind of existence that mathematical objects have would be dependent on that of the structures in which they are embedded; different sub-varieties of structuralism make different ontological claims in this regard. [ 1 ] Structuralism in the philosophy of mathematics is particularly associated with Paul Benacerraf , Geoffrey Hellman , Michael Resnik , Stewart Shapiro and James Franklin . The historical motivation for the development of structuralism derives from a fundamental problem of ontology . Since Medieval times, philosophers have argued as to whether the ontology of mathematics contains abstract objects . In the philosophy of mathematics, an abstract object is traditionally defined as an entity that: (1) exists independent of the mind; (2) exists independent of the empirical world; and (3) has eternal, unchangeable properties. Traditional mathematical Platonism maintains that some set of mathematical elements— natural numbers , real numbers , functions , relations , systems —are such abstract objects. Contrarily, mathematical nominalism denies the existence of any such abstract objects in the ontology of mathematics. In the late 19th and early 20th century, a number of anti-Platonist programs gained in popularity. These included intuitionism , formalism , and predicativism . By the mid-20th century, however, these anti-Platonist theories had a number of their own issues. This subsequently resulted in a resurgence of interest in Platonism. It was in this historic context that the motivations for structuralism developed. In 1965, Paul Benacerraf published an article entitled "What Numbers Could Not Be". [ 2 ] Benacerraf concluded, on two principal arguments, that set-theoretic Platonism cannot succeed as a philosophical theory of mathematics. Firstly, Benacerraf argued that Platonic approaches do not pass the ontological test. [ 2 ] He developed an argument against the ontology of set-theoretic Platonism, which is now historically referred to as Benacerraf's identification problem . Benacerraf noted that there are elementarily equivalent , set-theoretic ways of relating natural numbers to pure sets . However, if someone asks for the "true" identity statements for relating natural numbers to pure sets, then different set-theoretic methods yield contradictory identity statements when these elementarily equivalent sets are related together. [ 2 ] This generates a set-theoretic falsehood. 
Consequently, Benacerraf inferred that this set-theoretic falsehood demonstrates it is impossible for there to be any Platonic method of reducing numbers to sets that reveals any abstract objects. Secondly, Benacerraf argued that Platonic approaches do not pass the epistemological test. Benacerraf contended that there does not exist an empirical or rational method for accessing abstract objects. If mathematical objects are not spatial or temporal, then Benacerraf infers that such objects are not accessible through the causal theory of knowledge . [ 3 ] The fundamental epistemological problem thus arises for the Platonist to offer a plausible account of how a mathematician with a limited, empirical mind is capable of accurately accessing mind-independent, world-independent, eternal truths. It was from these considerations, the ontological argument and the epistemological argument, that Benacerraf's anti-Platonic critiques motivated the development of structuralism in the philosophy of mathematics. Stewart Shapiro divides structuralism into three major schools of thought. [ 4 ] These schools are referred to as the ante rem , the in re , and the post rem .
https://en.wikipedia.org/wiki/Structuralism_(philosophy_of_mathematics)
Structure-Based Assignment (SBA) is a technique to accelerate resonance assignment, which is a key bottleneck of NMR ( Nuclear magnetic resonance ) structural biology. [ 1 ] A homologous (similar) protein is used as a template for the target protein in SBA. This template protein provides prior structural information about the target protein and leads to faster resonance assignment . By analogy, in X-ray crystallography , the molecular replacement technique allows solution of the crystallographic phase problem when a homologous structural model is known, thereby facilitating rapid structure determination. [ 2 ] Some SBA algorithms are CAP, an RNA assignment algorithm that performs an exhaustive search over all permutations; [ 3 ] MARS, a program for robust automatic backbone assignment; [ 4 ] and Nuclear Vector Replacement (NVR), a molecular-replacement-like approach for SBA of resonances and sparse Nuclear Overhauser Effect (NOE) data. [ 5 ] [ 6 ] [ 7 ]
https://en.wikipedia.org/wiki/Structure-based_assignment
Structure-based combinatorial protein engineering ( SCOPE ) is a synthetic biology technique for creating gene libraries ( lineages ) of defined composition designed from structural and probabilistic constraints of the encoded proteins. The development of this technique was driven by fundamental questions about protein structure , function , and evolution, although the technique is generally applicable for the creation of engineered proteins with commercially desirable properties. Combinatorial travel through sequence spacetime is the goal of SCOPE. At its inception, SCOPE was developed as a homology -independent recombination technique to enable the creation of multiple crossover libraries from distantly related genes. In this application, an "exon plate tectonics " design strategy was devised to assemble "equivalent" elements of structure ( continental plates ) with variability in the junctions linking them ( fault lines ) to explore global protein space. To create the corresponding library of genes, the breeding scheme of Gregor Mendel was adapted into a PCR strategy to selectively cross hybrid genes, a process of iterative inbreeding to create all possible combinations of coding segments with variable linkages. Genetic complementation in temperature-sensitive E. coli was used as the selection system to successfully identify functional hybrid DNA polymerases of minimal architecture with enhanced phenotypes . SCOPE was then used to construct a synthetic enzyme lineage, which was biochemically characterized to recapitulate the evolutionary divergence of two modern-day enzymes. The rapid evolvability of chemical diversity in terpene synthases was demonstrated through processes akin to both Darwinian gradualism and saltation : some mutational pathways show steady, additive changes, whereas others show drastic jumps between contrasting product specificities with single mutational steps. Further, a metric was devised to describe the chemical distance of mutational steps to derive a chemical-based phylogeny relating sequence variation to chemical output. These examples establish SCOPE as a standardized method for the construction of synthetic gene libraries from closely or distantly related parental sequences to identify functional novelty among the encoded proteins.
https://en.wikipedia.org/wiki/Structure-based_combinatorial_protein_engineering
In universal algebra and in model theory , a structure consists of a set along with a collection of finitary operations and relations that are defined on it. Universal algebra studies structures that generalize the algebraic structures such as groups , rings , fields and vector spaces . The term universal algebra is used for structures of first-order theories with no relation symbols . [ 1 ] Model theory has a different scope that encompasses more arbitrary first-order theories , including foundational structures such as models of set theory . From the model-theoretic point of view, structures are the objects used to define the semantics of first-order logic , cf. also Tarski's theory of truth or Tarskian semantics . For a given theory in model theory, a structure is called a model if it satisfies the defining axioms of that theory, although it is sometimes disambiguated as a semantic model when one discusses the notion in the more general setting of mathematical models . Logicians sometimes refer to structures as " interpretations ", [ 2 ] whereas the term "interpretation" generally has a different (although related) meaning in model theory; see interpretation (model theory) . In database theory , structures with no functions are studied as models for relational databases , in the form of relational models . In the context of mathematical logic, the term " model " was first applied in 1940 by the philosopher Willard Van Orman Quine , in a reference to mathematician Richard Dedekind (1831–1916), a pioneer in the development of set theory . [ 3 ] [ 4 ] Since the 19th century, one main method for proving the consistency of a set of axioms has been to provide a model for it. Formally, a structure can be defined as a triple A = ( A , σ , I ) {\displaystyle {\mathcal {A}}=(A,\sigma ,I)} consisting of a domain A , {\displaystyle A,} a signature σ , {\displaystyle \sigma ,} and an interpretation function I {\displaystyle I} that indicates how the signature is to be interpreted on the domain. To indicate that a structure has a particular signature σ {\displaystyle \sigma } one can refer to it as a σ {\displaystyle \sigma } -structure. The domain of a structure is an arbitrary set; it is also called the underlying set of the structure, its carrier (especially in universal algebra), its universe (especially in model theory, cf. universe ), or its domain of discourse . In classical first-order logic, the definition of a structure prohibits the empty domain . [ citation needed ] [ 5 ] Sometimes the notation dom ⁡ ( A ) {\displaystyle \operatorname {dom} ({\mathcal {A}})} or | A | {\displaystyle |{\mathcal {A}}|} is used for the domain of A , {\displaystyle {\mathcal {A}},} but often no notational distinction is made between a structure and its domain (that is, the same symbol A {\displaystyle {\mathcal {A}}} refers both to the structure and its domain.) [ 6 ] The signature σ = ( S , ar ) {\displaystyle \sigma =(S,\operatorname {ar} )} of a structure consists of: The natural number n = ar ⁡ ( s ) {\displaystyle n=\operatorname {ar} (s)} of a symbol s {\displaystyle s} is called the arity of s {\displaystyle s} because it is the arity of the interpretation [ clarification needed ] of s . {\displaystyle s.} Since the signatures that arise in algebra often contain only function symbols, a signature with no relation symbols is called an algebraic signature . A structure with such a signature is also called an algebra ; this should not be confused with the notion of an algebra over a field . 
The interpretation function I {\displaystyle I} of A {\displaystyle {\mathcal {A}}} assigns functions and relations to the symbols of the signature. To each function symbol f {\displaystyle f} of arity n {\displaystyle n} is assigned an n {\displaystyle n} -ary function f A = I ( f ) {\displaystyle f^{\mathcal {A}}=I(f)} on the domain. Each relation symbol R {\displaystyle R} of arity n {\displaystyle n} is assigned an n {\displaystyle n} -ary relation R A = I ( R ) ⊆ A a r ( R ) {\displaystyle R^{\mathcal {A}}=I(R)\subseteq A^{\operatorname {ar(R)} }} on the domain. A nullary ( = 0 {\displaystyle =\,0} -ary) function symbol c {\displaystyle c} is called a constant symbol , because its interpretation I ( c ) {\displaystyle I(c)} can be identified with a constant element of the domain. When a structure (and hence an interpretation function) is given by context, no notational distinction is made between a symbol s {\displaystyle s} and its interpretation I ( s ) . {\displaystyle I(s).} For example, if f {\displaystyle f} is a binary function symbol of A , {\displaystyle {\mathcal {A}},} one simply writes f : A 2 → A {\displaystyle f:{\mathcal {A}}^{2}\to {\mathcal {A}}} rather than f A : | A | 2 → | A | . {\displaystyle f^{\mathcal {A}}:|{\mathcal {A}}|^{2}\to |{\mathcal {A}}|.} The standard signature σ f {\displaystyle \sigma _{f}} for fields consists of two binary function symbols + {\displaystyle \mathbf {+} } and × {\displaystyle \mathbf {\times } } where additional symbols can be derived, such as a unary function symbol − {\displaystyle \mathbf {-} } (uniquely determined by + {\displaystyle \mathbf {+} } ) and the two constant symbols 0 {\displaystyle \mathbf {0} } and 1 {\displaystyle \mathbf {1} } (uniquely determined by + {\displaystyle \mathbf {+} } and × {\displaystyle \mathbf {\times } } respectively). Thus a structure (algebra) for this signature consists of a set of elements A {\displaystyle A} together with two binary functions, that can be enhanced with a unary function, and two distinguished elements; but there is no requirement that it satisfy any of the field axioms. The rational numbers Q , {\displaystyle \mathbb {Q} ,} the real numbers R {\displaystyle \mathbb {R} } and the complex numbers C , {\displaystyle \mathbb {C} ,} like any other field, can be regarded as σ {\displaystyle \sigma } -structures in an obvious way: Q = ( Q , σ f , I Q ) R = ( R , σ f , I R ) C = ( C , σ f , I C ) {\displaystyle {\begin{alignedat}{3}{\mathcal {Q}}&=(\mathbb {Q} ,\sigma _{f},I_{\mathcal {Q}})\\{\mathcal {R}}&=(\mathbb {R} ,\sigma _{f},I_{\mathcal {R}})\\{\mathcal {C}}&=(\mathbb {C} ,\sigma _{f},I_{\mathcal {C}})\\\end{alignedat}}} In all three cases we have the standard signature given by σ f = ( S f , ar f ) {\displaystyle \sigma _{f}=(S_{f},\operatorname {ar} _{f})} with [ 7 ] S f = { + , × , − , 0 , 1 } {\displaystyle S_{f}=\{+,\times ,-,0,1\}} and ar f ( + ) = 2 , ar f ( × ) = 2 , ar f ( − ) = 1 , ar f ( 0 ) = 0 , ar f ( 1 ) = 0. {\displaystyle {\begin{alignedat}{3}\operatorname {ar} _{f}&(+)&&=2,\\\operatorname {ar} _{f}&(\times )&&=2,\\\operatorname {ar} _{f}&(-)&&=1,\\\operatorname {ar} _{f}&(0)&&=0,\\\operatorname {ar} _{f}&(1)&&=0.\\\end{alignedat}}} The interpretation function I Q {\displaystyle I_{\mathcal {Q}}} is: and I R {\displaystyle I_{\mathcal {R}}} and I C {\displaystyle I_{\mathcal {C}}} are similarly defined. [ 7 ] But the ring Z {\displaystyle \mathbb {Z} } of integers , which is not a field, is also a σ f {\displaystyle \sigma _{f}} -structure in the same way. 
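As a concrete, purely illustrative rendering of the triple (A, σ, I), the following Python sketch represents the integers modulo 5 as a σ_f-structure and evaluates a term. The data layout and helper names are assumptions made for this example; they are not a standard library API.

```python
# A small illustrative sketch of a finite sigma_f-structure represented as a
# triple (domain, signature, interpretation): here the integers modulo 5
# with +, x, -, 0, 1.  All identifiers are assumptions made for this example.

domain = set(range(5))

signature = {"+": 2, "*": 2, "-": 1, "0": 0, "1": 0}  # symbol -> arity

interpretation = {
    "+": lambda a, b: (a + b) % 5,
    "*": lambda a, b: (a * b) % 5,
    "-": lambda a: (-a) % 5,
    "0": lambda: 0,      # nullary symbols are interpreted as constants
    "1": lambda: 1,
}

def evaluate(term):
    """Evaluate a term given as ('symbol', arg_terms...) or a domain element."""
    if not isinstance(term, tuple):
        return term                      # already an element of the domain
    symbol, *args = term
    assert signature[symbol] == len(args), "arity mismatch"
    return interpretation[symbol](*(evaluate(a) for a in args))

# (1 + 1) * (-1)  evaluates to 2 * 4 = 8 = 3 (mod 5)
print(evaluate(("*", ("+", ("1",), ("1",)), ("-", ("1",)))))  # -> 3
```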
In fact, there is no requirement that any of the field axioms hold in a σ f {\displaystyle \sigma _{f}} -structure. A signature for ordered fields needs an additional binary relation such as < {\displaystyle \,<\,} or ≤ , {\displaystyle \,\leq ,\,} and therefore structures for such a signature are not algebras, even though they are of course algebraic structures in the usual, loose sense of the word. The ordinary signature for set theory includes a single binary relation ∈ . {\displaystyle \in .} A structure for this signature consists of a set of elements and an interpretation of the ∈ {\displaystyle \in } relation as a binary relation on these elements. A {\displaystyle {\mathcal {A}}} is called an (induced) substructure of B {\displaystyle {\mathcal {B}}} if The usual notation for this relation is A ⊆ B . {\displaystyle {\mathcal {A}}\subseteq {\mathcal {B}}.} A subset B ⊆ | A | {\displaystyle B\subseteq |{\mathcal {A}}|} of the domain of a structure A {\displaystyle {\mathcal {A}}} is called closed if it is closed under the functions of A , {\displaystyle {\mathcal {A}},} that is, if the following condition is satisfied: for every natural number n , {\displaystyle n,} every n {\displaystyle n} -ary function symbol f {\displaystyle f} (in the signature of A {\displaystyle {\mathcal {A}}} ) and all elements b 1 , b 2 , … , b n ∈ B , {\displaystyle b_{1},b_{2},\dots ,b_{n}\in B,} the result of applying f {\displaystyle f} to the n {\displaystyle n} -tuple b 1 b 2 … b n {\displaystyle b_{1}b_{2}\dots b_{n}} is again an element of B : {\displaystyle B:} f ( b 1 , b 2 , … , b n ) ∈ B . {\displaystyle f(b_{1},b_{2},\dots ,b_{n})\in B.} For every subset B ⊆ | A | {\displaystyle B\subseteq |{\mathcal {A}}|} there is a smallest closed subset of | A | {\displaystyle |{\mathcal {A}}|} that contains B . {\displaystyle B.} It is called the closed subset generated by B , {\displaystyle B,} or the hull of B , {\displaystyle B,} and denoted by ⟨ B ⟩ {\displaystyle \langle B\rangle } or ⟨ B ⟩ A {\displaystyle \langle B\rangle _{\mathcal {A}}} . The operator ⟨ ⟩ {\displaystyle \langle \rangle } is a finitary closure operator on the set of subsets of | A | {\displaystyle |{\mathcal {A}}|} . If A = ( A , σ , I ) {\displaystyle {\mathcal {A}}=(A,\sigma ,I)} and B ⊆ A {\displaystyle B\subseteq A} is a closed subset, then ( B , σ , I ′ ) {\displaystyle (B,\sigma ,I')} is an induced substructure of A , {\displaystyle {\mathcal {A}},} where I ′ {\displaystyle I'} assigns to every symbol of σ the restriction to B {\displaystyle B} of its interpretation in A . {\displaystyle {\mathcal {A}}.} Conversely, the domain of an induced substructure is a closed subset. The closed subsets (or induced substructures) of a structure form a lattice . The meet of two subsets is their intersection. The join of two subsets is the closed subset generated by their union. Universal algebra studies the lattice of substructures of a structure in detail. Let σ = { + , × , − , 0 , 1 } {\displaystyle \sigma =\{+,\times ,-,0,1\}} be again the standard signature for fields. When regarded as σ {\displaystyle \sigma } -structures in the natural way, the rational numbers form a substructure of the real numbers , and the real numbers form a substructure of the complex numbers . The rational numbers are the smallest substructure of the real (or complex) numbers that also satisfies the field axioms. The set of integers gives an even smaller substructure of the real numbers which is not a field. 
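The closure operator ⟨ ⟩ can be computed directly in a small finite structure. The sketch below, again only an illustration with assumed helper names, generates the closed subset of (Z mod 12, +, 0) generated by {4}.

```python
# A hedged, illustrative sketch of the finitary closure operator <B>: the
# smallest subset of the domain containing B and closed under the structure's
# functions.  The example structure is (Z mod 12, +, 0).

from itertools import product

def hull(domain, functions, generators):
    """Return the closed subset of `domain` generated by `generators`.

    `functions` is a list of (arity, callable) pairs interpreting the
    function symbols; constants are functions of arity 0.
    """
    closed = set(generators)
    changed = True
    while changed:
        changed = False
        for arity, f in functions:
            for args in product(closed, repeat=arity):
                value = f(*args)
                if value not in closed:
                    closed.add(value)
                    changed = True
    return closed

domain = set(range(12))
functions = [(2, lambda a, b: (a + b) % 12), (0, lambda: 0)]

# The substructure of (Z mod 12, +, 0) generated by {4} is {0, 4, 8}.
print(sorted(hull(domain, functions, {4})))  # -> [0, 4, 8]
```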
Indeed, the integers are the substructure of the real numbers generated by the empty set, using this signature. The notion in abstract algebra that corresponds to a substructure of a field, in this signature, is that of a subring , rather than that of a subfield . The most obvious way to define a graph is a structure with a signature σ {\displaystyle \sigma } consisting of a single binary relation symbol E . {\displaystyle E.} The vertices of the graph form the domain of the structure, and for two vertices a {\displaystyle a} and b , {\displaystyle b,} ( a , b ) ∈ E {\displaystyle (a,b)\!\in {\text{E}}} means that a {\displaystyle a} and b {\displaystyle b} are connected by an edge. In this encoding, the notion of induced substructure is more restrictive than the notion of subgraph . For example, let G {\displaystyle G} be a graph consisting of two vertices connected by an edge, and let H {\displaystyle H} be the graph consisting of the same vertices but no edges. H {\displaystyle H} is a subgraph of G , {\displaystyle G,} but not an induced substructure. The notion in graph theory that corresponds to induced substructures is that of induced subgraphs . Given two structures A {\displaystyle {\mathcal {A}}} and B {\displaystyle {\mathcal {B}}} of the same signature σ, a (σ-)homomorphism from A {\displaystyle {\mathcal {A}}} to B {\displaystyle {\mathcal {B}}} is a map h : | A | → | B | {\displaystyle h:|{\mathcal {A}}|\rightarrow |{\mathcal {B}}|} that preserves the functions and relations. More precisely: where R A {\displaystyle R^{\mathcal {A}}} , R B {\displaystyle R^{\mathcal {B}}} is the interpretation of the relation symbol R {\displaystyle R} of the object theory in the structure A {\displaystyle {\mathcal {A}}} , B {\displaystyle {\mathcal {B}}} respectively. A homomorphism h from A {\displaystyle {\mathcal {A}}} to B {\displaystyle {\mathcal {B}}} is typically denoted as h : A → B {\displaystyle h:{\mathcal {A}}\rightarrow {\mathcal {B}}} , although technically the function h is between the domains | A | {\displaystyle |{\mathcal {A}}|} , | B | {\displaystyle |{\mathcal {B}}|} of the two structures A {\displaystyle {\mathcal {A}}} , B {\displaystyle {\mathcal {B}}} . For every signature σ there is a concrete category σ- Hom which has σ-structures as objects and σ-homomorphisms as morphisms . A homomorphism h : A → B {\displaystyle h:{\mathcal {A}}\rightarrow {\mathcal {B}}} is sometimes called strong if: The strong homomorphisms give rise to a subcategory of the category σ- Hom that was defined above. A (σ-)homomorphism h : A → B {\displaystyle h:{\mathcal {A}}\rightarrow {\mathcal {B}}} is called a (σ-) embedding if it is one-to-one and (where as before R A {\displaystyle R^{\mathcal {A}}} , R B {\displaystyle R^{\mathcal {B}}} refers to the interpretation of the relation symbol R of the object theory σ in the structure A {\displaystyle {\mathcal {A}}} , B {\displaystyle {\mathcal {B}}} respectively). Thus an embedding is the same thing as a strong homomorphism which is one-to-one. The category σ- Emb of σ-structures and σ-embeddings is a concrete subcategory of σ- Hom . Induced substructures correspond to subobjects in σ- Emb . If σ has only function symbols, σ- Emb is the subcategory of monomorphisms of σ- Hom . In this case induced substructures also correspond to subobjects in σ- Hom . As seen above, in the standard encoding of graphs as structures the induced substructures are precisely the induced subgraphs. 
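The encoding of graphs as structures with a single binary relation E, and the difference between a subgraph and an induced substructure, can be checked mechanically. The short sketch below uses hypothetical helper names and mirrors the two-vertex example just given.

```python
# An illustrative sketch (assumed helper names) of the graph-as-structure
# encoding: a single binary relation symbol E interpreted as the edge set.
# It checks whether a structure on a subset of the vertices is an induced
# substructure, mirroring the G / H example in the text.

def is_induced_substructure(sub_vertices, sub_edges, vertices, edges):
    """True iff (sub_vertices, sub_edges) is the substructure of
    (vertices, edges) induced on sub_vertices: the relation E must be
    the restriction of the larger relation to the smaller domain."""
    if not sub_vertices <= vertices:
        return False
    restriction = {(a, b) for (a, b) in edges
                   if a in sub_vertices and b in sub_vertices}
    return sub_edges == restriction

G_vertices = {1, 2}
G_edges = {(1, 2), (2, 1)}        # one undirected edge stored both ways

H_edges = set()                   # same vertices, no edges

print(is_induced_substructure({1, 2}, H_edges, G_vertices, G_edges))  # False
print(is_induced_substructure({1, 2}, G_edges, G_vertices, G_edges))  # True
```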
However, a homomorphism between graphs is the same thing as a homomorphism between the two structures coding the graph. In the example of the previous section, even though the subgraph H of G is not induced, the identity map id: H → G is a homomorphism. This map is in fact a monomorphism in the category σ- Hom , and therefore H is a subobject of G which is not an induced substructure. The following problem is known as the homomorphism problem : Every constraint satisfaction problem (CSP) has a translation into the homomorphism problem. [ 8 ] Therefore, the complexity of CSP can be studied using the methods of finite model theory . Another application is in database theory , where a relational model of a database is essentially the same thing as a relational structure. It turns out that a conjunctive query on a database can be described by another structure in the same signature as the database model. A homomorphism from the relational model to the structure representing the query is the same thing as a solution to the query. This shows that the conjunctive query problem is also equivalent to the homomorphism problem. Structures are sometimes referred to as "first-order structures". This is misleading, as nothing in their definition ties them to any specific logic, and in fact they are suitable as semantic objects both for very restricted fragments of first-order logic such as that used in universal algebra, and for second-order logic . In connection with first-order logic and model theory, structures are often called models , even when the question "models of what?" has no obvious answer. Each first-order structure M = ( M , σ , I ) {\displaystyle {\mathcal {M}}=(M,\sigma ,I)} has a satisfaction relation M ⊨ ϕ {\displaystyle {\mathcal {M}}\vDash \phi } defined for all formulas ϕ {\displaystyle \,\phi } in the language consisting of the language of M {\displaystyle {\mathcal {M}}} together with a constant symbol for each element of M , {\displaystyle M,} which is interpreted as that element. This relation is defined inductively using Tarski's T-schema . A structure M {\displaystyle {\mathcal {M}}} is said to be a model of a theory T {\displaystyle T} if the language of M {\displaystyle {\mathcal {M}}} is the same as the language of T {\displaystyle T} and every sentence in T {\displaystyle T} is satisfied by M . {\displaystyle {\mathcal {M}}.} Thus, for example, a "ring" is a structure for the language of rings that satisfies each of the ring axioms, and a model of ZFC set theory is a structure in the language of set theory that satisfies each of the ZFC axioms. An n {\displaystyle n} -ary relation R {\displaystyle R} on the universe (i.e. domain) M {\displaystyle M} of the structure M {\displaystyle {\mathcal {M}}} is said to be definable (or explicitly definable cf. Beth definability , or ∅ {\displaystyle \emptyset } - definable , or definable with parameters from ∅ {\displaystyle \emptyset } cf. below) if there is a formula φ ( x 1 , … , x n ) {\displaystyle \varphi (x_{1},\ldots ,x_{n})} such that R = { ( a 1 , … , a n ) ∈ M n : M ⊨ φ ( a 1 , … , a n ) } . {\displaystyle R=\{(a_{1},\ldots ,a_{n})\in M^{n}:{\mathcal {M}}\vDash \varphi (a_{1},\ldots ,a_{n})\}.} In other words, R {\displaystyle R} is definable if and only if there is a formula φ {\displaystyle \varphi } such that ( a 1 , … , a n ) ∈ R ⇔ M ⊨ φ ( a 1 , … , a n ) {\displaystyle (a_{1},\ldots ,a_{n})\in R\Leftrightarrow {\mathcal {M}}\vDash \varphi (a_{1},\ldots ,a_{n})} is correct. 
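For a finite structure, definability can be made tangible by evaluating the defining formula over the whole domain. The sketch below uses a toy structure and assumed names to compute the relation defined by ∃y (y·y = x) in the integers modulo 5.

```python
# An illustrative check (assumed names, finite toy structure) that a relation
# is definable without parameters: in (Z mod 5, +, *, -, 0, 1), the unary
# relation "x is a square" is defined by the formula  phi(x) = exists y (y*y = x).

domain = range(5)

def mul(a, b):
    return (a * b) % 5

def phi(x):
    """exists y (y * y = x), evaluated by quantifying over the finite domain."""
    return any(mul(y, y) == x for y in domain)

R = {x for x in domain if phi(x)}   # the relation defined by phi
print(sorted(R))                    # -> [0, 1, 4]  (the squares mod 5)
```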
An important special case is the definability of specific elements. An element m {\displaystyle m} of M {\displaystyle M} is definable in M {\displaystyle {\mathcal {M}}} if and only if there is a formula φ ( x ) {\displaystyle \varphi (x)} such that M ⊨ ∀ x ( x = m ↔ φ ( x ) ) . {\displaystyle {\mathcal {M}}\vDash \forall x(x=m\leftrightarrow \varphi (x)).} A relation R {\displaystyle R} is said to be definable with parameters (or | M | {\displaystyle |{\mathcal {M}}|} - definable ) if there is a formula φ {\displaystyle \varphi } with parameters [ clarification needed ] from M {\displaystyle {\mathcal {M}}} such that R {\displaystyle R} is definable using φ . {\displaystyle \varphi .} Every element of a structure is definable using the element itself as a parameter. Some authors use definable to mean definable without parameters , [ citation needed ] while other authors mean definable with parameters . [ citation needed ] Broadly speaking, the convention that definable means definable without parameters is more common amongst set theorists, while the opposite convention is more common amongst model theorists. Recall from above that an n {\displaystyle n} -ary relation R {\displaystyle R} on the universe M {\displaystyle M} of M {\displaystyle {\mathcal {M}}} is explicitly definable if there is a formula φ ( x 1 , … , x n ) {\displaystyle \varphi (x_{1},\ldots ,x_{n})} such that R = { ( a 1 , … , a n ) ∈ M n : M ⊨ φ ( a 1 , … , a n ) } . {\displaystyle R=\{(a_{1},\ldots ,a_{n})\in M^{n}:{\mathcal {M}}\vDash \varphi (a_{1},\ldots ,a_{n})\}.} Here the formula φ {\displaystyle \varphi } used to define a relation R {\displaystyle R} must be over the signature of M {\displaystyle {\mathcal {M}}} and so φ {\displaystyle \varphi } may not mention R {\displaystyle R} itself, since R {\displaystyle R} is not in the signature of M . {\displaystyle {\mathcal {M}}.} If there is a formula φ {\displaystyle \varphi } in the extended language containing the language of M {\displaystyle {\mathcal {M}}} and a new symbol R , {\displaystyle R,} and the relation R {\displaystyle R} is the only relation on M {\displaystyle {\mathcal {M}}} such that M ⊨ φ , {\displaystyle {\mathcal {M}}\vDash \varphi ,} then R {\displaystyle R} is said to be implicitly definable over M . {\displaystyle {\mathcal {M}}.} By Beth's theorem , every implicitly definable relation is explicitly definable. Structures as defined above are sometimes called one-sorted structure s to distinguish them from the more general many-sorted structure s . A many-sorted structure can have an arbitrary number of domains. The sorts are part of the signature, and they play the role of names for the different domains. Many-sorted signatures also prescribe which sorts the functions and relations of a many-sorted structure are defined on. Therefore, the arities of function symbols or relation symbols must be more complicated objects such as tuples of sorts rather than natural numbers. Vector spaces , for example, can be regarded as two-sorted structures in the following way. 
The two-sorted signature of vector spaces consists of two sorts V (for vectors) and S (for scalars) and the following function symbols: If V is a vector space over a field F , the corresponding two-sorted structure V {\displaystyle {\mathcal {V}}} consists of the vector domain | V | V = V {\displaystyle |{\mathcal {V}}|_{V}=V} , the scalar domain | V | S = F {\displaystyle |{\mathcal {V}}|_{S}=F} , and the obvious functions, such as the vector zero 0 V V = 0 ∈ | V | V {\displaystyle 0_{V}^{\mathcal {V}}=0\in |{\mathcal {V}}|_{V}} , the scalar zero 0 S V = 0 ∈ | V | S {\displaystyle 0_{S}^{\mathcal {V}}=0\in |{\mathcal {V}}|_{S}} , or scalar multiplication × V : | V | S × | V | V → | V | V {\displaystyle \times ^{\mathcal {V}}:|{\mathcal {V}}|_{S}\times |{\mathcal {V}}|_{V}\rightarrow |{\mathcal {V}}|_{V}} . Many-sorted structures are often used as a convenient tool even when they could be avoided with a little effort. But they are rarely defined in a rigorous way, because it is straightforward and tedious (hence unrewarding) to carry out the generalization explicitly. In most mathematical endeavours, not much attention is paid to the sorts. A many-sorted logic however naturally leads to a type theory . As Bart Jacobs puts it: "A logic is always a logic over a type theory." This emphasis in turn leads to categorical logic because a logic over a type theory categorically corresponds to one ("total") category, capturing the logic, being fibred over another ("base") category, capturing the type theory. [ 9 ] Both universal algebra and model theory study classes of (structures or) algebras that are defined by a signature and a set of axioms. In the case of model theory these axioms have the form of first-order sentences. The formalism of universal algebra is much more restrictive; essentially it only allows first-order sentences that have the form of universally quantified equations between terms, e.g. ∀ {\displaystyle \forall } x ∀ {\displaystyle \forall } y ( x + y = y + x ). One consequence is that the choice of a signature is more significant in universal algebra than it is in model theory. For example, the class of groups, in the signature consisting of the binary function symbol × and the constant symbol 1, is an elementary class , but it is not a variety . Universal algebra solves this problem by adding a unary function symbol −1 . In the case of fields this strategy works only for addition. For multiplication it fails because 0 does not have a multiplicative inverse. An ad hoc attempt to deal with this would be to define 0 −1 = 0. (This attempt fails, essentially because with this definition 0 × 0 −1 = 1 is not true.) Therefore, one is naturally led to allow partial functions, i.e., functions that are defined only on a subset of their domain. However, there are several obvious ways to generalize notions such as substructure, homomorphism and identity. In type theory , there are many sorts of variables, each of which has a type . Types are inductively defined; given two types δ and σ there is also a type σ → δ that represents functions from objects of type σ to objects of type δ. A structure for a typed language (in the ordinary first-order semantics) must include a separate set of objects of each type, and for a function type the structure must have complete information about the function represented by each object of that type. There is more than one possible semantics for higher-order logic , as discussed in the article on second-order logic . 
When using full higher-order semantics, a structure need only have a universe for objects of type 0, and the T-schema is extended so that a quantifier over a higher-order type is satisfied by the model if and only if it is disquotationally true. When using first-order semantics, an additional sort is added for each higher-order type, as in the case of a many sorted first order language. In the study of set theory and category theory , it is sometimes useful to consider structures in which the domain of discourse is a proper class instead of a set. These structures are sometimes called class models to distinguish them from the "set models" discussed above. When the domain is a proper class, each function and relation symbol may also be represented by a proper class. In Bertrand Russell 's Principia Mathematica , structures were also allowed to have a proper class as their domain.
https://en.wikipedia.org/wiki/Structure_(mathematical_logic)
In condensed matter physics and crystallography , the static structure factor (or structure factor for short) is a mathematical description of how a material scatters incident radiation. The structure factor is a critical tool in the interpretation of scattering patterns ( interference patterns ) obtained in X-ray , electron and neutron diffraction experiments. Confusingly, there are two different mathematical expressions in use, both called 'structure factor'. One is usually written S ( q ) {\displaystyle S(\mathbf {q} )} ; it is more generally valid, and relates the observed diffracted intensity per atom to that produced by a single scattering unit. The other is usually written F {\displaystyle F} or F h k ℓ {\displaystyle F_{hk\ell }} and is only valid for systems with long-range positional order — crystals. This expression relates the amplitude and phase of the beam diffracted by the ( h k ℓ ) {\displaystyle (hk\ell )} planes of the crystal ( ( h k ℓ ) {\displaystyle (hk\ell )} are the Miller indices of the planes) to that produced by a single scattering unit at the vertices of the primitive unit cell . F h k ℓ {\displaystyle F_{hk\ell }} is not a special case of S ( q ) {\displaystyle S(\mathbf {q} )} ; S ( q ) {\displaystyle S(\mathbf {q} )} gives the scattering intensity, but F h k ℓ {\displaystyle F_{hk\ell }} gives the amplitude. It is the modulus squared | F h k ℓ | 2 {\displaystyle |F_{hk\ell }|^{2}} that gives the scattering intensity. F h k ℓ {\displaystyle F_{hk\ell }} is defined for a perfect crystal, and is used in crystallography, while S ( q ) {\displaystyle S(\mathbf {q} )} is most useful for disordered systems. For partially ordered systems such as crystalline polymers there is obviously overlap, and experts will switch from one expression to the other as needed. The static structure factor is measured without resolving the energy of scattered photons/electrons/neutrons. Energy-resolved measurements yield the dynamic structure factor . Consider the scattering of a beam of wavelength λ {\displaystyle \lambda } by an assembly of N {\displaystyle N} particles or atoms stationary at positions R j , j = 1 , … , N {\displaystyle \textstyle \mathbf {R} _{j},j=1,\,\ldots ,\,N} . Assume that the scattering is weak, so that the amplitude of the incident beam is constant throughout the sample volume ( Born approximation ), and absorption, refraction and multiple scattering can be neglected ( kinematic diffraction ). The direction of any scattered wave is defined by its scattering vector q {\displaystyle \mathbf {q} } . q = k s − k o {\displaystyle \mathbf {q} =\mathbf {k_{s}} -\mathbf {k_{o}} } , where k s {\displaystyle \mathbf {k_{s}} } and k o {\displaystyle \mathbf {k_{o}} } ( | k s | = | k 0 | = 2 π / λ {\displaystyle |\mathbf {k_{s}} |=|\mathbf {k_{0}} |=2\pi /\lambda } ) are the scattered and incident beam wavevectors , and θ {\displaystyle \theta } is the angle between them. For elastic scattering, | k s | = | k o | {\displaystyle |\mathbf {k} _{s}|=|\mathbf {k_{o}} |} and q = | q | = 4 π λ sin ⁡ ( θ / 2 ) {\displaystyle q=|\mathbf {q} |={{\frac {4\pi }{\lambda }}\sin(\theta /2)}} , limiting the possible range of q {\displaystyle \mathbf {q} } (see Ewald sphere ). 
The amplitude and phase of this scattered wave will be the vector sum of the scattered waves from all the atoms Ψ s ( q ) = ∑ j = 1 N f j e − i q ⋅ R j {\displaystyle \Psi _{s}(\mathbf {q} )=\sum _{j=1}^{N}f_{j}\mathrm {e} ^{-i\mathbf {q} \cdot \mathbf {R} _{j}}} [ 1 ] [ 2 ] For an assembly of atoms, f j {\displaystyle f_{j}} is the atomic form factor of the j {\displaystyle j} -th atom. The scattered intensity is obtained by multiplying this function by its complex conjugate The structure factor is defined as this intensity normalized by 1 / ∑ j = 1 N f j 2 {\displaystyle 1/\sum _{j=1}^{N}f_{j}^{2}} [ 3 ] If all the atoms are identical, then Equation ( 1 ) becomes I ( q ) = f 2 ∑ j = 1 N ∑ k = 1 N e − i q ⋅ ( R j − R k ) {\displaystyle I(\mathbf {q} )=f^{2}\sum _{j=1}^{N}\sum _{k=1}^{N}\mathrm {e} ^{-i\mathbf {q} \cdot (\mathbf {R} _{j}-\mathbf {R} _{k})}} and ∑ j = 1 N f j 2 = N f 2 {\displaystyle \sum _{j=1}^{N}f_{j}^{2}=Nf^{2}} so Another useful simplification is if the material is isotropic, like a powder or a simple liquid. In that case, the intensity depends on q = | q | {\displaystyle q=|\mathbf {q} |} and r j k = | r j − r k | {\displaystyle r_{jk}=|\mathbf {r} _{j}-\mathbf {r} _{k}|} . In three dimensions, Equation ( 2 ) then simplifies to the Debye scattering equation: [ 1 ] An alternative derivation gives good insight, but uses Fourier transforms and convolution . To be general, consider a scalar (real) quantity ϕ ( r ) {\displaystyle \phi (\mathbf {r} )} defined in a volume V {\displaystyle V} ; this may correspond, for instance, to a mass or charge distribution or to the refractive index of an inhomogeneous medium. If the scalar function is integrable, we can write its Fourier transform as ψ ( q ) = ∫ V ϕ ( r ) exp ⁡ ( − i q ⋅ r ) d r {\displaystyle \textstyle \psi (\mathbf {q} )=\int _{V}\phi (\mathbf {r} )\exp(-i\mathbf {q} \cdot \mathbf {r} )\,\mathrm {d} \mathbf {r} } . In the Born approximation the amplitude of the scattered wave corresponding to the scattering vector q {\displaystyle \mathbf {q} } is proportional to the Fourier transform ψ ( q ) {\displaystyle \textstyle \psi (\mathbf {q} )} . [ 1 ] When the system under study is composed of a number N {\displaystyle N} of identical constituents (atoms, molecules, colloidal particles, etc.) each of which has a distribution of mass or charge f ( r ) {\displaystyle f(\mathbf {r} )} then the total distribution can be considered the convolution of this function with a set of delta functions . with R j , j = 1 , … , N {\displaystyle \textstyle \mathbf {R} _{j},j=1,\,\ldots ,\,N} the particle positions as before. Using the property that the Fourier transform of a convolution product is simply the product of the Fourier transforms of the two factors, we have ψ ( q ) = f ( q ) × ∑ j = 1 N exp ⁡ ( − i q ⋅ R j ) {\displaystyle \textstyle \psi (\mathbf {q} )=f(\mathbf {q} )\times \sum _{j=1}^{N}\exp(-i\mathbf {q} \cdot \mathbf {R} _{j})} , so that: This is clearly the same as Equation ( 1 ) with all particles identical, except that here f {\displaystyle f} is shown explicitly as a function of q {\displaystyle \mathbf {q} } . In general, the particle positions are not fixed and the measurement takes place over a finite exposure time and with a macroscopic sample (much larger than the interparticle distance). 
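Equation (2) can be evaluated numerically for any finite set of identical scatterers. The following sketch sums the phase factors directly and normalizes by N; the positions and q values are arbitrary toy choices, not data from the article.

```python
# A minimal numerical sketch (example positions and q values are arbitrary
# assumptions) of Equation (2) for identical atoms:
#   S(q) = (1/N) |sum_j exp(-i q . R_j)|^2
import numpy as np

def structure_factor(positions, q):
    """positions: (N, 3) array of atomic positions; q: (3,) scattering vector."""
    phases = np.exp(-1j * positions @ q)      # one scattered wave per atom
    return np.abs(phases.sum()) ** 2 / len(positions)

# Ten atoms on a line with spacing a = 1.0 (a toy 1-D "crystal").
a = 1.0
positions = np.array([[j * a, 0.0, 0.0] for j in range(10)])

# At a reciprocal-lattice point q = 2*pi/a every term is in phase: S = N.
print(structure_factor(positions, np.array([2 * np.pi / a, 0.0, 0.0])))  # ~10
# Halfway between lattice points the waves cancel pairwise and S ~ 0.
print(structure_factor(positions, np.array([np.pi / a, 0.0, 0.0])))      # ~0
```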
The experimentally accessible intensity is thus an averaged one ⟨ I ( q ) ⟩ {\displaystyle \textstyle \langle I(\mathbf {q} )\rangle } ; we need not specify whether ⟨ ⋅ ⟩ {\displaystyle \langle \cdot \rangle } denotes a time or ensemble average . To take this into account we can rewrite Equation ( 3 ) as: In a crystal , the constitutive particles are arranged periodically, with translational symmetry forming a lattice . The crystal structure can be described as a Bravais lattice with a group of atoms, called the basis, placed at every lattice point; that is, [crystal structure] = [lattice] ∗ {\displaystyle \ast } [basis]. If the lattice is infinite and completely regular, the system is a perfect crystal . For such a system, only a set of specific values for q {\displaystyle \mathbf {q} } can give scattering, and the scattering amplitude for all other values is zero. This set of values forms a lattice, called the reciprocal lattice , which is the Fourier transform of the real-space crystal lattice. In principle the scattering factor S ( q ) {\displaystyle S(\mathbf {q} )} can be used to determine the scattering from a perfect crystal; in the simple case when the basis is a single atom at the origin (and again neglecting all thermal motion, so that there is no need for averaging) all the atoms have identical environments. Equation ( 1 ) can be written as The structure factor is then simply the squared modulus of the Fourier transform of the lattice, and shows the directions in which scattering can have non-zero intensity. At these values of q {\displaystyle \mathbf {q} } the wave from every lattice point is in phase. The value of the structure factor is the same for all these reciprocal lattice points, and the intensity varies only due to changes in f {\displaystyle f} with q {\displaystyle \mathbf {q} } . The units of the structure-factor amplitude depend on the incident radiation. For X-ray crystallography they are multiples of the unit of scattering by a single electron (2.82 × 10 − 15 {\displaystyle \times 10^{-15}} m); for neutron scattering by atomic nuclei the unit of scattering length of 10 − 14 {\displaystyle 10^{-14}} m is commonly used. The above discussion uses the wave vectors | k | = 2 π / λ {\displaystyle |\mathbf {k} |=2\pi /\lambda } and | q | = 4 π sin ⁡ θ / λ {\displaystyle |\mathbf {q} |=4\pi \sin \theta /\lambda } . However, crystallography often uses wave vectors | s | = 1 / λ {\displaystyle |\mathbf {s} |=1/\lambda } and | g | = 2 sin ⁡ θ / λ {\displaystyle |\mathbf {g} |=2\sin \theta /\lambda } . Therefore, when comparing equations from different sources, the factor 2 π {\displaystyle 2\pi } may appear and disappear, and care to maintain consistent quantities is required to get correct numerical results. In crystallography, the basis and lattice are treated separately. For a perfect crystal the lattice gives the reciprocal lattice , which determines the positions (angles) of diffracted beams, and the basis gives the structure factor F h k l {\displaystyle F_{hkl}} which determines the amplitude and phase of the diffracted beams: where the sum is over all atoms in the unit cell, x j , y j , z j {\displaystyle x_{j},y_{j},z_{j}} are the positional coordinates of the j {\displaystyle j} -th atom, and f j {\displaystyle f_{j}} is the scattering factor of the j {\displaystyle j} -th atom. 
[ 4 ] The coordinates x j , y j , z j {\displaystyle x_{j},y_{j},z_{j}} have the directions and dimensions of the lattice vectors a , b , c {\displaystyle \mathbf {a} ,\mathbf {b} ,\mathbf {c} } . That is, (0,0,0) is at the lattice point, the origin of position in the unit cell; (1,0,0) is at the next lattice point along a {\displaystyle \mathbf {a} } and (1/2, 1/2, 1/2) is at the body center of the unit cell. ( h k l ) {\displaystyle (hkl)} defines a reciprocal lattice point at ( h a ∗ , k b ∗ , l c ∗ ) {\displaystyle (h\mathbf {a^{*}} ,k\mathbf {b^{*}} ,l\mathbf {c^{*}} )} which corresponds to the real-space plane defined by the Miller indices ( h k l ) {\displaystyle (hkl)} (see Bragg's law ). F h k ℓ {\displaystyle F_{hk\ell }} is the vector sum of waves from all atoms within the unit cell. An atom at any lattice point has the reference phase angle zero for all h k ℓ {\displaystyle hk\ell } since then ( h x j + k y j + ℓ z j ) {\displaystyle (hx_{j}+ky_{j}+\ell z_{j})} is always an integer. A wave scattered from an atom at (1/2, 0, 0) will be in phase if h {\displaystyle h} is even, out of phase if h {\displaystyle h} is odd. Again an alternative view using convolution can be helpful. Since [crystal structure] = [lattice] ∗ {\displaystyle \ast } [basis], F {\displaystyle {\mathcal {F}}} [crystal structure] = F {\displaystyle {\mathcal {F}}} [lattice] × F {\displaystyle \times {\mathcal {F}}} [basis]; that is, scattering ∝ {\displaystyle \propto } [reciprocal lattice] × {\displaystyle \times } [structure factor]. For the body-centered cubic Bravais lattice ( cI ), we use the points ( 0 , 0 , 0 ) {\displaystyle (0,0,0)} and ( 1 2 , 1 2 , 1 2 ) {\displaystyle ({\tfrac {1}{2}},{\tfrac {1}{2}},{\tfrac {1}{2}})} which leads us to and hence The FCC lattice is a Bravais lattice, and its Fourier transform is a body-centered cubic lattice. However to obtain F h k ℓ {\displaystyle F_{hk\ell }} without this shortcut, consider an FCC crystal with one atom at each lattice point as a primitive or simple cubic with a basis of 4 atoms, at the origin x j , y j , z j = ( 0 , 0 , 0 ) {\displaystyle x_{j},y_{j},z_{j}=(0,0,0)} and at the three adjacent face centers, x j , y j , z j = ( 1 2 , 1 2 , 0 ) {\displaystyle x_{j},y_{j},z_{j}=\left({\frac {1}{2}},{\frac {1}{2}},0\right)} , ( 0 , 1 2 , 1 2 ) {\displaystyle \left(0,{\frac {1}{2}},{\frac {1}{2}}\right)} and ( 1 2 , 0 , 1 2 ) {\displaystyle \left({\frac {1}{2}},0,{\frac {1}{2}}\right)} . Equation ( 8 ) becomes with the result The most intense diffraction peak from a material that crystallizes in the FCC structure is typically the (111). Films of FCC materials like gold tend to grow in a (111) orientation with a triangular surface symmetry. A zero diffracted intensity for a group of diffracted beams (here, h , k , ℓ {\displaystyle h,k,\ell } of mixed parity) is called a systematic absence. The diamond cubic crystal structure occurs for example diamond ( carbon ), tin , and most semiconductors . There are 8 atoms in the cubic unit cell. We can consider the structure as a simple cubic with a basis of 8 atoms, at positions But comparing this to the FCC above, we see that it is simpler to describe the structure as FCC with a basis of two atoms at (0, 0, 0) and (1/4, 1/4, 1/4). 
For this basis, Equation ( 8 ) becomes: And then the structure factor for the diamond cubic structure is the product of this and the structure factor for FCC above, (only including the atomic form factor once) with the result These points are encapsulated by the following equations: where N {\displaystyle N} is an integer. The zincblende structure is similar to the diamond structure except that it is a compound of two distinct interpenetrating fcc lattices, rather than all the same element. Denoting the two elements in the compound by A {\displaystyle A} and B {\displaystyle B} , the resulting structure factor is Cesium chloride is a simple cubic crystal lattice with a basis of Cs at (0,0,0) and Cl at (1/2, 1/2, 1/2) (or the other way around, it makes no difference). Equation ( 8 ) becomes We then arrive at the following result for the structure factor for scattering from a plane ( h k ℓ ) {\displaystyle (hk\ell )} : and for scattered intensity, | F h k ℓ | 2 = { ( f C s + f C l ) 2 , h + k + ℓ even ( f C s − f C l ) 2 , h + k + ℓ odd {\displaystyle |F_{hk\ell }|^{2}={\begin{cases}(f_{Cs}+f_{Cl})^{2},&h+k+\ell &{\text{even}}\\(f_{Cs}-f_{Cl})^{2},&h+k+\ell &{\text{odd}}\end{cases}}} In an HCP crystal such as graphite , the two coordinates include the origin ( 0 , 0 , 0 ) {\displaystyle \left(0,0,0\right)} and the next plane up the c axis located at c /2, and hence ( 1 / 3 , 2 / 3 , 1 / 2 ) {\displaystyle \left(1/3,2/3,1/2\right)} , which gives us From this it is convenient to define dummy variable X ≡ h / 3 + 2 k / 3 + ℓ / 2 {\displaystyle X\equiv h/3+2k/3+\ell /2} , and from there consider the modulus squared so hence This leads us to the following conditions for the structure factor: The reciprocal lattice is easily constructed in one dimension: for particles on a line with a period a {\displaystyle a} , the reciprocal lattice is an infinite array of points with spacing 2 π / a {\displaystyle 2\pi /a} . In two dimensions, there are only five Bravais lattices . The corresponding reciprocal lattices have the same symmetry as the direct lattice. 2-D lattices are excellent for demonstrating simple diffraction geometry on a flat screen, as below. Equations (1)–(7) for structure factor S ( q ) {\displaystyle S(\mathbf {q} )} apply with a scattering vector of limited dimensionality and a crystallographic structure factor can be defined in 2-D as F h k = ∑ j = 1 N f j e [ − 2 π i ( h x j + k y j ) ] {\displaystyle F_{hk}=\sum _{j=1}^{N}f_{j}\mathrm {e} ^{[-2\pi i(hx_{j}+ky_{j})]}} . However, recall that real 2-D crystals such as graphene exist in 3-D. The reciprocal lattice of a 2-D hexagonal sheet that exists in 3-D space in the x y {\displaystyle xy} plane is a hexagonal array of lines parallel to the z {\displaystyle z} or z ∗ {\displaystyle z^{*}} axis that extend to ± ∞ {\displaystyle \pm \infty } and intersect any plane of constant z {\displaystyle z} in a hexagonal array of points. The Figure shows the construction of one vector of a 2-D reciprocal lattice and its relation to a scattering experiment. A parallel beam, with wave vector k i {\displaystyle \mathbf {k} _{i}} is incident on a square lattice of parameter a {\displaystyle a} . The scattered wave is detected at a certain angle, which defines the wave vector of the outgoing beam, k o {\displaystyle \mathbf {k} _{o}} (under the assumption of elastic scattering , | k o | = | k i | {\displaystyle |\mathbf {k} _{o}|=|\mathbf {k} _{i}|} ). 
One can equally define the scattering vector q = k o − k i {\displaystyle \mathbf {q} =\mathbf {k} _{o}-\mathbf {k} _{i}} and construct the harmonic pattern exp ⁡ ( i q r ) {\displaystyle \exp(i\mathbf {q} \mathbf {r} )} . In the depicted example, the spacing of this pattern coincides with the distance between particle rows: q = 2 π / a {\displaystyle q=2\pi /a} , so that contributions to the scattering from all particles are in phase (constructive interference). Thus, the total signal in direction k o {\displaystyle \mathbf {k} _{o}} is strong, and q {\displaystyle \mathbf {q} } belongs to the reciprocal lattice. It is easily shown that this configuration fulfills Bragg's law . Technically a perfect crystal must be infinite, so a finite size is an imperfection. Real crystals always exhibit imperfections in their order in addition to their finite size, and these imperfections can have profound effects on the properties of the material. André Guinier [ 5 ] proposed a widely employed distinction between imperfections that preserve the long-range order of the crystal, which he called disorder of the first kind , and those that destroy it, called disorder of the second kind . An example of the first is thermal vibration; an example of the second is some density of dislocations. The generally applicable structure factor S ( q ) {\displaystyle S(\mathbf {q} )} can be used to include the effect of any imperfection. In crystallography, these effects are treated as separate from the structure factor F h k l {\displaystyle F_{hkl}} , so separate factors for size or thermal effects are introduced into the expressions for scattered intensity, leaving the perfect crystal structure factor unchanged. Therefore, a detailed description of these factors in crystallographic structure modeling and structure determination by diffraction is not appropriate in this article. For S ( q ) {\displaystyle S(q)} a finite crystal means that the sums in equations 1-7 are now over a finite N {\displaystyle N} . The effect is most easily demonstrated with a 1-D lattice of points. The sum of the phase factors is a geometric series and the structure factor becomes: This function is shown in the Figure for different values of N {\displaystyle N} . When the scattering from every particle is in phase, which is when the scattering is at a reciprocal lattice point q = 2 k π / a {\displaystyle q=2k\pi /a} , the sum of the amplitudes must be ∝ N {\displaystyle \propto N} and so the maxima in intensity are ∝ N 2 {\displaystyle \propto N^{2}} . Taking the above expression for S ( q ) {\displaystyle S(q)} and estimating the limit S ( q → 0 ) {\displaystyle S(q\to 0)} (using, for instance, L'Hôpital's rule ), and noting that the expression is periodic in q {\displaystyle q} with period 2 π / a {\displaystyle 2\pi /a} , shows that S ( q = 2 k π / a ) = N {\displaystyle S(q=2k\pi /a)=N} as seen in the Figure. At the midpoint S ( q = ( 2 k + 1 ) π / a ) = 1 / N {\displaystyle S(q=(2k+1)\pi /a)=1/N} (by direct evaluation) and the peak width decreases like 1 / N {\displaystyle 1/N} . In the large N {\displaystyle N} limit, the peaks become infinitely sharp Dirac delta functions, the reciprocal lattice of the perfect 1-D lattice. In crystallography when F h k l {\displaystyle F_{hkl}} is used, N {\displaystyle N} is large, and the formal size effect on diffraction is taken as [ sin ⁡ ( N q a / 2 ) ( q a / 2 ) ] 2 {\displaystyle \left[{\frac {\sin(Nqa/2)}{(qa/2)}}\right]^{2}} , which is the same as the expression for S ( q ) {\displaystyle S(q)} above near to the reciprocal lattice points, q ≈ 2 k π / a {\displaystyle q\approx 2k\pi /a} .
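A minimal numerical sketch of this finite-size behaviour, assuming the conventional normalisation S(q) = |Σ_j exp(iq x_j)|²/N for N points x_j = ja (the displayed equation itself is not reproduced above), confirms that the peak height grows as N while the peak width shrinks as 1/N:

import numpy as np

a = 1.0

def S(q, N):
    """S(q) = |sum_j exp(i q x_j)|^2 / N for a finite 1-D lattice of points x_j = j*a."""
    x = np.arange(N) * a
    amp = np.exp(1j * np.outer(q, x)).sum(axis=1)
    return np.abs(amp) ** 2 / N

q = np.linspace(0.5 * 2 * np.pi / a, 1.5 * 2 * np.pi / a, 20001)   # window around one Bragg peak
for N in (10, 50, 250):
    s = S(q, N)
    peak = s.max()
    above_half = q[s > peak / 2]
    print(f"N = {N:4d}   peak S = {peak:7.1f} (about N)   "
          f"FWHM = {above_half.max() - above_half.min():.4f} (shrinks as 1/N)")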
Using convolution, we can describe the finite real crystal structure as [lattice] ∗ {\displaystyle \ast } [basis] × {\displaystyle \times } rectangular function , where the rectangular function has a value 1 inside the crystal and 0 outside it. Then F {\displaystyle {\mathcal {F}}} [crystal structure] = F {\displaystyle {\mathcal {F}}} [lattice] × F {\displaystyle \times {\mathcal {F}}} [basis] ∗ F {\displaystyle \ast {\mathcal {F}}} [rectangular function]; that is, scattering ∝ {\displaystyle \propto } [reciprocal lattice] × {\displaystyle \times } [structure factor] ∗ {\displaystyle \ast } [ sinc function]. Thus the intensity, which is a delta function of position for the perfect crystal, becomes a sinc 2 {\textstyle \operatorname {sinc} ^{2}} function around every point with a maximum ∝ N 2 {\displaystyle \propto N^{2}} , a width ∝ 1 / N {\displaystyle \propto 1/N} , and an area ∝ N {\displaystyle \propto N} . This model for disorder in a crystal starts from the structure factor of a perfect crystal. In one dimension for simplicity, and with N planes, we start from the expression above for a perfect finite lattice; this disorder only changes S ( q ) {\displaystyle S(q)} by a multiplicative factor, to give [ 1 ] where the disorder is measured by the mean-square displacement of the positions x j {\displaystyle x_{j}} from their positions in a perfect one-dimensional lattice: a ( j − ( N − 1 ) / 2 ) {\displaystyle a(j-(N-1)/2)} , i.e., x j = a ( j − ( N − 1 ) / 2 ) + δ x {\displaystyle x_{j}=a(j-(N-1)/2)+\delta x} , where δ x {\displaystyle \delta x} is a small (much less than a {\displaystyle a} ) random displacement. For disorder of the first kind, each random displacement δ x {\displaystyle \delta x} is independent of the others and is defined with respect to a perfect lattice. Thus the displacements δ x {\displaystyle \delta x} do not destroy the translational order of the crystal. This has the consequence that for infinite crystals ( N → ∞ {\displaystyle N\to \infty } ) the structure factor still has delta-function Bragg peaks – the peak width still goes to zero as N → ∞ {\displaystyle N\to \infty } , with this kind of disorder. However, it does reduce the amplitude of the peaks, and due to the factor of q 2 {\displaystyle q^{2}} in the exponential factor, it reduces peaks at large q {\displaystyle q} much more than peaks at small q {\displaystyle q} . The structure factor is simply reduced by a q {\displaystyle q} - and disorder-dependent term, because all that disorder of the first kind does is smear out the scattering planes, effectively reducing the form factor. In three dimensions the effect is the same: the structure factor is again reduced by a multiplicative factor, and this factor is often called the Debye–Waller factor . Note that the Debye–Waller factor is often ascribed to thermal motion, i.e., the δ x {\displaystyle \delta x} are due to thermal motion, but any random displacements about a perfect lattice, not just thermal ones, will contribute to the Debye–Waller factor.
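A minimal numerical sketch of this first-kind disorder, assuming independent Gaussian displacements of standard deviation sigma about a 1-D lattice and the same normalisation of S(q) as above, compares the disorder-averaged Bragg intensities with the one-dimensional Debye–Waller estimate exp(−q²σ²) and shows the stronger suppression at larger q:

import numpy as np

rng = np.random.default_rng(0)
a, N, sigma = 1.0, 400, 0.05          # lattice spacing, number of planes, rms displacement
q_bragg = np.array([2 * np.pi / a, 4 * np.pi / a, 6 * np.pi / a])

def S(q, x):
    """S(q) = |sum_j exp(i q x_j)|^2 / N for point scatterers at positions x."""
    amp = np.exp(1j * np.outer(q, x)).sum(axis=1)
    return np.abs(amp) ** 2 / len(x)

perfect = np.arange(N) * a
S_perfect = S(q_bragg, perfect)

# Average over many independent realisations of the (first-kind) disorder.
S_disordered = np.mean(
    [S(q_bragg, perfect + rng.normal(0.0, sigma, N)) for _ in range(200)], axis=0)

for q, s0, s1 in zip(q_bragg, S_perfect, S_disordered):
    print(f"q = {q:5.2f}: peak reduced to {s1 / s0:.3f} of its perfect-crystal value; "
          f"exp(-q^2 sigma^2) = {np.exp(-q**2 * sigma**2):.3f}")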
However, fluctuations that cause the correlations between pairs of atoms to decrease as their separation increases cause the Bragg peaks in the structure factor of a crystal to broaden. To see how this works, we consider a one-dimensional toy model: a stack of plates with mean spacing a {\displaystyle a} . The derivation follows that in chapter 9 of Guinier's textbook. [ 6 ] This model was pioneered by Hosemann and collaborators, [ 7 ] who applied it to a number of materials over many years. Guinier and Hosemann termed this disorder of the second kind , and Hosemann in particular referred to this imperfect crystalline ordering as paracrystalline ordering. Disorder of the first kind is the source of the Debye–Waller factor . To derive the model we start with the definition (in one dimension) of the structure factor. To start with we will consider, for simplicity, an infinite crystal, i.e., N → ∞ {\displaystyle N\to \infty } . We will consider a finite crystal with disorder of the second kind below. For each plane of an infinite crystal, there are two neighbours m {\displaystyle m} planes away, so the above double sum becomes a single sum over pairs of neighbours on either side of an atom, at positions − m {\displaystyle -m} and m {\displaystyle m} lattice spacings away, times N {\displaystyle N} . So, then where p m ( Δ x ) {\displaystyle p_{m}(\Delta x)} is the probability density function for the separation Δ x {\displaystyle \Delta x} of a pair of planes, m {\displaystyle m} lattice spacings apart. For the separation of neighbouring planes we assume for simplicity that the fluctuations around the mean neighbour spacing a are Gaussian, i.e., that and we also assume that the fluctuations between a plane and its neighbour, and between this neighbour and the next plane, are independent. Then p 2 ( Δ x ) {\displaystyle p_{2}(\Delta x)} is just the convolution of two p 1 ( Δ x ) {\displaystyle p_{1}(\Delta x)} s, etc. As the convolution of two Gaussians is just another Gaussian, we have that The sum in S ( q ) {\displaystyle S(q)} is then just a sum of Fourier transforms of Gaussians, and so for r = exp ⁡ [ − q 2 σ 2 2 / 2 ] {\displaystyle r=\exp[-q^{2}\sigma _{2}^{2}/2]} . The sum is just the real part of the sum ∑ m = 1 ∞ [ r exp ⁡ ( i q a ) ] m {\displaystyle \sum _{m=1}^{\infty }[r\exp(iqa)]^{m}} and so the structure factor of the infinite but disordered crystal is This has maxima at q P = 2 n π / a {\displaystyle q_{P}=2n\pi /a} , where cos ⁡ ( q P a ) = 1 {\displaystyle \cos(q_{P}a)=1} . These peaks have heights i.e., the height of successive peaks drops off as the order of the peak (and so q {\displaystyle q} ) squared. Unlike finite-size effects that broaden peaks but do not decrease their height, disorder lowers peak heights. Note that here we are assuming that the disorder is relatively weak, so that we still have relatively well-defined peaks. This is the limit q σ 2 ≪ 1 {\displaystyle q\sigma _{2}\ll 1} , where r ≃ 1 − q 2 σ 2 2 / 2 {\displaystyle r\simeq 1-q^{2}\sigma _{2}^{2}/2} . In this limit, near a peak we can approximate cos ⁡ ( q a ) ≃ 1 − ( Δ q ) 2 a 2 / 2 {\displaystyle \cos(qa)\simeq 1-(\Delta q)^{2}a^{2}/2} , with Δ q = q − q P {\displaystyle \Delta q=q-q_{P}} and obtain which is a Lorentzian or Cauchy function , of FWHM q P 2 σ 2 2 / a = 4 π 2 n 2 ( σ 2 / a ) 2 / a {\displaystyle q_{P}^{2}\sigma _{2}^{2}/a=4\pi ^{2}n^{2}(\sigma _{2}/a)^{2}/a} , i.e., the FWHM increases as the square of the order of the peak, and so as the square of the wave vector q {\displaystyle q} at the peak. Finally, the product of the peak height and the FWHM is constant and equals 4 / a {\displaystyle 4/a} , in the q σ 2 ≪ 1 {\displaystyle q\sigma _{2}\ll 1} limit. For the first few peaks where n {\displaystyle n} is not large, this is just the σ 2 / a ≪ 1 {\displaystyle \sigma _{2}/a\ll 1} limit.
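Summing the geometric series described above (taking S(q) = 1 + 2 Re Σ_m [r exp(iqa)]^m) gives the closed form S(q) = (1 − r²)/(1 − 2r cos(qa) + r²); a minimal numerical sketch using this form checks the stated peak behaviour, with heights falling off roughly as the square of the peak order, widths growing as its square, and their product staying close to 4/a:

import numpy as np

a, sigma2 = 1.0, 0.02     # mean plane spacing and rms nearest-neighbour spacing fluctuation

def S_paracrystal(q):
    """Closed form of the geometric sum: S(q) = (1 - r^2) / (1 - 2 r cos(qa) + r^2),
    with r = exp(-q^2 sigma2^2 / 2)."""
    r = np.exp(-q**2 * sigma2**2 / 2)
    return (1 - r**2) / (1 - 2 * r * np.cos(q * a) + r**2)

for n in (1, 2, 3, 4):
    q_peak = 2 * np.pi * n / a
    q = np.linspace(q_peak - 0.5, q_peak + 0.5, 200001)
    s = S_paracrystal(q)
    peak = s.max()
    above_half = q[s > peak / 2]
    fwhm = above_half.max() - above_half.min()
    print(f"order n = {n}: height {peak:8.1f} (~1/n^2), FWHM {fwhm:.5f} (~n^2), "
          f"height*FWHM = {peak * fwhm:.3f} (about 4/a)")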
For a one-dimensional crystal of size N {\displaystyle N} where the factor in parentheses comes from the fact that the sum is over nearest-neighbour pairs ( m = 1 {\displaystyle m=1} ), next nearest-neighbours ( m = 2 {\displaystyle m=2} ), ... and for a crystal of N {\displaystyle N} planes, there are N − 1 {\displaystyle N-1} pairs of nearest neighbours, N − 2 {\displaystyle N-2} pairs of next-nearest neighbours, etc. In contrast with crystals, liquids have no long-range order (in particular, there is no regular lattice), so the structure factor does not exhibit sharp peaks. They do, however, show a certain degree of short-range order , depending on their density and on the strength of the interaction between particles. Liquids are isotropic, so that, after the averaging operation in Equation ( 4 ), the structure factor only depends on the absolute magnitude of the scattering vector q = | q | {\displaystyle q=\left|\mathbf {q} \right|} . For further evaluation, it is convenient to separate the diagonal terms j = k {\displaystyle j=k} in the double sum, whose phase is identically zero, and therefore each contributes a unit constant: One can obtain an alternative expression for S ( q ) {\displaystyle S(q)} in terms of the radial distribution function g ( r ) {\displaystyle g(r)} : [ 8 ] In the limiting case of no interaction, the system is an ideal gas and the structure factor is completely featureless: S ( q ) = 1 {\displaystyle S(q)=1} , because there is no correlation between the positions R j {\displaystyle \mathbf {R} _{j}} and R k {\displaystyle \mathbf {R} _{k}} of different particles (they are independent random variables ), so the off-diagonal terms in Equation ( 9 ) average to zero: ⟨ exp ⁡ [ − i q ( R j − R k ) ] ⟩ = ⟨ exp ⁡ ( − i q R j ) ⟩ ⟨ exp ⁡ ( i q R k ) ⟩ = 0 {\displaystyle \langle \exp[-i\mathbf {q} (\mathbf {R} _{j}-\mathbf {R} _{k})]\rangle =\langle \exp(-i\mathbf {q} \mathbf {R} _{j})\rangle \langle \exp(i\mathbf {q} \mathbf {R} _{k})\rangle =0} . Even for interacting particles, at high scattering vector the structure factor goes to 1. This result follows from Equation ( 10 ), since S ( q ) − 1 {\displaystyle S(q)-1} is the Fourier transform of the "regular" function g ( r ) {\displaystyle g(r)} and thus goes to zero for high values of the argument q {\displaystyle q} . This reasoning does not hold for a perfect crystal, where the distribution function exhibits infinitely sharp peaks. In the low- q {\displaystyle q} limit, as the system is probed over large length scales, the structure factor contains thermodynamic information, being related to the isothermal compressibility χ T {\displaystyle \chi _{T}} of the liquid by the compressibility equation : In the hard sphere model, the particles are described as impenetrable spheres with radius R {\displaystyle R} ; thus, their center-to-center distance r ≥ 2 R {\displaystyle r\geq 2R} and they experience no interaction beyond this distance. Their interaction potential can be written as: This model has an analytical solution [ 9 ] in the Percus–Yevick approximation . Although highly simplified, it provides a good description for systems ranging from liquid metals [ 10 ] to colloidal suspensions. [ 11 ] In an illustration, the structure factor for a hard-sphere fluid is shown in the Figure, for volume fractions Φ {\displaystyle \Phi } from 1% to 40%.
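As a concrete illustration of the relation between S(q) and g(r), a minimal sketch below evaluates the standard isotropic form S(q) = 1 + 4πρ ∫ r²(g(r) − 1) sin(qr)/(qr) dr (assumed here, since the displayed equations are not reproduced above) for the crudest dilute-limit hard-sphere model, g(r) = 0 inside the core and 1 outside (a placeholder, not the Percus–Yevick solution mentioned above). The output shows a reduced S(q) at small q, a weak maximum near qd ≈ 2π, and the approach of S(q) to 1 at large q:

import numpy as np

d = 1.0                                   # hard-sphere diameter (toy value)
phi = 0.05                                # packing fraction, kept small for the crude g(r) below
rho = 6.0 * phi / (np.pi * d**3)

# Dilute-limit placeholder for g(r): zero inside the core, one outside.
r = np.linspace(1e-6, 40 * d, 40000)
dr = r[1] - r[0]
g = np.where(r < d, 0.0, 1.0)

def S_of_q(q):
    """Isotropic relation: S(q) = 1 + 4*pi*rho * integral of r^2 (g(r)-1) sin(qr)/(qr) dr."""
    integrand = r**2 * (g - 1.0) * np.sinc(q * r / np.pi)    # np.sinc(x) = sin(pi*x)/(pi*x)
    return 1.0 + 4.0 * np.pi * rho * integrand.sum() * dr

for q in (0.2, 2.0, 2 * np.pi / d, 20.0, 80.0):
    print(f"q*d = {q * d:6.2f}   S(q) = {S_of_q(q):6.3f}")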
In polymer systems, the general definition ( 4 ) holds; the elementary constituents are now the monomers making up the chains. However, since the structure factor is a measure of the correlation between particle positions, one can reasonably expect this correlation to be different for monomers belonging to the same chain and for monomers belonging to different chains. Let us assume that the volume V {\displaystyle V} contains N c {\displaystyle N_{c}} identical molecules, each composed of N p {\displaystyle N_{p}} monomers, such that N c N p = N {\displaystyle N_{c}N_{p}=N} ( N p {\displaystyle N_{p}} is also known as the degree of polymerization ). We can rewrite ( 4 ) as: where indices α , β {\displaystyle \alpha ,\beta } label the different molecules and j , k {\displaystyle j,k} the different monomers along each molecule. On the right-hand side we separated intramolecular ( α = β {\displaystyle \alpha =\beta } ) and intermolecular ( α ≠ β {\displaystyle \alpha \neq \beta } ) terms. Using the equivalence of the chains, ( 11 ) can be simplified: [ 12 ] where S 1 ( q ) {\displaystyle S_{1}(q)} is the single-chain structure factor.
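For a feel for the single-chain structure factor, a minimal sketch below estimates S1(q) for an ideal random-walk chain by averaging over generated conformations, and compares it with the Debye function 2(exp(−x) − 1 + x)/x² with x = q²Rg², the standard textbook result for an ideal Gaussian chain (quoted here as background rather than taken from the text above); the two agree to within the statistical error of the averaging:

import numpy as np

rng = np.random.default_rng(1)
Np, b, n_conf = 200, 1.0, 500             # monomers per chain, bond length, conformations averaged

def single_chain_S(q):
    """Estimate S1(q)/Np = <|sum_j exp(i q z_j)|^2> / Np^2 for ideal random-walk chains.
    q is taken along z; averaging over chains is equivalent to orientational averaging here."""
    total = 0.0
    for _ in range(n_conf):
        # z-components of Gaussian bonds whose mean-square 3-D length is b^2.
        steps = rng.normal(0.0, b / np.sqrt(3.0), size=Np - 1)
        z = np.concatenate(([0.0], np.cumsum(steps)))
        total += abs(np.exp(1j * q * z).sum()) ** 2
    return total / (n_conf * Np**2)

Rg2 = Np * b**2 / 6.0                     # ideal-chain radius of gyration squared (large-Np limit)

def debye(x):
    return 2.0 * (np.exp(-x) - 1.0 + x) / x**2

for q in (0.05, 0.1, 0.2, 0.4):
    print(f"q = {q:4.2f}   simulated S1/Np = {single_chain_S(q):.3f}   "
          f"Debye function = {debye(q**2 * Rg2):.3f}")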
https://en.wikipedia.org/wiki/Structure_factor
Structure field maps (SFMs) or structure maps are visualizations of the relationship between ionic radii and crystal structures for representing classes of materials. [ 1 ] The SFM and its extensions have found broad applications in geochemistry , mineralogy , chemical synthesis of materials, and nowadays in materials informatics . The intuitive concept of the SFMs led to different versions of the visualization method established in different domains of materials science. The structure field map was first introduced in 1954 by MacKenzie L. Keith and Rustum Roy to classify structural prototypes for the oxide perovskites of the chemical formula ABO 3 . [ 2 ] It was later popularized by a compiled handbook written by Olaf Muller and Rustum Roy , published in 1974, that included many more known materials. [ 3 ] [ 4 ] A structure field map is typically two-dimensional, although higher dimensional versions are feasible. The axes in an SFM are sequences of ionic radii. For example, in oxide perovskites ABO 3 , where A and B represent two metallic cations , the two axes are the ionic radii of the A-site and B-site cations. SFMs are constructed according to the oxidation states of the constituent cations. For perovskites of the type ABO 3 , three ways of pairing the cations exist: A 3+ B 3+ O 3 , A 2+ B 4+ O 3 , and A 1+ B 5+ O 3 ; therefore, three different SFMs exist, one for each pair of cation oxidation states. [ 4 ]
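A minimal sketch of how such a map can be drawn with matplotlib is shown below; the ionic radii and structure labels in it are hypothetical placeholders rather than values taken from the Muller and Roy handbook:

import matplotlib.pyplot as plt

# Hypothetical placeholder entries for an A(2+)B(4+)O3 structure field map.
# (radius_A, radius_B) pairs in angstroms; labels and values are illustrative only.
fields = {
    "perovskite": [(1.44, 0.60), (1.35, 0.68), (1.34, 0.61)],
    "ilmenite":   [(0.83, 0.60), (0.89, 0.53)],
    "pyroxene":   [(0.72, 0.40), (0.74, 0.44)],
}

fig, ax = plt.subplots()
for structure, points in fields.items():
    r_a, r_b = zip(*points)
    ax.scatter(r_b, r_a, label=structure)           # one axis per cation site
ax.set_xlabel("B-site ionic radius (Å)")
ax.set_ylabel("A-site ionic radius (Å)")
ax.set_title("Structure field map for A(2+)B(4+)O3 oxides (illustrative data)")
ax.legend()
plt.show()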
https://en.wikipedia.org/wiki/Structure_field_map
The structure function , like the fragmentation function, is a probability density function in physics. It is somewhat analogous to the structure factor in solid-state physics , and the form factor (quantum field theory) . The nucleon (proton and neutron) electromagnetic form factors describe the spatial distributions of electric charge and current inside the nucleon and thus are intimately related to its internal structure; these form factors are among the most basic observables of the nucleon . (Nucleons are the building blocks of almost all ordinary matter in the universe. The challenge of understanding the nucleon's structure and dynamics has occupied a central place in nuclear physics.) The structure functions are important in the study of deep inelastic scattering . [ 1 ] [ 2 ] [ 3 ] The fundamental understanding of structure functions in terms of QCD is one of the outstanding problems in hadron physics. Why do quarks form colourless hadrons with only two stable configurations, proton and neutron ? One important step towards answering this question is to characterize the internal structure of the nucleon. High energy electron scattering provides one of the most powerful tools to investigate this structure. This scattering –related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Structure_function
In artificial intelligence and cognitive science , the structure mapping engine ( SME ) is an implementation in software of an algorithm for analogical matching based on the psychological theory of Dedre Gentner . The basis of Gentner's structure-mapping idea is that an analogy is a mapping of knowledge from one domain (the base) into another (the target). The structure-mapping engine is a computer simulation of the analogy and similarity comparisons. [ 1 ] The theory is useful because it ignores surface features and finds matches between potentially very different things if they have the same representational structure. For example, SME could determine that a pen is like a sponge because both are involved in dispensing liquid, even though they do this very differently. Structure mapping theory is based on the systematicity principle, which states that connected knowledge is preferred over independent facts. Therefore, the structure mapping engine should ignore isolated source-target mappings unless they are part of a bigger structure. The SME, the theory goes, should map objects that are related to knowledge that has already been mapped. The theory also requires that mappings be done one-to-one , which means that no part of the source description can map to more than one item in the target and no part of the target description can be mapped to more than one part of the source. The theory also requires that if a match maps subject to target, the arguments of subject and target must also be mapped. If both these conditions are met, the mapping is said to be "structurally consistent." SME maps knowledge from a source into a target. SME calls each description a dgroup. Dgroups contain a list of entities and predicates . Entities represent the objects or concepts in a description — such as an input gear or a switch. Predicates are one of three types and are a general way to express knowledge for SME. Functions and attributes have different meanings, and consequently SME processes them differently. For example, in SME's true analogy rule set, attributes differ from functions because they cannot match unless there is a higher-order match between them. The difference between attributes and functions will be explained further in this section's examples. All predicates have four parameters. They have (1) a functor, which identifies it, and (2) a type, which is either relation, attribute, or function. The other two parameters (3 and 4) are for determining how to process the arguments in the SME algorithm . If the arguments have to be matched in order, commutative is false. If the predicate can take any number of arguments, N-ary is false. An example of a predicate definition is: (sme:defPredicate behavior-set (predicate) relation :n-ary? t :commutative? t) The predicate's functor is “behavior-set,” its type is “relation,” and its n-ary and commutative parameters are both set to true. The “(predicate)” part of the definition specifies that there will be one or more predicates inside an instantiation of behavior-set. The algorithm has several steps. [ 2 ] The first step of the algorithm is to create a set of match hypotheses between source and target dgroups. A match hypothesis represents a possible mapping between any part of the source and the target. This mapping is controlled by a set of match rules. By changing the match rules, one can change the type of reasoning SME does. 
For example, one set of match rules may perform a kind of analogy called literal similarity, and another performs a kind of analogy called true-analogy. These rules are not the place where domain-dependent information is added, but rather where the analogy process is tweaked, depending on the type of cognitive function the user is trying to emulate. For a given match rule, there are two types of rules that further define how it will be applied: filter rules and intern rules. Intern rules use only the arguments of the expressions in the match hypotheses that the filter rules identify. This limitation makes the processing more efficient by constraining the number of match hypotheses that are generated. At the same time, it also helps to build the structural consistencies that are needed later on in the algorithm. An example of a filter rule from the true-analogy rule set creates match hypotheses between predicates that have the same functor. The true-analogy rule set has an intern rule that iterates over the arguments of any match hypothesis, creating more match hypotheses if the arguments are entities or functions, or if the arguments are attributes and have the same functor. In order to illustrate how the match rules produce match hypotheses, consider these two predicates: transmit torque inputgear secondgear (p1) transmit signal switch div10 (p2) Here we use true analogy for the type of reasoning. The filter match rule generates a match between p1 and p2 because they share the same functor, transmit. The intern rules then produce three more match hypotheses: torque to signal, inputgear to switch, and secondgear to div10. The intern rules created these match hypotheses because all the arguments were entities. If the arguments were functions or attributes instead of entities, the predicates would be expressed as: transmit torque (inputgear gear) (secondgear gear) (p3) transmit signal (switch circuit) (div10 circuit) (p4) These additional predicates make inputgear, secondgear, switch, and div10 functions or attributes depending on the value defined in the language input file. The representation also contains additional entities for gear and circuit. Depending on what type inputgear, secondgear, switch, and div10 are, their meanings change. As attributes, each one is a property of the gear or circuit. For example, the gear has two attributes, inputgear and secondgear. The circuit has two attributes, switch and div10. As functions, inputgear, secondgear, switch, and div10 become quantities of the gear and circuit. In this example, the functions inputgear and secondgear now map to the numerical quantities “torque from inputgear” and “torque from secondgear.” For the circuit, the quantities map to the logical quantity “switch engaged” and the numerical quantity “current count on the divide by 10 counter.” SME processes these differently. It does not allow attributes to match unless they are part of a higher-order relation, but it does allow functions to match, even if they are not part of such a relation. It allows functions to match because they indirectly refer to entities and thus should be treated like relations that involve no entities. However, as the next section shows, the intern rules assign lower weights to matches between functions than to matches between relations. The reason SME does not match attributes is that it is trying to create connected knowledge based on relationships and thus satisfy the systematicity principle.
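A toy sketch of the two true-analogy rules just described (a simplified stand-in, not the actual SME implementation, which operates on LISP dgroups): the filter rule proposes a match between expressions sharing a functor, and the intern rule then aligns their arguments, reproducing the four hypotheses listed above for p1 and p2.

# Toy illustration only; expressions are tuples (functor, arg1, arg2, ...),
# and bare strings stand for entities.

def match_hypotheses(source_exprs, target_exprs):
    hypotheses = []
    for s in source_exprs:
        for t in target_exprs:
            # Filter rule (true analogy): expressions with identical functors may correspond.
            if s[0] == t[0]:
                hypotheses.append((s, t))
                # Intern rule: also propose matches between corresponding arguments.
                for s_arg, t_arg in zip(s[1:], t[1:]):
                    hypotheses.append((s_arg, t_arg))
    return hypotheses

p1 = ("transmit", "torque", "inputgear", "secondgear")
p2 = ("transmit", "signal", "switch", "div10")

for source_item, target_item in match_hypotheses([p1], [p2]):
    print(source_item, "<->", target_item)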
For example, if both a clock and a car have inputgear attributes, SME will not mark them as similar. If it did, it would be making a match between the clock and car based on their appearance — not on the relationships between them. When the additional predicates in p3 and p4 are functions, the results from matching p3 and p4 are similar to the results from p1 and p2 except there is an additional match between gear and circuit and the values for the match hypotheses between (inputgear gear) and (switch circuit), and (secondgear gear) and (div10 circuit), are lower. The next section describes the reason for this in more detail. If the inputgear, secondgear, switch, and div10 are attributes instead of entities, SME does not find matches between any of the attributes. It finds matches only between the transmit predicates and between torque and signal. Additionally, the structural-evaluation scores for the remaining two matches decrease. In order to get the two predicates to match, p3 would need to be replaced by p5, which is demonstrated below. transmit torque (inputgear gear) (div10 gear) (p5) Since the true-analogy rule set identifies that the div10 attributes are the same between p5 and p4 and because the div10 attributes are both part of the higher-relation match between torque and signal, SME makes a match between (div10 gear) and (div10 circuit) — which leads to a match between gear and circuit. Being part of a higher-order match is a requirement only for attributes. For example, if (div10 gear) and (div10 circuit) are not part of a higher-order match, SME does not create a match hypothesis between them. However, if div10 is a function or relation, SME does create a match. Once the match hypotheses are generated, SME needs to compute an evaluation score for each hypothesis. SME does so by using a set of intern match rules to calculate positive and negative evidence for each match. Multiple amounts of evidence are correlated using Dempster's rule [Shafer, 1978] resulting in positive and negative belief values between 0 and 1. The match rules assign different values for matches involving functions and relations. These values are programmable, however, and some default values that can be used to enforce the systematicity principle are described in [Falkenhainer et al., 1989]. These rules are: In the example match between p1 and p2, SME gives the match between the transmit relations a positive evidence value of 0.7900, and the others get values of 0.6320. The transmit relation receives the evidence value of 0.7900 because it gains evidence from rules 1, 3, and 2. The other matches get a value of 0.6320 because 0.8 of the evidence from the transmit is propagated to these matches because of rule 5. For predicates p3 and p4, SME assigns less evidence because the arguments of the transmit relations are functions. The transmit relation gets positive evidence of 0.65 because rule 3 no longer adds evidence. The match between (input gear) and (switch circuit) becomes 0.7120. This match gets 0.4 evidence because of rule 3, and 0.52 evidence propagated from the transmit relation because of rule 5. When the predicates in p3 and p4 are attributes, rule 4 adds -0.8 evidence to the transmit match because — though the functors of the transmit relation match — the arguments do not have the potential to match and the arguments are not functions. To summarize, the intern match rules compute a structural evaluation score for each match hypothesis. These rules enforce the systematicity principle. 
Rule 5 provides trickle-down evidence in order to strengthen matches that are involved in higher-order relations. Rules 1, 3, and 4 add or subtract support for relations that could have matching arguments. Rule 2 adds support for the cases when the functors match, thereby adding support for matches that emphasize relationships. The rules also enforce the difference between attributes, functions, and relations. For example, they have checks which give less evidence for functions than for relations. Attributes are not specifically dealt with by the intern match rules, but SME's filter rules ensure that they will only be considered for these rules if they are part of a higher-order relation, and rule 2 ensures that attributes will only match if they have identical functors. The rest of the SME algorithm is involved in creating maximally consistent sets of match hypotheses. These sets are called gmaps. SME must ensure that any gmaps that it creates are structurally consistent; in other words, that they are one-to-one — such that no source maps to multiple targets and no target is mapped to multiple sources. The gmaps must also have support, which means that if a match hypothesis is in the gmap, then so are the match hypotheses that involve the source and target items. The gmap creation process follows two steps. First, SME computes information about each match hypothesis — including entity mappings, any conflicts with other hypotheses, and which other match hypotheses it might be structurally inconsistent with. SME then uses this information to merge match hypotheses — using a greedy algorithm and the structural evaluation score. It merges the match hypotheses into maximally structurally consistent connected graphs of match hypotheses. Then it combines gmaps that have overlapping structure if they are structurally consistent. Finally, it combines independent gmaps together while maintaining structural consistency. Comparing a source to a target dgroup may produce one or more gmaps. The weight for each gmap is the sum of all the positive evidence values for all the match hypotheses involved in the gmap. For example, if a source containing p1 and p6, shown below, is compared to a target containing p2, SME will generate two gmaps. Both gmaps have a weight of 2.9186. Source: transmit torque inputgear secondgear (p1) transmit torque secondgear thirdgear (p6) Target: transmit signal switch div10 (p2) These are the gmaps which result from comparing a source containing p1 and p6 and a target containing p2. Gmap No. 1: Gmap No. 2 : The gmaps show pairs of predicates or entities that match. For example, in gmap No. 1, the entities torque and signal match and the behaviors transmit torque inputgear secondgear and transmit signal switch div10 match. Gmap No. 1 represents combining p1 and p2. Gmap No. 2 represents combining p6 and p2. Although p2 is compatible with both p1 and p6, the one-to-one mapping constraint means that both mappings cannot be in the same gmap. Therefore, SME produces two independent gmaps. In addition, combining the two gmaps together would make the entity mapping between thirdgear and div10 conflict with the entity mapping between secondgear and div10. Chalmers, French, and Hofstadter [1992] criticize SME for its reliance on manually constructed LISP representations as input. They argue that too much human creativity is required to construct these representations; the intelligence comes from the design of the input, not from SME. Forbus et al.
[1998] attempted to rebut this criticism. Morrison and Dietrich [1995] tried to reconcile the two points of view. Turney [2008] presents an algorithm that does not require LISP input, yet follows the principles of Structure Mapping Theory. Turney [2008] states that this work, too, is not immune to the criticism of Chalmers, French, and Hofstadter [1992]. In her article How Creative Ideas Take Shape, [ 3 ] Liane Gabora writes "According to the honing theory of creativity, creative thought works not on individually considered, discrete, predefined representations but on a contextually-elicited amalgam of items which exist in a state of potentiality and may not be readily separable. This leads to the prediction that analogy making proceeds not by mapping correspondences from candidate sources to target, as predicted by the structure mapping theory of analogy, but by weeding out non-correspondences, thereby whittling away at potentiality."
https://en.wikipedia.org/wiki/Structure_mapping_engine
In computing , the Structure of Management Information (SMI) , an adapted subset of ASN.1 , is a technical language used in definitions of Simple Network Management Protocol (SNMP) and its extensions to define sets ("modules") of related managed objects in a Management Information Base (MIB). SMI subdivides into three parts: module definitions, object definitions, and notification definitions. This computing article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Structure_of_Management_Information
The structure of liquids , glasses and other non-crystalline solids is characterized by the absence of long-range order which defines crystalline materials. Liquids and amorphous solids do, however, possess a rich and varied array of short to medium range order, which originates from chemical bonding and related interactions. Metallic glasses , for example, are typically well described by the dense random packing of hard spheres, whereas covalent systems, such as silicate glasses , have sparsely packed, strongly bound, tetrahedral network structures. These very different structures result in materials with very different physical properties and applications. The study of liquid and glass structure aims to gain insight into their behavior and physical properties, so that they can be understood, predicted and tailored for specific applications. Since the structure and resulting behavior of liquids and glasses is a complex many body problem , historically it has been too computationally intensive to solve using quantum mechanics directly. Instead, a variety of diffraction , nuclear magnetic resonance (NMR), molecular dynamics , and Monte Carlo simulation techniques are most commonly used. The pair distribution function (or pair correlation function) of a material describes the probability of finding an atom at a separation r from another atom. A typical plot of g versus r of a liquid or glass shows a number of key features: The static structure factor , S(q) , which can be measured with diffraction techniques, is related to its corresponding g(r) by Fourier transformation where q is the magnitude of the momentum transfer vector, and ρ is the number density of the material. Like g(r) , the S(q) patterns of liquids and glasses have a number of key features: The absence of long-range order in liquids and glasses is evidenced by the absence of Bragg peaks in X-ray and neutron diffraction . For these isotropic materials, the diffraction pattern has circular symmetry, and in the radial direction, the diffraction intensity has a smooth oscillatory shape. This diffracted intensity is usually analyzed to give the static structure factor , S(q) , where q is given by q =4πsin(θ)/λ, where 2θ is the scattering angle (the angle between the incident and scattered quanta), and λ is the incident wavelength of the probe (photon or neutron). Typically diffraction measurements are performed at a single (monochromatic) λ, and diffracted intensity is measured over a range of 2θ angles, to give a wide range of q . Alternatively a range of λ, may be used, allowing the intensity measurements to be taken at a fixed or narrow range of 2θ. In x-ray diffraction, such measurements are typically called "energy dispersive", whereas in neutron diffraction this is normally called "time-of-flight" reflecting the different detection methods used. Once obtained, an S(q) pattern can be Fourier transformed to provide a corresponding radial distribution function (or pair correlation function), denoted in this article as g(r) . For an isotropic material, the relation between S(q) and its corresponding g(r) is The g(r) , which describes the probability of finding an atom at a separation r from another atom, provides a more intuitive description of the atomic structure. The g(r) pattern obtained from a diffraction measurement represents a spatial, and thermal average of all the pair correlations in the material, weighted by their coherent cross-sections with the incident beam. 
By definition, g(r) is related to the average number of particles found within a spherical shell located at a distance r from the centre of a reference particle. The average density of atoms at a given radial distance from another atom is given by the formula: where n ( r ) is the mean number of atoms in a shell of width Δ r at distance r . [ 1 ] The g(r) of a simulation box can be calculated easily by histogramming the particle separations using the following equation where N a is the number of particles of type a , and | r ij | is the magnitude of the separation of the pair of particles i , j . Atomistic simulations can also be used in conjunction with interatomic pair potential functions in order to calculate macroscopic thermodynamic parameters such as the internal energy, Gibbs free energy, entropy and enthalpy of the system. While studying glass, Zachariasen began to notice repeating properties in glasses. He postulated rules and patterns such that, when atoms followed these rules, they were likely to form glasses. The following rules make up Zachariasen's theory, applying only to oxide glasses. [ 2 ] All of these rules provide the correct amount of flexibility to form a glass and not a crystal. While these rules only apply to oxide glasses, they were the first rules to establish the idea of a continuous random network for glass structure. He was also the first to classify structural roles for various oxides, some being main glass formers (SiO 2 , GeO 2 , P 2 O 5 ), and some being glass modifiers (Na 2 O, CaO). This criterion established a connection between the chemical bond strength and the glass-forming tendency of a material. When a material is quenched to form glass, the stronger the bonds, the easier the glass formation. [ 3 ] Dietzel looked at direct Coulombic interactions between atoms. He categorized cations using field strength where FS=z c /(r c +r a ) 2 , where z c is the charge of the cation, and r c and r a are the radii of the cation and anion respectively. High field strength cations would have a high cation-oxygen bond energy. [ 4 ] These three criteria help establish three different ways to determine whether or not certain oxides will form glasses, and how likely they are to do so. Other experimental techniques often employed to study the structure of glasses include nuclear magnetic resonance , X-ray absorption fine structure and other spectroscopy methods including Raman spectroscopy . Experimental measurements can be combined with computer simulation methods, such as reverse Monte Carlo or molecular dynamics simulations, to obtain a more complete and detailed description of the atomic structure. Early theories relating to the structure of glass included the crystallite theory whereby glass is an aggregate of crystallites (extremely small crystals). [ 6 ] However, structural determinations of vitreous SiO 2 and GeO 2 made by Warren and co-workers in the 1930s using x-ray diffraction showed the structure of glass to be typical of an amorphous solid. [ 7 ] In 1932, Zachariasen introduced the random network theory of glass in which the nature of bonding in the glass is the same as in the crystal but where the basic structural units in a glass are connected in a random manner in contrast to the periodic arrangement in a crystalline material. [ 8 ] Despite the lack of long-range order, the structure of glass does exhibit a high degree of ordering on short length scales due to the chemical bonding constraints in local atomic polyhedra . [ 9 ]
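Returning to the pair distribution function: a minimal sketch of the histogram procedure for a simulation box is given below, assuming a single particle species in a cubic box with periodic boundary conditions and normalising by the ideal-gas shell expectation ρ·4πr²Δr, so that uncorrelated random positions give g(r) ≈ 1:

import numpy as np

rng = np.random.default_rng(2)
N, L = 500, 10.0                                 # number of particles and cubic box edge length
positions = rng.uniform(0.0, L, size=(N, 3))     # placeholder configuration (an ideal gas)

def pair_distribution(pos, box, r_max, n_bins=100):
    """Histogram pair separations (minimum-image convention), normalised by rho*4*pi*r^2*dr."""
    n = len(pos)
    rho = n / box**3
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts = np.zeros(n_bins)
    for i in range(n - 1):
        d = pos[i + 1:] - pos[i]
        d -= box * np.round(d / box)             # minimum-image convention
        counts += np.histogram(np.linalg.norm(d, axis=1), bins=edges)[0]
    r = 0.5 * (edges[:-1] + edges[1:])
    dr = edges[1] - edges[0]
    ideal = rho * 4.0 * np.pi * r**2 * dr        # expected neighbours per particle for an ideal gas
    return r, 2.0 * counts / (n * ideal)         # factor 2: each pair was histogrammed once

r, g = pair_distribution(positions, L, r_max=L / 2)
print("mean g(r) over 2 < r < 5 :", g[(r > 2.0) & (r < 5.0)].mean())   # close to 1 for random points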
For example, the SiO 4 tetrahedra that form the fundamental structural units in silica glass represent a high degree of order, i.e. every silicon atom is coordinated by 4 oxygen atoms and the nearest neighbour Si-O bond length exhibits only a narrow distribution throughout the structure. [ 6 ] The tetrahedra in silica also form a network of ring structures which leads to ordering on more intermediate length scales of up to approximately 10 angstroms . The structure of glasses differs from the structure of liquids just above the glass transition temperature T g , as is revealed by XRD analysis [ 10 ] and high-precision measurements of third- and fifth-order non-linear dielectric susceptibilities. [ 11 ] Glasses are generally characterised by a higher degree of connectivity compared to liquids. [ 12 ] Alternative views of the structure of liquids and glasses include the interstitialcy model [ 13 ] and the model of string-like correlated motion. [ 14 ] Molecular dynamics computer simulations indicate these two models are closely connected. [ 15 ] Oxide glass components can be classified as network formers, intermediates, or network modifiers. [ 16 ] Traditional network formers (e.g. silicon, boron, germanium) form a highly cross-linked network of chemical bonds. Intermediates (e.g. titanium, aluminium, zirconium, beryllium, magnesium, zinc) can behave either as a network former or as a network modifier, depending on the glass composition. [ 17 ] The modifiers (calcium, lead, lithium, sodium, potassium) alter the network structure; they are usually present as ions, compensated by nearby non-bridging oxygen atoms, bound by one covalent bond to the glass network and holding one negative charge to compensate for the positive ion nearby. [ 18 ] Some elements can play multiple roles; e.g. lead can act both as a network former (Pb 4+ replacing Si 4+ ) and as a modifier. [ 19 ] The presence of non-bridging oxygens lowers the relative number of strong bonds in the material and disrupts the network, decreasing the viscosity of the melt and lowering the melting temperature. [ 17 ] The alkali metal ions are small and mobile; their presence in a glass allows a degree of electrical conductivity . Their mobility decreases the chemical resistance of the glass, allowing leaching by water and facilitating corrosion. Alkaline earth ions, with their two positive charges and requirement for two non-bridging oxygen ions to compensate for their charge, are much less mobile themselves and hinder diffusion of other ions, especially the alkalis. The most common commercial glass types contain both alkali and alkaline earth ions (usually sodium and calcium), for easier processing and satisfactory corrosion resistance. [ 20 ] Corrosion resistance of glass can be increased by dealkalization , removal of the alkali ions from the glass surface [ 21 ] by reaction with sulphur or fluorine compounds. [ 22 ] The presence of alkali metal ions also has a detrimental effect on the loss tangent of the glass [ 23 ] and on its electrical resistance ; [ 24 ] glass manufactured for electronics (sealing, vacuum tubes, lamps ...) has to take this into account. Silica (the chemical compound SiO 2 ) has a number of distinct crystalline forms: quartz, tridymite, cristobalite, and others (including the high pressure polymorphs stishovite and coesite ). Nearly all of them involve tetrahedral SiO 4 units linked together by shared vertices in different arrangements. Si-O bond lengths vary between the different crystal forms.
For example, in α-quartz the bond length is 161 pm, whereas in α-tridymite it ranges from 154 to 171 pm. The Si–O–Si bond angle also varies from 140° in α-tridymite to 144° in α-quartz to 180° in β-tridymite. In amorphous silica ( fused quartz ), the SiO 4 tetrahedra form a network that does not exhibit any long-range order. However, the tetrahedra themselves represent a high degree of local ordering, i.e. every silicon atom is coordinated by 4 oxygen atoms and the nearest neighbour Si-O bond length exhibits only a narrow distribution throughout the structure. [ 6 ] If one considers the atomic network of silica as a mechanical truss, this structure is isostatic, in the sense that the number of constraints acting between the atoms equals the number of degrees of freedom of the latter. According to the rigidity theory , this allows this material to show a great forming ability. [ 25 ] Despite the lack of ordering on extended length scales, the tetrahedra also form a network of ring-like structures which lead to ordering on intermediate length scales (up to approximately 10 angstroms or so). [ 6 ] Under the application of high pressure (approximately 40 GPa) silica glass undergoes a continuous polyamorphic phase transition into an octahedral form, i.e. the Si atoms are surrounded by 6 oxygen atoms instead of four in the ambient pressure tetrahedral glass. [ 26 ]
https://en.wikipedia.org/wiki/Structure_of_liquids_and_glasses
A structure relocation is the process of moving a structure from one location to another. There are two main ways for a structure to be moved: disassembling and then reassembling it at the required destination, or transporting it whole. For the latter, the building is first raised and then may be pushed on temporary rails or dollies if the distance is short. Otherwise, wheeled vehicles, such as flatbed trucks , are used. These moves can be complicated and require the removal of protruding parts of the building, such as the chimney , as well as obstacles along the journey, such as overhead cables and trees. Reasons for moving a building range from commercial considerations, such as scenery, to preserving an important or historic building. Moves may also be made simply at the whim of the owner, or to separate a building from the plot of land on which it stands. Elevating a whole structure is typically done by attaching a temporary steel framework under the structure to support it. A network of hydraulic jacks is placed under the framework and controlled by a unified jacking system, elevating the structure off the foundation. An older, low-technology method is to use building jacks called screw jacks or jackscrews , which are manually turned. With both types of jacking systems, wood beams (called cribs, cribbing or box cribs ) are stacked into piles to support both the structure and the jacks. The structure is then lifted in increments. Once the structure is at a sufficient height, hydraulic dollies (or a flatbed truck) are placed under the supporting framework. These are used to move the structure to the new location. After the move, the steps are reversed and the structure is lowered. There are several reasons why a structure may be moved. For example, a redevelopment, such as urban regeneration , could cause a relocation. The buyer of a building may wish to move it to a new location, or the owner might sell the land that the building is on while keeping the building. [ 2 ] Another reason for the relocation of a building is to preserve it for historic interest. An example of such preservation is the Lin An Tai Historical House in Taiwan . Such a move could be made because a building is in danger at its present location. [ 3 ] On the island of Chiloe , in Chile , there is a tradition of moving houses if the original site is haunted. [ 4 ] The house is placed on tree trunk rollers and dragged to the new location by oxen. [ 5 ] London 's Marble Arch (1847) was originally the entrance to the newly rebuilt Buckingham Palace . Following the expansion of Buckingham Palace, it was dismantled and rebuilt at a location near Hyde Park , with work being completed in 1851. [ 6 ] [ 7 ] In order to save a single tree, Mustafa Kemal Atatürk , the first President of the Republic of Turkey , moved the future location of his summer house, the Yalova Atatürk Mansion , four meters to the east in 1936. [ 8 ] Between 12 October and 14 November 1930, the 8-story, 11,000 ton Indiana Bell building , the headquarters of the Indiana Bell telecommunications company, was shifted 52 feet south, rotated 90 degrees, then another 100 feet west. The city's telephone operations continued uninterrupted during the course of the building's relocation. [ 9 ] [ 10 ] In 1950 the Compania Telefónica de Mexico (Telephone Company of Mexico) building located in the city of Guadalajara was moved 11.8 meters without any interruption of telephone operations. The building weighs 1,700 metric tons. The project started in May and ended in November 1950.
The building movement itself took 5 days. The head of the project was Jorge Matute Remus, [ 11 ] a construction engineer and rector of the Universidad de Guadalajara at the time. In September 1959 a 280-metre-tall guyed radio mast of the Europe 1 longwave transmitter in Felsberg-Berus, Germany, was moved 102 metres northwards in order to achieve a better radiation pattern for the transmitter. This was perhaps the tallest object ever moved on land. [4] One of Europe's oldest dwelling houses, in Exeter , United Kingdom, [ 12 ] [ 13 ] was relocated in 1961 to make way for a bypass road . [ 14 ] The 15th century wooden framed building was moved on rails around 100 yards (91 m) up a 1:10 hill, and became known as The House That Moved . [ 15 ] The Cudecom Building in Bogotá, Colombia (weight 7,000 metric tons, distance moved 95 feet) was moved in October 1974 using steel rollers. The 8-story building was moved westward to make way for an avenue. The move of the Cudecom Building was in the Guinness Book of World Records for 30 years. The Gem Theatre and Century Theatre , both housed within the same building in Detroit , were moved five blocks on wheels to their new location at 333 Madison Avenue on 16 October 1997, because of the development of the Comerica Park area when it became home of the Detroit Tigers . At a distance of 563 meters (1,847 feet), it is the furthest known relocation of a sizable building, a world record set by Expert House Movers, LLC . [ 16 ] Structure relocation was common in the 1980s in Romania because of Ceausescu's building projects. Many buildings, including churches and older apartment blocks, were relocated using hydraulics. One of the most notable feats was achieved on 27 May 1987, when a whole apartment block weighing 7,600 tonnes was split in half and completely relocated, with people left inside, with no damage whatsoever. To this day the building still stands, and it was one of the most challenging relocations in the whole world. Engineer Eugeniu Iordachescu moved 29 buildings (of which 13 were churches) during his career. ( https://ro.wikipedia.org/wiki/Mutarea_cl%C4%83dirilor_%C8%99i_structurilor ) As part of the Minnesota Shubert Performing Arts and Education Center development, the Shubert Theatre was moved between 9 February 1999 and 21 February 1999. The 2,638 tonne (2,596 short ton ) building was moved three city blocks and is the heaviest recorded building move done on wheels. [ 17 ] The 850 tonne Belle Tout Lighthouse was built in 1831 near the edge of the cliff on the next headland west from Beachy Head , East Sussex , England. It was moved more than 17 metres (56 feet) further inland in 1999 due to cliff erosion . It was pushed by four hydraulic jacks along four steel and concrete beams to a new site that was designed specifically to allow for possible future relocations. [ 18 ] In 1999, the 208-foot (63 m) tall, 2540-tonne Cape Hatteras Lighthouse was moved 2,900 feet (880 m) to protect it from being undermined by beach erosion . When the North Carolina lighthouse was built in 1870, it was over 1,500 feet (460 m) from the sea , but by 1935 the beach had eroded and the waves were only 100 feet (30 m) away. Starting in 1930, many efforts to halt the erosion were attempted, including adding over a million cubic yards of loose sand, massive sandbags , and steel and concrete walls.
After nearly 70 years it became apparent that fighting the erosion was a never-ending battle, and the decision was made to move the lighthouse away from the sea. The 3,200-year-old Statue of Ramesses II in Cairo was moved on 25 August 2006 from Ramses Square to a new museum site. The statue was slowly being damaged by pollution and was in an area where it was difficult for people to visit. The move of the statue, which measures 11 metres (36 feet) high and weighs around 83 tonnes (91 short tons) was broadcast live on Egyptian television. Transported whole on the back of two trucks, the statue had previously been cut into eight pieces when it was moved from its excavation site in the mid-1950s. [ 19 ] In June 2008, Hamilton Grange National Memorial , the 1802 home of Alexander Hamilton in New York City , was relocated from a cramped lot on Convent Avenue to a more spacious setting facing West 141st Street in nearby St. Nicholas Park , where it is currently undergoing a complete restoration . It is actually the second time the 298-ton mansion has been moved. In 1889, it was relocated from its original site on West 143rd Street to a church's property two blocks away. The Nathaniel Lieb House (1969), by architect Robert Venturi , was moved by barge from Long Beach Island, New Jersey to Glen Cove, New York in 2009. [ 20 ] In April 2013, due to construction works on Fuzuli Street the House of famous Baku millionaire, Isa bey Hajinski, in Baku ( Azerbaijan ), which was built in 1908, was moved 10 m to protect it as historical and architectural monument. The weight of this building is 18,000 tonnes. It was the heaviest building in the world ever moved. [ 21 ] The William Walker House, built circa 1904, was relocated 500 feet when the new owner, Thomas Tull , decided to preserve the home instead of demolishing it. The move took place in August 2016. The house was designed by architects Longfellow, Alden & Harlow . [ 22 ] [ 23 ] On 21 December 2016 part of the Belleview-Biltmore Hotel was relocated and placed on a new foundation where it will be converted into an inn with event space, an ice cream parlor, and a history room. [ 24 ] Beginning in 1983, Shipcarpenter Square in Lewes, Delaware , is an historic residential development in which entire homes from centuries past were re-located to a common residential neighborhood. The development consists of approximately 40 homes that were relocated whole. [ 25 ] The Warder Mansion , the only surviving Washington, D.C. building by architect H. H. Richardson , was saved from demolition in 1923 by George Oakley Totten Jr. Totten bought the exterior stone – except the main doorway, which reportedly went to the Smithsonian Institution – and much of the interior woodwork, and transported it, piece by piece, in his Model T Ford . He reassembled the building about 1.5 miles north of its original site and converted it into an apartment house. [ 26 ] In 1925, Thomas C. Williams Jr. bought a 15th-century Tudor manor house , Agecroft Hall , which stood by the River Irwell in Pendlebury , England. The hall was disassembled, crated and transported to Richmond, Virginia , where it was reassembled as the centrepiece of a Tudor estate on the banks of the James River . [ 27 ] The 16th-century Warwick Priory in Warwick, England was bought by Alexander and Virginia Weddell in 1926 and relocated in the same manner. Architect Henry G. Morse oversaw both moves. He designed additions to the reassembled priory, inspired by Sulgrave Manor and Wormleighton Manor . 
The expanded building was renamed Virginia House , and stands next door to Agecroft Hall. Newspaper magnate William Randolph Hearst purchased and attempted to relocate two Cistercian monasteries during his travels in Spain , but neither was completed during his lifetime. The first was built about 1141 and found abandoned by Hearst in 1925. He purchased the ruin and attempted to ship it to his home in California , San Simeon . The crates, however, were detained by customs officials in New York City , and due to his deteriorating finances during the Great Depression , Hearst was unable to complete the shipment. The stones were purchased in 1951 and reassembled in Florida as a tourist attraction. In 1964, the building was purchased by a local Episcopal diocese and restored to its original purpose as the Church of St. Bernard de Clairvaux . [ 28 ] Hearst 's second attempt at relocating a monastery was in 1931 when he found the closed Santa Maria de Ovila monastery, built around 1200. He purchased the structure, disassembled it and successfully shipped it to San Francisco , but was unable to rebuild the monastery. Hearst eventually gave the stones to the city of San Francisco, where they sat for decades in Golden Gate Park . Eventually, some of the stones were acquired by the Abbey of New Clairvaux in Vina, California, where they are currently being reconstructed; [ 29 ] others are now being used as decorative accents in the San Francisco Botanical Garden . Abu Simbel is an archaeological site comprising two massive rock temples completed in 1244 BCE , on the western bank of the Nile in southern Egypt . Construction of the Aswan High Dam would have submerged the temples beneath the waters of Lake Nasser . In 1959, an international donation campaign began to save the monuments of Nubia : the southernmost relics of this ancient human civilization. The salvage of the Abu Simbel temples began in 1964, and cost US$80 million. Between 1964 and 1968, the entire site was cut into large blocks, dismantled and reassembled in a new location – 65 m higher and 200 m back from the river, in what many consider one of the greatest feats of archaeological engineering. Today, thousands of tourists visit the temples daily. Guarded convoys of buses and cars depart twice a day from Aswan , the nearest city. Many visitors also arrive by plane, at an airfield that was specially constructed for the temple complex. On 18 April 1968, John Rennie 's London Bridge (which had replaced the original bridge in 1831) was sold to the American entrepreneur Robert P. McCulloch of McCulloch Oil for the sum of $2,460,000. The bridge was reconstructed at Lake Havasu City, Arizona , and opened on 10 October 1971. Not all of the bridge was transported to America, as some were kept behind in lieu of tax duties. The version of London Bridge that was rebuilt at Lake Havasu consists of a concrete frame with stones from the old (but not the original) London Bridge used as cladding. It spans a canal that leads from Lake Havasu to Thomson Bay, and forms the centrepiece of a theme park in the English style, complete with mock-Tudor shopping mall . The bridge has become one of Arizona 's biggest tourist attractions. [ 30 ] The Old Wellington Inn (1552) and Sinclair's Oyster Bar, two of Manchester , England's oldest buildings, dating from the 16th century and 17th century respectively, had their foundations raised 4 feet 9 inches (1.45 m) when the Shambles Square marketplace was refurbished in the 1960s. 
[ 31 ] They stood in close proximity to the site of the 1996 Manchester bombing. As part of the rebuilding, they were disassembled and moved 100 m north to the new Shambles Square, next to Manchester Cathedral. [ 32 ] Originally the two buildings formed a single row, but they were rebuilt at 90 degrees to each other and connected by new construction. The formerly Grade I listed Murray House in Hong Kong (built 1844) was dismantled in 1982 to make way for the Bank of China Tower. It was rebuilt brick by brick at Stanley in 2000. The relocation, however, was said to have failed to meet 'the international standard of preservation'. Certain architectural features, such as the chimneys and stone columns, were lost and were replaced with features taken from other contemporary buildings. Much of the structure, furthermore, was reconstructed around an added steel-and-concrete core that was not representative of how the building originally stood. Its Grade I listed status has since been withdrawn. [ 33 ] Several museums, particularly open-air museums, move historic buildings onto their grounds; some of these, dedicated to showing what life was like in previous centuries, are known as living history museums. Museums that have transported and reconstructed old buildings and structures include: In the past it was not uncommon for radio towers, free-standing as well as guyed, to be dismantled and rebuilt at another site, sometimes only a few metres from the original location and sometimes far away. In the first case, the towers were almost always part of a directional antenna system for long- or medium-wave broadcasting whose required radiation pattern had changed; the simplest way to meet the new requirement was either to build a new tower or to dismantle an existing one and rebuild it on the new site. In some cases a dismantled tower was reused for the upper parts of a new radio tower, as was done with the masts at Sender Donebach in 1982 and with the wooden tower of Transmitter Ismaning in 1934. After World War II, some radio towers in former East Germany were dismantled by the Soviet occupation forces and rebuilt in the Soviet Union, the most famous example being the Goliath transmitter. Electricity pylons are also commonly dismantled and rebuilt at new sites, and small steel observation towers have sometimes been dismantled for renovation and afterwards rebuilt. The tallest structure ever relocated is BREN Tower. In 1959 a 280-metre-tall radio mast was relocated at Felsberg-Berus without dismantling. [ citation needed ] Although smaller projects are usually paid for in cash, larger projects such as the relocation of a house to a new site are typically financed by banks. Finance is often a major problem for project managers, as the house needs to be paid for before it leaves its current site, but the lender cannot take security over the house until it is complete and on the new site. This creates a short-term cashflow problem that derails many projects.
https://en.wikipedia.org/wiki/Structure_relocation
In mathematics , the structure theorem for Gaussian measures shows that the abstract Wiener space construction is essentially the only way to obtain a strictly positive Gaussian measure on a separable Banach space . It was proved in the 1970s by Kallianpur–Satô–Stefan and Dudley–Feldman–Le Cam. There is an earlier result due to H. Satô (1969) [ 1 ] which proves that "any Gaussian measure on a separable Banach space is an abstract Wiener measure in the sense of L. Gross". The result by Dudley et al. generalizes this to the setting of Gaussian measures on a general topological vector space . The theorem states: let γ be a strictly positive Gaussian measure on a separable Banach space (E, ‖·‖). Then there exists a separable Hilbert space (H, ⟨·,·⟩) and a map i : H → E such that i : H → E is an abstract Wiener space with γ = i∗(γH), where γH is the canonical Gaussian cylinder set measure on H.
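For readability, the statement in the paragraph above can be set in display form; nothing is added beyond the push-forward notation i∗, which is introduced here only as notation:

```latex
% Display-form restatement of the theorem given in the paragraph above.
% i_{*} denotes the push-forward of a measure along the map i.
Let $\gamma$ be a strictly positive Gaussian measure on a separable Banach space $(E,\|\cdot\|)$.
Then there exist a separable Hilbert space $(H,\langle\cdot,\cdot\rangle)$ and a map
$i\colon H\to E$ such that $i\colon H\to E$ is an abstract Wiener space and
\[
  \gamma \;=\; i_{*}\bigl(\gamma_{H}\bigr),
\]
where $\gamma_{H}$ is the canonical Gaussian cylinder set measure on $H$.
```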
https://en.wikipedia.org/wiki/Structure_theorem_for_Gaussian_measures
Macromolecular structure validation is the process of evaluating reliability for 3-dimensional atomic models of large biological molecules such as proteins and nucleic acids . These models, which provide 3D coordinates for each atom in the molecule (see example in the image), come from structural biology experiments such as x-ray crystallography [ 1 ] or nuclear magnetic resonance (NMR). [ 2 ] The validation has three aspects: 1) checking on the validity of the thousands to millions of measurements in the experiment; 2) checking how consistent the atomic model is with those experimental data; and 3) checking consistency of the model with known physical and chemical properties. Proteins and nucleic acids are the workhorses of biology, providing the necessary chemical reactions, structural organization, growth, mobility, reproduction, and environmental sensitivity. Essential to their biological functions are the detailed 3D structures of the molecules and the changes in those structures. To understand and control those functions, we need accurate knowledge about the models that represent those structures, including their many strong points and their occasional weaknesses. End-users of macromolecular models include clinicians, teachers and students, as well as the structural biologists themselves, journal editors and referees , experimentalists studying the macromolecules by other techniques, and theoreticians and bioinformaticians studying more general properties of biological molecules. Their interests and requirements vary, but all benefit greatly from a global and local understanding of the reliability of the models. Macromolecular crystallography was preceded by the older field of small-molecule x-ray crystallography (for structures with less than a few hundred atoms). Small-molecule diffraction data extends to much higher resolution than feasible for macromolecules, and has a very clean mathematical relationship between the data and the atomic model. The residual, or R-factor, measures the agreement between the experimental data and the values back-calculated from the atomic model. For a well-determined small-molecule structure the R-factor is nearly as small as the uncertainty in the experimental data (well under 5%). Therefore, that one test by itself provides most of the validation needed, but a number of additional consistency and methodology checks are done by automated software [ 3 ] as a requirement for small-molecule crystal structure papers submitted to the International Union of Crystallography (IUCr) journals such as Acta Crystallographica section B or C. Atomic coordinates of these small-molecule structures are archived and accessed through the Cambridge Structural Database (CSD) [ 4 ] or the Crystallography Open Database (COD). [ 5 ] The first macromolecular validation software was developed around 1990, for proteins. It included Rfree cross-validation for model-to-data match, [ 6 ] bond length and angle parameters for covalent geometry, [ 7 ] and sidechain and backbone conformational criteria. [ 8 ] [ 9 ] [ 10 ] For macromolecular structures, the atomic models are deposited in the Protein Data Bank (PDB), still the single archive of this data. 
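The residual (R-factor) discussed above has a conventional definition in crystallography; the following display is the standard textbook form, comparing observed and model-calculated structure-factor amplitudes, and is not specific to any particular refinement program:

```latex
% Conventional crystallographic R-factor.
% F_obs are measured structure-factor amplitudes, F_calc those back-calculated from the model;
% the sums run over all measured reflections hkl.
\[
  R \;=\; \frac{\sum_{hkl}\bigl|\,|F_{\mathrm{obs}}(hkl)| - |F_{\mathrm{calc}}(hkl)|\,\bigr|}
               {\sum_{hkl}\,|F_{\mathrm{obs}}(hkl)|}
\]
% R_free uses the same formula over a small test set of reflections excluded from refinement,
% which is the cross-validation idea mentioned above.
```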
The PDB was established in the 1970s at Brookhaven National Laboratory , [ 11 ] moved in 2000 to the RCSB (Research Collaboratory for Structural Bioinformatics) centered at Rutgers , [ 12 ] and expanded in 2003 to become the wwPDB (worldwide Protein Data Bank), [ 13 ] with access sites added in Europe and Asia, and with NMR data handled at the BioMagResBank (BMRB) in Wisconsin. Validation rapidly became standard in the field, [ 14 ] with further developments described below. A large boost was given to the applicability of comprehensive validation for both x-ray and NMR as of February 1, 2008, when the worldwide Protein Data Bank (wwPDB) made mandatory the deposition of experimental data along with atomic coordinates. Since 2012, strong forms of validation have been in the process of being adopted for wwPDB deposition, following recommendations of the wwPDB Validation Task Force committees for x-ray crystallography , [ 15 ] for NMR, [ 16 ] for SAXS ( small-angle x-ray scattering ), and for cryoEM (cryo- Electron Microscopy ). [ 17 ] Validation can be broken into three stages: validating the raw data collected (data validation), validating the interpretation of the data into the atomic model (model-to-data validation), and finally validating the model itself. While the first two steps are specific to the technique used, validating the arrangement of atoms in the final model is not. [ 7 ] [ 18 ] [ 19 ] The backbone and side-chain dihedral angles of protein and RNA have been shown to have specific combinations of angles which are allowed (or forbidden). For protein backbone dihedrals (φ, ψ), this has been addressed by the Ramachandran plot, while for side-chain dihedrals (χ's) one should refer to the Dunbrack backbone-dependent rotamer library. [ 20 ] Though mRNA structures are generally short-lived and single-stranded, there is an abundance of non-coding RNAs with different secondary and tertiary folds (tRNA, rRNA, etc.) that contain a preponderance of canonical Watson-Crick (WC) base pairs together with a significant number of non-Watson-Crick (NWC) base pairs; such RNAs therefore also qualify for the regular structural validation that applies to nucleic acid helices. The standard practice is to analyse the intra-base-pair geometrical parameters (translational: Shear, Stretch, Stagger; rotational: Buckle, Propeller, Opening) and the inter-base-pair step parameters (translational: Shift, Slide, Rise; rotational: Tilt, Roll, Twist) - whether in-range or out-of-range with respect to their suggested values. [ 21 ] [ 22 ] The intra-base-pair parameters describe the relative orientation of the two paired bases in the two strands, while the inter-base-pair parameters describe the relative orientation of two stacked base pairs; together they serve to validate nucleic acid structures in general. Since RNA helices are short (on average 10-20 bp), the use of the electrostatic surface potential as a validation parameter [ 23 ] has been found to be beneficial, particularly for modelling purposes. For globular proteins, interior atomic packing of side-chains (arising from short-range, local interactions) [ 24 ] [ 25 ] [ 26 ] [ 27 ] has been shown to be pivotal in the structural stabilization of the protein fold. On the other hand, the electrostatic harmony (non-local, long-range) of the overall fold [ 28 ] has also been shown to be essential for its stabilization. 
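The dihedral-angle checks described above (Ramachandran φ/ψ, side-chain χ) all start from the same geometric step: computing a torsion angle from four atomic positions. A minimal sketch in NumPy, assuming coordinates have already been parsed from a PDB file (the parsing and the reference distributions are not shown):

```python
import numpy as np

def dihedral_deg(p0, p1, p2, p3):
    """Torsion angle in degrees for four atom positions (length-3 NumPy arrays)."""
    b0 = p0 - p1
    b1 = p2 - p1
    b2 = p3 - p2
    b1 = b1 / np.linalg.norm(b1)
    # Project b0 and b2 onto the plane perpendicular to the central bond b1.
    v = b0 - np.dot(b0, b1) * b1
    w = b2 - np.dot(b2, b1) * b1
    x = np.dot(v, w)
    y = np.dot(np.cross(b1, v), w)
    return np.degrees(np.arctan2(y, x))

# Example usage for a residue i (atom arrays are assumed to come from a parsed model):
#   phi = dihedral_deg(C_prev, N, CA, C)     # C(i-1), N(i), CA(i), C(i)
#   psi = dihedral_deg(N, CA, C, N_next)     # N(i), CA(i), C(i), N(i+1)
# The resulting (phi, psi) pair can then be compared against Ramachandran regions.
```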
Packing anomalies include steric clashes, [ 29 ] short contacts, [ 27 ] holes [ 30 ] and cavities, [ 31 ] while electrostatic disharmony [ 28 ] [ 32 ] refers to unbalanced partial charges in the protein core (particularly relevant for designed protein interiors). While the clash score of MolProbity identifies steric clashes at a very high resolution, the Complementarity Plot combines packing anomalies with electrostatic imbalance of side-chains and signals for either or both. The branched and cyclic nature of carbohydrates poses particular problems for structure validation tools. [ 35 ] At higher resolutions, it is possible to determine the sequence/structure of oligo- and polysaccharides, both as covalent modifications and as ligands. However, at lower resolutions (typically worse than 2.0 Å), sequences/structures should either match known structures or be supported by complementary techniques such as mass spectrometry. [ 36 ] Also, monosaccharides have clear conformational preferences (saturated rings are typically found in chair conformations), [ 37 ] but errors introduced during model building and/or refinement (wrong linkage chirality or distance, or wrong choice of model - see [ 38 ] for recommendations on carbohydrate model building and refinement and [ 39 ] [ 40 ] [ 41 ] for reviews on general errors in carbohydrate structures) can bring their atomic models out of the more likely low-energy state. Around 20% of the deposited carbohydrate structures are in a higher-energy conformation not justified by the structural data (measured using the real-space correlation coefficient). [ 42 ] A number of carbohydrate validation web services are available at glycosciences.de (including nomenclature and linkage checks by pdb-care, [ 43 ] and cross-validation with mass spectrometry data through the use of GlycanBuilder), whereas the CCP4 suite currently distributes Privateer, [ 33 ] a tool that is integrated into the model building and refinement process itself. Privateer is able to check stereo- and regiochemistry, ring conformation and puckering, linkage torsions, and real-space correlation against positive omit density, generating aperiodic torsion restraints on ring bonds, which can be used by any refinement software in order to maintain the monosaccharide's minimal-energy conformation. [ 33 ] Privateer also generates scalable two-dimensional SVG diagrams according to the Essentials of Glycobiology [ 34 ] standard symbol nomenclature, containing all the validation information as tooltip annotations (see figure). This functionality is currently integrated into other CCP4 programs, such as the molecular graphics program CCP4mg (through the Glycoblocks 3D representation, [ 44 ] which conforms to the standard symbol nomenclature [ 34 ]) and the suite's graphical interface, CCP4i2. Many evaluation criteria apply globally to an entire experimental structure, most notably the resolution, the anisotropy or incompleteness of the data, and the residual or R-factor that measures overall model-to-data match (see below). These help a user choose the most accurate among related Protein Data Bank entries to answer their questions. Other criteria apply to individual residues or local regions in the 3D structure, such as fit to the local electron density map or steric clashes between atoms. 
Those are especially valuable to the structural biologist for making improvements to the model, and to the user for evaluating the reliability of that model right around the place they care about - such as a site of enzyme activity or drug binding. Both types of measures are very useful, but although global criteria are easier to state or publish, local criteria make the greatest contribution to scientific accuracy and biological relevance. As expressed in the Rupp textbook, "Only local validation, including assessment of both geometry and electron density, can give an accurate picture of the reliability of the structure model or any hypothesis based on local features of the model." [ 45 ] For NMR, tools such as TALOS+ predict protein backbone torsion angles from chemical shift data and are frequently used to generate further restraints applied to a structure model during refinement. One of the critical needs for NMR structural ensemble validation is to distinguish well-determined regions (those that have experimental data) from regions that are highly mobile and/or have no observed data. There are several current or proposed methods for making this distinction, such as the Random Coil Index , but so far the NMR community has not standardized on one. Cryo-EM presents special challenges to model-builders, as the observed electron density is frequently insufficient to resolve individual atoms, leading to a higher likelihood of errors. Geometry-based validation tools similar to those used in X-ray crystallography can be used to highlight implausible modeling choices and guide the modeler toward more native-like structures. The CaBLAM method, which only uses Cα atoms, [ 48 ] is suitable for low-resolution structures from cryo-EM. [ 49 ] A way to compute the difference density map has been formulated for cryo-EM. [ 50 ] [ 51 ] Cross-validation using a "free" map, comparable to the use of a free R-factor , is also available. [ 52 ] [ 53 ] Other methods for checking model-map fit include correlation coefficients, model-map FSC, [ 54 ] confidence maps, CryoEF (orientation bias check), and TEMPy SMOC. [ 51 ] SAXS (small-angle x-ray scattering) is a rapidly growing area of structure determination, both as a source of approximate 3D structure for initial or difficult cases and as a component of hybrid-method structure determination when combined with NMR, EM, crystallographic, cross-linking, or computational information. There is great interest in the development of reliable validation standards for SAXS data interpretation and for quality of the resulting models, but there are as yet no established methods in general use. Three recent steps in this direction are the creation of a Small-Angle Scattering Validation Task Force committee by the worldwide Protein Data Bank and its initial report, [ 55 ] a set of suggested standards for data inclusion in publications, [ 56 ] and an initial proposal of statistically derived criteria for automated quality evaluation. [ 57 ] It is difficult to do meaningful validation of an individual, purely computational, macromolecular model in the absence of experimental data for that molecule, because the model with the best geometry and conformational score may not be the one closest to the right answer. Therefore, much of the emphasis in validation of computational modeling is in assessment of the methods. 
To avoid bias and wishful thinking, double-blind prediction competitions have been organized; the original example, held every 2 years since 1994, is CASP (Critical Assessment of Structure Prediction), which evaluates predictions of 3D protein structure for newly solved crystallographic or NMR structures held in confidence until the end of the relevant competition. [ 58 ] The major criterion for CASP evaluation is a weighted score called GDT-TS for the match of Cα positions between the predicted and the experimental models. [ 59 ]
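As a rough illustration of the GDT-TS score mentioned above, the following sketch averages the fraction of Cα atoms within the usual 1, 2, 4 and 8 Å cutoffs; real CASP scoring additionally maximizes each fraction over many trial superpositions, which is omitted here:

```python
import numpy as np

def gdt_ts(pred_ca, exp_ca, cutoffs=(1.0, 2.0, 4.0, 8.0)):
    """Simplified GDT-TS: mean percentage of Calpha atoms within each cutoff (angstroms).

    pred_ca, exp_ca: (N, 3) arrays of corresponding, already-superposed Calpha coordinates.
    This is a single fixed superposition for clarity; the official score optimizes
    each percentage over many alignments.
    """
    d = np.linalg.norm(np.asarray(pred_ca) - np.asarray(exp_ca), axis=1)
    fractions = [np.mean(d <= c) for c in cutoffs]
    return 100.0 * float(np.mean(fractions))
```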
https://en.wikipedia.org/wiki/Structure_validation
Structured Financial Messaging System ( SFMS ) is a secure messaging standard developed to serve as a platform for intra-bank and inter-bank applications. It is an Indian standard similar to SWIFT , the international system used for financial messaging globally. SFMS can be used for secure communication within a bank and between banks. SFMS was launched on December 14, 2001 at IDRBT . [ 1 ] It allows the definition of message structures and message formats, and authorization of the same for usage by the financial community. [ citation needed ] SFMS has a number of features: it is modularised, web-enabled software with a flexible architecture facilitating centralised or distributed deployment. Access control is through smart-card-based user access, and messages are secured by means of standard encryption and authentication services conforming to ISO standards. [ citation needed ] The intra-bank part of SFMS is used by banks to take full advantage of the secure messaging facility it provides. [ citation needed ] The inter-bank messaging part is used by applications like electronic funds transfer (EFT), real-time gross settlement systems (RTGS), delivery versus payment (DVP), centralised funds management systems (CFMS) and others. SFMS provides application program interfaces (APIs), which can be used to integrate existing and future applications with the SFMS. [ citation needed ] Several banks have integrated it with their core or centralised banking software. [ citation needed ] With a view to providing focused attention and enabling the development of techno-banking in the country, IDRBT has promoted a new Section 8 company named the Indian Financial Technology and Allied Services (IFTAS). Headquartered in Mumbai , IFTAS has the mandate to provide IT-related services to the RBI , banks and other financial institutions. [ 2 ] Services like the Indian Financial Network (INFINET), [ 3 ] the Structured Financial Messaging System (SFMS) [ 4 ] and the Indian Banking Community Cloud (IBCC) [ 5 ] were handed over to IFTAS with effect from April 1, 2016. [ 6 ] This finance-related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Structured_Financial_Messaging_System
Structured illumination light sheet microscopy (SI-LSM) is an optical imaging technique used for achieving volumetric imaging with high temporal and spatial resolution in all three dimensions. It combines the ability of light sheet microscopy to maintain spatial resolution throughout relatively thick samples with the higher lateral and axial resolution characteristic of structured illumination microscopy. SI-LSM can achieve lateral resolution below 100 nm [ 1 ] in biological samples hundreds of micrometers thick. [ 2 ] SI-LSM is most often used for fluorescent imaging of living biological samples, such as cell cultures. It is particularly useful for longitudinal studies, where high-rate imaging must be performed over long periods of time without damaging the sample. [ 3 ] [ 4 ] The two methods most used for fluorescent imaging of 3D samples – confocal microscopy and widefield microscopy – both have significant drawbacks for this type of application. In widefield microscopy, in-focus light from the plane of interest and out-of-focus light from the rest of the sample are acquired together, creating the “missing cone problem”, which makes high resolution imaging difficult. [ 5 ] Although confocal microscopy largely solves this problem by using a pinhole to block unfocused light, this technique also inevitably blocks useful signal, which is particularly detrimental in fluorescent imaging when the signal is already very weak. [ 6 ] In addition, both widefield and confocal microscopy illuminate the entirety of the sample throughout imaging, which leads to problems with photobleaching and phototoxicity in some samples. [ 3 ] [ 4 ] While light sheet microscopy alone can address most of these issues, its achieved resolution is still fundamentally limited by the diffraction of light and it is unable to achieve super-resolution . [ 3 ] [ 5 ] SI-LSM works by using a patterned rather than uniform light sheet to illuminate a single plane of the volume being imaged. In this way, it maintains the many benefits of light-sheet microscopy while achieving the high resolution of structured illumination microscopy. The theory behind SI-LSM is best understood by considering the separate development of structured illumination and light sheet microscopy. Structured illumination microscopy (SIM) is a method of super-resolution microscopy which is performed by acquiring multiple images of the same sample under different patterns of illumination, then computationally combining these images to achieve a single reconstruction with up to a 2x improvement over the diffraction-limited lateral resolution. The theory was first proposed and implemented in a 1995 paper by John M. Guerra, [ 7 ] in which a silicon grating with 50 nm lines and spaces was resolved with 650 nm wavelength (in air) illumination structured by a transparent replica proximal to said grating. The name “structured illumination microscopy” was coined in 2000 by M.G.L. Gustafsson. [ 6 ] SIM takes advantage of the “Moiré effect”, which occurs when two patterns are multiplicatively superimposed. [ 6 ] The superimposition causes “Moiré fringes” to appear, which are coarser than either original pattern but still contain information about the high-frequency patterns which would otherwise not be visible. [ 8 ] The theory behind SIM is best understood in the Fourier or frequency domain . In general, imaging systems can only resolve frequencies below the diffraction limit . 
Thus, in the Fourier domain, all recorded frequencies from the imaged sample would reside within a circle of a fixed radius. Any frequencies outside this limit cannot be resolved. However, the frequency spectrum can be shifted by imaging the sample with patterned illumination. Most often, the pattern is a 1D sinusoidal gradient, such as the pattern used to create the Moiré fringes in the above image. Because the Fourier transform of a sinusoid is a shifted delta function , the transform of this pattern will consist of three delta functions: one at the zero frequency and two corresponding to the positive and negative frequency components of the sinusoid (see below image). When the target is illuminated using this pattern, the target and illumination pattern are multiplicatively superimposed, which means the Fourier transform of the resulting image is the convolution of the individual transforms of the target and the illumination pattern. Convolving any function with a delta function has the effect of shifting the center of the original function to the location of the delta function. Thus, in this situation, the frequency spectrum of the target is shifted and frequencies that were previously too high to resolve now lie within the circle of resolvable frequencies. The result is that for a single image acquisition with SIM, the frequency components from three separate regions in the Fourier domain (corresponding to the center and the positive and negative shifts) are all captured together. Finally, because rotation in the spatial domain results in the same rotation in the Fourier domain, high frequencies over the full 360° can be captured by rotating the illumination pattern. Figure b) in the image below shows which frequency components would be captured by acquiring 4 separate images and rotating the illumination pattern by 45° in between each acquisition. Once all images have been captured, a single final image can be computationally reconstructed. [ 2 ] Using this technique, resolution can be improved up to 2x over the diffraction limit. [ 9 ] This 2x limit is imposed because the illumination pattern itself is still diffraction limited. [ 2 ] The concepts behind 2D SIM can be expanded to 3D volumetric imaging. [ 5 ] By using three mutually coherent beams of excitation light, interference patterns with multiple frequency components can be created in the imaged sample. This ultimately makes it possible to perform 3D reconstructions with up to 2x improved resolution along all three axes. [ 5 ] However, due to the strong scattering coefficient of biological tissues, this theoretical resolution can only be achieved in samples thinner than about 10 um. [ 3 ] Beyond that, the scattering leads to an excess of background signal which makes accurate reconstruction impossible. Light sheet microscopy (LSM) was developed to allow for fine optical sectioning of thick biological samples without the need for physical sectioning or clearing, which are both time consuming and detrimental to in-vivo imaging. [ 10 ] While most fluorescent imaging techniques use aligned illumination and detection axes, LSM utilizes orthogonal axes. A focused light sheet is used to illuminate the sample from the side, while the fluorescent signal is detected from above. [ 10 ] This both eliminates the “cone problem” of widefield microscopy by eliminating out-of-focus contributions from planes not being actively imaged and reduces the impact of photobleaching since the entire sample is not illuminated throughout imaging. 
In addition, because the sample is illuminated from the side, the focus of the illumination light is not depth-dependent, making volumetric imaging of biological samples far more feasible. A major ongoing challenge in LSM is in shaping the light sheet. In general, there is a tradeoff between the thickness of the light sheet at the optical axis (which largely determines axial resolution) and the field of view over which the light sheet maintains adequate thickness. [ 11 ] This problem can be partially addressed by the added resolution from SI-LSM. SI-LSM can be divided into two main categories. Optical sectioning SI-LSM is the most common approach and improves axial resolution by further reducing the impact of unfocused background signal. Super-resolution SI-LSM uses the illumination and reconstruction techniques of 2D SIM to achieve super-resolution in 3D samples. Optical sectioning SI-LSM (OS-SI-LSM) was first described in a 1997 paper by M.A. Neil et al. [ 12 ] Rather than achieving super-resolution, this technique uses the ideas behind structured illumination to improve axial resolution by removing background haze from layers other than where the illuminating light sheet is most focused. [ 12 ] [ 13 ] While there are several approaches for achieving this, the most common approach is known as “three-phase” SIM, which will be described here. It is shown in the Neil paper [ 12 ] that the signal acquired by imaging a target with a grid illumination pattern of phase φ can be represented by the equation I(φ) = I0 + Ic·cos(φ) + Is·sin(φ). Here, I0 is the background signal, while Ic and Is are signals from the regions of the target illuminated by the cosine and sine components of the grid. It is also shown that an in-focus image of the plane of interest can be reconstructed as Ip = √(Ic² + Is²). This can be achieved by acquiring three separate images I1, I2, I3 under the grid illumination, shifting the phase of the grid by 120° (2π/3) between acquisitions. The desired 2D image can then be reconstructed using the equation Ip = (√2/3)·√[(I1 − I2)² + (I1 − I3)² + (I2 − I3)²]. This creates a 2D image containing only information from the most focused region of the grid illumination pattern. If this pattern is created using a light sheet, the sheet can then be scanned in the axial direction to generate a full 3D reconstruction of a sample. [ 13 ] The primary drawback of using this approach for reducing background signal is that it ultimately relies on subtracting out the shared background signal between images. Some in-focus signal will inevitably be subtracted alongside the background haze. [ 3 ] This will result in an overall reduction of signal, which can be detrimental in low-signal fluorescent imaging. Nevertheless, this technique is the most common use of SI-LSM and has shown improved axial resolution over LSM alone. [ 13 ] Super-resolution SI-LSM (SR-SI-LSM) uses the techniques of 2D or 3D SIM with a light sheet as the illumination source, to achieve the spatial resolution of SIM alongside the depth-independent imaging and low photobleaching of LSM. In the most common application, a light sheet is used to create a 1D sinusoidal pattern at a single plane of the 3D target sample. [ 1 ] The pattern is then rotated multiple times at this single plane to acquire enough images for a high-resolution 2D reconstruction. The light sheet is scanned in the axial direction and the process is repeated until there are enough 2D images for a full 3D reconstruction. 
In general, this approach demonstrates not only improved resolution but also improved SNR over OS-SI-LSM, because no information is discarded in the reconstruction. [ 3 ] In addition, although the theoretical resolution for SR-SI-LSM is slightly lower than 3D SIM, in depths >10 um this technique shows improved performance over 3D SIM due to the depth-independent focusing of illumination light characteristic of LSM. [ 3 ] A major challenge in SI-LSM is engineering systems which are physically capable of generating structured patterns in light sheets. The three main approaches for accomplishing this are using interfering light sheets, digital LSM, and spatial light modulators. With interfering light sheets, two coherent counterpropagating sheets are sent into the sample. [ 1 ] The interference pattern between these sheets creates the desired illumination pattern, which can be rotated and scanned using rotating mirrors to deflect the sheets. Additional flexibility can be added by using digital light-sheet microscopy to generate the illumination patterns. In digital LSM, the light sheet is created by rapidly scanning a laser beam through the sample. [ 3 ] [ 2 ] [ 14 ] This allows for fine control over the specific illumination pattern by modulating the intensity of the laser as it scans. This technique has been used to create systems capable of multiple types of light sheet microscopy in addition to SI-LSM. [ 2 ] [ 3 ] Finally, spatial light modulators can be used to electronically control the light patterns, which has the advantage of allowing for very fine control of and fast switching between patterns. [ 15 ] [ 16 ] In addition, much of the recent work around SI-LSM focuses on combining the approach with other techniques for deep imaging in biological tissues. For instance, a 2021 paper demonstrated the use of SI-LSM with NIR-II illumination to improve resolution of transcranial mouse brain imaging by ~1.7x with a penetration depth of ~750 um and almost 16x improvement in the signal to background ratio. [ 14 ] Other promising directions include combining SIM with other techniques for shaping the light sheets in LSM, [ 11 ] combining SI-LSM with two-photon excitation, [ 14 ] or using non-linear fluorescence to further push the resolution limits. [ 9 ]
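The three-phase optical-sectioning reconstruction described in the OS-SI-LSM paragraphs above is straightforward to express in code. A minimal sketch, assuming three registered camera frames taken with the grid phase stepped by 2π/3 (array names are illustrative):

```python
import numpy as np

def optical_section(i1, i2, i3):
    """Optically sectioned image from three grid-illuminated frames (three-phase method).

    i1, i2, i3: 2-D arrays recorded with the illumination grid phase-stepped by 2*pi/3.
    The pairwise-difference form removes the common (out-of-focus) background I0 and
    recovers sqrt(Ic**2 + Is**2) for the in-focus plane.
    """
    i1, i2, i3 = (np.asarray(a, dtype=float) for a in (i1, i2, i3))
    return np.sqrt((i1 - i2) ** 2 + (i1 - i3) ** 2 + (i2 - i3) ** 2) * np.sqrt(2.0) / 3.0
```

Scanning the light sheet axially and applying this to each plane yields the 3D stack described above.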
https://en.wikipedia.org/wiki/Structured_illumination_light_sheet_microscopy
The term structured packing refers to a range of specially designed materials for use in absorption and distillation columns. [ 1 ] Structured packings typically consist of thin corrugated metal plates or gauzes arranged in a way that forces fluids to take complicated paths through the column, thereby creating a large surface area for contact between different phases . Structured packing is formed from corrugated sheets of perforated embossed metal, plastic, or wire gauze. The result is a very open honeycomb structure with inclined corrugations or flow channels, giving a relatively high surface area but with very low resistance to gas flow. The surface enhancements have been chosen to maximize liquid spreading. These characteristics tend to show significant performance benefits in low-pressure and low-irrigation-rate applications. Steeper or larger corrugation angles lower the pressure drop at the cost of lower separation efficiencies. The sheets are packaged into elements that are piled up in alternating layers, forming a packed bed that fills the complete cross-sectional area of the fractionation tower. To fully utilize the separation efficiency, structured packings require a careful distribution of the liquid on top of the bed. For the packings to reach their highest efficiency, the variation in the liquid distribution should be less than 1–2%. In high-purity applications with many equilibrium stages, the packing needs to be installed in multiple packed beds, between which the liquid is collected and redistributed. Structured packings have been established for many decades and evolved from random column packing . The first generation of structured packing arose in the early 1940s. In 1953, a patented packing appeared named Panapak, made of a wavy-form expanded metal sheet. The packing was not successful, due to maldistribution and lack of good marketing. The second generation appeared at the end of the 1950s, with highly efficient wire mesh packings, such as Goodloe, Hyperfil and Koch-Sulzer. Until the 1970s, due to their low pressure drop per theoretical stage , those packings were the most widely used in vacuum distillation. However, high cost, low capacity and high sensitivity to solids have prevented wider utilization of wire mesh packings. Corrugated structured packings, introduced by Sulzer at the end of the 1970s, marked the third generation of structured packed columns. These packings offer high capacity, lower cost, and less sensitivity to solids, while keeping a high performance. Popularity of the packings grew in the 1980s, particularly in air separation and for revamps in oil and petrochemical plants. These structured packings, made of corrugated metal sheets, had their surfaces treated, chemically or mechanically, to enhance their wettability. Consequently, the packings' wetted area increased, even for fluids that do not wet surfaces very well, improving performance. In 1999, an improved structure of corrugated sheet packings, the Mellapak Plus, was developed based on CFD simulations and experiments. This packing had a varying corrugation angle, in contrast to the conventional Mellapak, which had a single angle. This significantly lowered the pressure drop and increased the useful capacity. [ 2 ] Structured packing is manufactured in a wide range of sizes by varying the crimp altitude and corrugation angle (with respect to the horizontal). Two corrugation angles are common: 45 degrees "Y" packings and 60 degrees "X" packings. 
Commercial packing surface area ranges from 50 m²/m³ (lowest efficiency, highest capacity) to 750 m²/m³ (highest efficiency, lowest capacity). The material thickness varies: for sheet metals the typical thickness ranges between 0.1 and 0.2 mm, whereas for plastics it ranges between 0.5 and 1 mm. Typical applications include fractionators in refinery and chemical process plants as well as in natural gas processing, [ 3 ] to remove sour gases and lower water content to prevent condensation in pipelines. Though structured packings are also applied at atmospheric and elevated pressures, it is especially separations conducted under vacuum that benefit from the low pressure drop that structured packings provide. As a result, structured packing has replaced practically all trays in vacuum services. They have found use in many types of industrial equipment and processes: Structured packing offers the following advantages as compared to the use of random packing and trays : Structured packing offers the following disadvantages as compared to the use of random packing and trays:
https://en.wikipedia.org/wiki/Structured_packing
Structured systems analysis and design method ( SSADM ) is a systems approach to the analysis and design of information systems. SSADM was produced for the Central Computer and Telecommunications Agency , a UK government office concerned with the use of technology in government, from 1980 onwards. SSADM is a waterfall method for the analysis and design of information systems . SSADM can be thought to represent a pinnacle of the rigorous document-led approach to system design, and contrasts with more contemporary agile methods such as DSDM or Scrum . SSADM is one particular implementation and builds on the work of different schools of structured analysis and development methods, such as Peter Checkland's soft systems methodology , Larry Constantine's structured design , Edward Yourdon's Yourdon Structured Method , Michael A. Jackson's Jackson Structured Programming , and Tom DeMarco's structured analysis . The names "Structured Systems Analysis and Design Method" and "SSADM" are registered trademarks of the Office of Government Commerce (OGC), which is an office of the United Kingdom's Treasury. [ 1 ] The principal stages in the development of the Structured Systems Analysis and Design Method were: [ 2 ] The three most important techniques that are used in SSADM are as follows: The SSADM method involves the application of a sequence of analysis, documentation and design tasks concerned with the following. In order to determine whether or not a given project is feasible, there must be some form of investigation into the goals and implications of the project. For very small scale projects this may not be necessary at all, as the scope of the project is easily understood. In larger projects, a feasibility study may be carried out only informally, either because there is no time for a formal study or because the project is a "must-have" and will have to be done one way or the other. A data flow diagram is used to describe how the current system works and to visualize the known problems. When a feasibility study is carried out, there are four main areas of consideration: Technical – is the project technically possible? Financial – can the business afford to carry out the project? Organizational – will the new system be compatible with existing practices? Ethical – is the impact of the new system socially acceptable? To answer these questions, the feasibility study is effectively a condensed version of a comprehensive systems analysis and design. The requirements and usages are analyzed to some extent, some business options are drawn up, and even some details of the technical implementation are considered. The product of this stage is a formal feasibility study document. SSADM specifies the sections that the study should contain, including any preliminary models that have been constructed and also details of rejected options and the reasons for their rejection. The developers of SSADM understood that in almost all cases there is some form of current system, even if it is entirely composed of people and paper. Through a combination of interviewing employees, circulating questionnaires, observations and existing documentation, the analyst comes to a full understanding of the system as it is at the start of the project. This serves many purposes. Having investigated the current system, the analyst must decide on the overall design of the new system. To do this, he or she, using the outputs of the previous stage, develops a set of business system options. 
These are different ways in which the new system could be produced varying from doing nothing to throwing out the old system entirely and building an entirely new one. The analyst may hold a brainstorming session so that as many and various ideas as possible are generated. The ideas are then collected to options which are presented to the user. The options consider the following: Where necessary, the option will be documented with a logical data structure and a level 1 data-flow diagram. The users and analyst together choose a single business option. This may be one of the ones already defined or may be a synthesis of different aspects of the existing options. The output of this stage is the single selected business option together with all the outputs of the feasibility stage. This is probably the most complex stage in SSADM. Using the requirements developed in stage 1 and working within the framework of the selected business option, the analyst must develop a full logical specification of what the new system must do. The specification must be free from error, ambiguity and inconsistency. By logical, we mean that the specification does not say how the system will be implemented but rather describes what the system will do. To produce the logical specification, the analyst builds the required logical models for both the data-flow diagrams (DFDs) and the Logical Data Model (LDM), consisting of the Logical Data Structure (referred to in other methods as entity relationship diagrams ) and full descriptions of the data and its relationships. These are used to produce function definitions of every function which the users will require of the system, Entity Life-Histories (ELHs) which describe all events through the life of an entity, and Effect Correspondence Diagrams (ECDs) which describe how each event interacts with all relevant entities. These are continually matched against the requirements and where necessary, the requirements are added to and completed. The product of this stage is a complete requirements specification document which is made up of: This stage is the first towards a physical implementation of the new system application. Like the Business System Options, in this stage a large number of options for the implementation of the new system are generated. This is narrowed down to two or three to present to the user from which the final option is chosen or synthesized. However, the considerations are quite different being: All of these aspects must also conform to any constraints imposed by the business such as available money and standardization of hardware and software. The output of this stage is a chosen technical system option. Though the previous level specifies details of the implementation, the outputs of this stage are implementation-independent and concentrate on the requirements for the human computer interface. The logical design specifies the main methods of interaction in terms of menu structures and command structures. One area of activity is the definition of the user dialogues. These are the main interfaces with which the users will interact with the system. Other activities are concerned with analyzing both the effects of events in updating the system and the need to make inquiries about the data on the system. Both of these use the events, function descriptions and effect correspondence diagrams produced in stage 3 to determine precisely how to update and read data in a consistent and secure way. 
The product of this stage is the logical design, which is made up of: This is the final stage, where all the logical specifications of the system are converted to descriptions of the system in terms of real hardware and software. This is a very technical stage and only a simple overview is presented here. The logical data structure is converted into a physical architecture in terms of database structures. The exact structure of the functions and how they are implemented is specified. The physical data structure is optimized where necessary to meet size and performance requirements. The product is a complete physical design that could tell software engineers how to build the system in specific details of hardware and software and to the appropriate standards.
https://en.wikipedia.org/wiki/Structured_systems_analysis_and_design_method
The structured what-if technique ( SWIFT ) is a prospective hazards analysis method that uses structured brainstorming with guidewords and prompts to identify risks, [ 1 ] with the aim of being quicker than more intensive methods like failure mode and effects analysis (FMEA). [ 2 ] [ 3 ] It is used in various settings, including healthcare. [ 1 ] [ 2 ] [ 3 ] [ 4 ] As with other methods, SWIFT may not be comprehensive and the approach has some limitations. In a healthcare context, SWIFT was found to reveal significant risks, but like similar methods (including healthcare failure mode and effects analysis ) it may have limited validity when used in isolation. [ 2 ] This engineering-related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Structured_what-if_technique
Structures for lossless ion manipulations ( SLIM ) are a form of ion optics to which various radio frequency and dc electric potentials can be applied and used to enable a broad range of ion manipulations, such as separations based upon ion mobility spectrometry , reactions (unimolecular, ion-molecule, and ion-ion), and storage (i.e. ion trapping ). [ 1 ] SLIM were developed by Richard D. Smith and coworkers at Pacific Northwest National Laboratory (PNNL) and are generally fabricated from arrays of electrodes on evenly spaced planar surfaces. [ 2 ] In 2017, Erin S. Baker , Sandilya Garimella, Yehia Ibrahim, Richard D. Smith and Ian Webb from the Interactive Omics Group of PNNL received the R&D 100 Award for the development of SLIM. [ 3 ] [ 4 ] In SLIM, ions move in the space between the two surfaces, in directions controlled using electric fields, and can also be moved between the different levels of multi-level SLIM, which can be constructed from a stack of printed circuit boards (PCBs). The lossless nature of SLIM is derived from the use of rf electric fields, and particularly from the pseudopotential arising from the inhomogeneous electric fields created when rf of appropriate frequency is applied to multiple adjacent electrodes; this pseudopotential serves to prevent ions from closely approaching the electrodes and surfaces, where loss would conventionally be expected. SLIM are generally used in conjunction with mass spectrometry for analytical applications. The first SLIM were fabricated using PCB technology to demonstrate a range of simple ion manipulations in gases at low pressures (a few torr). [ 5 ] This SLIM technology has conceptual similarities with integrated electronic circuits, but instead of moving electrons, electric fields are used to create pathways, switches, etc. to manipulate ions in the gas phase. SLIM devices can enable complex sequences of ion separations, transfers and trapping to occur in the space between two closely positioned surfaces (e.g., ~4 mm apart), each patterned with conductive electrodes. The SLIM devices use the inhomogeneous electric fields created by arrays of closely spaced electrodes to which readily generated peak-to-peak RF voltages (e.g., Vp-p ~ 100 V; ~ 1 MHz) are applied with opposite polarity on adjacent electrodes, creating effective potential fields that prevent ions from approaching the surfaces. The operating pressure for SLIM devices was initially reported to be in the 1-10 torr range, which allows ions to be effectively confined using the RF potentials described above. At higher pressures, the capacity to confine ions diminishes without additional forces being placed on the ion populations. The confinement functions over a range of pressures (<0.1 torr to ~50 torr), and over an adjustable mass-to-charge ratio ( m/z ) range (e.g., m/z 200 to >2000). This effective potential works in conjunction with DC potentials applied to side electrodes to prevent ion losses, and allows the creation of ion traps and conduits in the gap between the two surfaces for the effectively lossless storage and movement of ions as a result of any gradient in the applied DC fields. The two mirrored halves of a SLIM system are shown in the example to the left. Compared to the longer pathlength systems developed at PNNL, this board is considerably shorter but serves as a rapid prototype. [ 6 ] When folded together and spaced ~3 mm apart, the co-planar electrode surfaces create the fields needed for ion confinement and separation.
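The "effective potential" referred to above is usually described with the standard RF (Dehmelt) pseudopotential. The following generic expression is not specific to any particular SLIM electrode geometry, but it shows why confinement weakens for heavier ions or lower RF frequency:

```latex
% Generic RF pseudopotential (ponderomotive approximation) for an ion of charge q and mass m
% in an inhomogeneous RF field of local amplitude E_0(x) and angular frequency Omega.
% Given only as an illustration of the confinement idea, not as a SLIM-specific design equation.
\[
  V^{*}(\mathbf{x}) \;=\; \frac{q\,E_{0}(\mathbf{x})^{2}}{4\,m\,\Omega^{2}},
  \qquad
  U^{*}(\mathbf{x}) \;=\; q\,V^{*}(\mathbf{x}) \;=\; \frac{q^{2}\,E_{0}(\mathbf{x})^{2}}{4\,m\,\Omega^{2}}
\]
% E_0 grows rapidly close to the RF electrodes, so the pseudopotential forms a barrier near each
% surface that keeps ions in the gap between the two boards.
```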
https://en.wikipedia.org/wiki/Structures_for_lossless_ion_manipulations
The structure–activity relationship ( SAR ) is the relationship between the chemical structure of a molecule and its biological activity . This idea was first presented by Alexander Crum Brown and Thomas Richard Fraser at least as early as 1868. [ 1 ] [ 2 ] The analysis of SAR enables the determination of the chemical group responsible for evoking a target biological effect in the organism. This allows modification of the effect or the potency of a bioactive compound (typically a drug) by changing its chemical structure. Medicinal chemists use the techniques of chemical synthesis to insert new chemical groups into the biomedical compound and test the modifications for their biological effects. This method was refined to build mathematical relationships between the chemical structure and the biological activity, known as quantitative structure–activity relationships (QSAR). A related term is structure affinity relationship (SAFIR). The large number of synthetic organic chemicals currently in production presents a major challenge for timely collection of detailed environmental data on each compound. The concept of structure biodegradability relationships (SBR) has been applied to explain variability in persistence among organic chemicals in the environment. Early attempts generally consisted of examining the degradation of a homologous series of structurally related compounds under identical conditions with a complex "universal" inoculum , typically derived from numerous sources. [ 3 ] This approach revealed that the nature and positions of substituents affected the apparent biodegradability of several chemical classes, with resulting general themes, such as halogens generally conferring persistence under aerobic conditions. [ 4 ] Subsequently, more quantitative approaches have been developed using principles of QSAR and often accounting for the role of sorption (bioavailability) in chemical fate. [ 5 ]
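In the simplest (Hansch-type) case, the quantitative structure–activity relationships mentioned above are linear regressions of an activity measure on computed molecular descriptors. A minimal, hypothetical sketch - descriptor choice, data curation, and validation are the real work and are not shown here:

```python
import numpy as np

def fit_linear_qsar(descriptors, activity):
    """Ordinary least-squares fit of activity ~ descriptors + intercept.

    descriptors: (n_compounds, n_descriptors) array, e.g. logP, molar refractivity, ...
    activity:    (n_compounds,) array, e.g. log(1/C) for some measured potency C.
    Returns (coefficients, intercept). Real QSAR work adds descriptor selection,
    cross-validation, and applicability-domain checks, none of which are shown.
    """
    X = np.asarray(descriptors, dtype=float)
    y = np.asarray(activity, dtype=float)
    A = np.hstack([X, np.ones((X.shape[0], 1))])  # append an intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[:-1], coef[-1]
```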
https://en.wikipedia.org/wiki/Structure–activity_relationship
In crystallography , a Strukturbericht designation or Strukturbericht type is a system of detailed crystal structure classification by analogy to another known structure. The designations were intended to be comprehensive but are mainly used as a supplement to space-group designations of crystal structures, especially historically. [ 1 ] [ 2 ] Each Strukturbericht designation is described by a single space group , but the designation includes additional information about the positions of the individual atoms, rather than just the symmetry of the crystal structure. While Strukturbericht symbols exist for many of the earliest observed and most common crystal structures, the system is not comprehensive, and is no longer being updated. Modern databases such as the Inorganic Crystal Structure Database index thousands of structure types directly by the prototype compound (i.e. "the NaCl structure" instead of "the B1 structure"). [ 3 ] These are essentially equivalent to the old Strukturbericht designations. The designations were established by the journal Zeitschrift für Kristallographie – Crystalline Materials , which published its first round of supplemental reviews under the name Strukturbericht from 1913 to 1928. [ 4 ] These reports were collected into a book published in 1931 by Paul Peter Ewald and Carl Hermann, which became Volume 1 of Strukturbericht . [ 5 ] While the series was continued after the war under the name Structure Reports , which was published through 1990, [ 6 ] the series stopped generating new symbols. Instead, some additional designations were given in books by Smithells [ 7 ] and Pearson. [ 8 ] For the first volume, the designation consisted of a capital letter (A, B, C, D, E, F, G, H, L, M, O) specifying a broad category of compounds, and then a number to specify a particular crystal structure. In the second volume, subscript numbers were added, some early symbols were modified (e.g. what was initially D1 became D0 1 , noted in the tables below as "D1 → D0 1 "), and the categories were modified (types I, K, S were added). In the third volume, the class I was renamed J. Later designations began to use a lower-case letter in subscripts as well. [ 9 ] The 'A' compounds are reserved for structures made up of atoms of all the same chemical element . 'B' designates compounds of two elements with equal numbers of atoms. 'C' designates compounds of the stoichiometry AB 2 . 'D' designates compounds of arbitrary stoichiometry. Originally, D1-D10 were set aside for stoichiometry AB 3 , D11-D20 for stoichiometry AB n for n > 3, D31-D50 for (AB n ) 2 , and D51 and up for A m B n with arbitrary m and n. [ 9 ] Letters between 'E' and 'K' designate more complex compounds. 'L' designates intermetallic compounds .
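Because each designation is essentially a label for a prototype structure, software often treats the system as a simple lookup table. An illustrative sketch with a few of the best-known assignments (the historical reports define many more symbols; this table is not exhaustive):

```python
# A few widely used Strukturbericht designations and their prototype structures.
# Illustrative only; the historical reports and later compilations define many more symbols.
STRUKTURBERICHT_PROTOTYPES = {
    "A1":   ("Cu",    "face-centred cubic"),
    "A2":   ("W",     "body-centred cubic"),
    "A3":   ("Mg",    "hexagonal close-packed"),
    "A4":   ("C",     "diamond cubic"),
    "B1":   ("NaCl",  "rock salt"),
    "B2":   ("CsCl",  "caesium chloride"),
    "B3":   ("ZnS",   "zinc blende"),
    "C1":   ("CaF2",  "fluorite"),
    "L1_0": ("CuAu",  "tetragonal ordered alloy"),
    "L1_2": ("Cu3Au", "cubic ordered alloy"),
}

def prototype(designation: str) -> str:
    """Return a human-readable description, e.g. prototype('B1') -> 'NaCl (rock salt)'."""
    compound, name = STRUKTURBERICHT_PROTOTYPES[designation]
    return f"{compound} ({name})"
```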
https://en.wikipedia.org/wiki/Strukturbericht_designation
Strut channel , often referred to colloquially by one of several manufacturer trade names, is a standardized formed structural system used in the construction and electrical industries for light structural support, often for supporting wiring, plumbing, or mechanical components such as air conditioning or ventilation systems. A strut is usually formed from a metal sheet, folded over into an open channel shape with inwards-curving lips to provide additional stiffness and as a location to mount interconnecting components. Increasingly, struts are also constructed from fiberglass, a highly corrosion -resistant material known for its light weight, strength, and rigidity. [ 1 ] Struts usually have holes of some sort in the base, to facilitate interconnection or fastening strut to underlying building structures. The main advantage of strut channels in construction is that there are many options available for rapidly and easily connecting lengths together and other items to the strut channel, using various specialized strut-specific fasteners and bolts . They can be assembled very rapidly with minimal tools and only moderately trained labor, which reduces costs significantly in many applications. A strut channel installation can also often be modified or added to relatively easily if needed. The only alternative to strut channels for most applications is custom fabrication using steel bar stock and other commodity components, requiring welding or extensive drilling and bolting, which has none of the above advantages. The basic typical strut channel forms a box measuring about 1 + 5 ⁄ 8 inches (41 mm) square. There are several additional sizes and combined shapes manufactured. Basic strut channel comes in the open box section 1 + 5 ⁄ 8 in (41 mm) square. A half height 1 + 5 ⁄ 8 in × 13 ⁄ 16 in (41 mm × 21 mm) version is also available, used mostly where it is mounted directly to a wall, as it has significantly less stiffness and less ability to carry loads across an open space or act as a brace. A deep channel 1 + 5 ⁄ 8 in × 2 + 7 ⁄ 16 in (41 mm × 62 mm) version is also manufactured. The material used to form the channel is typically sheet metal with a thickness of about 2.7 mm or 1.9 mm (12 or 14 gauge ; 0.1046 inch or 0.0747 inch, respectively). [ 2 ] Several variations are available with different hole patterns for mounting to walls and supports. Solid channel has no holes predrilled, and must either be drilled on site or mounted in another fashion. Punched channel has round holes, large enough for an M16 or 5/8 inch threaded steel rod or bolts , punched in the top of the channel at regular 48 mm (1 7/8 inch) centers. Half-slot channel has short, rounded-end rectangular slots punched out on 50 mm (2 inch) centers. Slot channel has longer slots on 100 mm (4 inch) centers. In metric system based products, the eyelets are about 11 × 13 mm. In addition, shapes are manufactured with two lengths of channel welded together back to back, or three or four welded together in various patterns, to form stronger structural elements. Strut is normally made of sheet steel , with a zinc coating ( galvanized ), paint , epoxy , powder coat , or other finish. Strut channel is also manufactured from stainless steel for use where rusting might become a problem (e.g., outdoors, facilities with corrosive materials), from aluminium alloy when weight is an issue, or from fiberglass for very corrosive environments. [ 3 ]
The Metal Framing Manufacturers Association (MFMA) defines a standard for strut channel construction that allows multiple manufacturers' channels to be compatible. The current version of the standard, as of 2020, is MFMA-4. [ 4 ] Well-known manufacturers of strut channel, including Unistrut U.S., Cooper Industries/Eaton Corporation , and Thomas & Betts Corp. /ABB Group, are members of the MFMA and defined the standard. [ 5 ] The inwards-facing lips on the open side of strut channel are routinely used to mount special nuts, braces, connecting angles, and other types of interconnection mechanism or device to join lengths of strut channel together or connect pipes, wire, other structures, threaded rod, bolts, or walls into the strut channel structural system. Strut channel is used to mount, brace, support, and connect lightweight structural loads in building construction. These include pipes, electrical and data wiring, and mechanical systems such as ventilation and air conditioning. Objects can be attached to the strut channel with a bolt threaded into a channel nut, which may have a spring to ease installation. Circular objects such as pipes or cables may be attached with straps that have a shaped end to be retained by the channel. Strut channel is also used for other applications that require a strong framework, such as workbenches, shelving systems, equipment racks, etc. Specially made sockets are available to tighten nuts, bolts, etc. inside the channel, as normal sockets are unable to fit through the opening.
https://en.wikipedia.org/wiki/Strut_channel
Strychnine total synthesis in chemistry describes the total synthesis of the complex biomolecule strychnine . The first reported method by the group of Robert Burns Woodward in 1954 is considered a classic in this research field. [ 2 ] [ 3 ] [ 4 ] [ 5 ] At the time it formed the natural conclusion to an elaborate process of molecular structure elucidation that started with the isolation of strychnine from the beans of Strychnos ignatii by Pierre Joseph Pelletier and Joseph Bienaimé Caventou in 1818. [ 6 ] Major contributors to the entire effort were Sir Robert Robinson with over 250 publications and Hermann Leuchs with another 125 papers in a time span of 40 years. Robinson was awarded the Nobel Prize in Chemistry in 1947 for his work on alkaloids, strychnine included. The process of chemical identification was completed with publications in 1946 by Robinson [ 7 ] [ 8 ] [ 9 ] and later confirmed by Woodward in 1947. [ 10 ] X-ray structures establishing the absolute configuration became available between 1947 and 1951 with publications from Johannes Martin Bijvoet [ 11 ] [ 12 ] and J. H. Robertson. [ 13 ] [ 14 ] Woodward published a very brief account of the strychnine synthesis in 1954 (just 3 pages) [ 15 ] and a lengthy one (42 pages) in 1963. [ 16 ] Many more methods have since been reported by the research groups of Magnus, [ 17 ] Overman, [ 18 ] Kuehne, [ 19 ] [ 20 ] Rawal, [ 21 ] Bosch, [ 22 ] [ 23 ] Vollhardt, [ 24 ] [ 25 ] Mori, [ 26 ] [ 27 ] Shibasaki, [ 28 ] Li, [ 29 ] Fukuyama, [ 30 ] Vanderwal [ 31 ] and MacMillan. [ 32 ] Synthetic (+)-strychnine is also known. [ 33 ] [ 34 ] Racemic syntheses were published by Padwa in 2007 [ 35 ] and in 2010 by Andrade [ 36 ] and by Reissig. [ 37 ] In his 1963 publication Woodward quoted Sir Robert Robinson, who said [ 38 ] that for its molecular size it is the most complex substance known . The C 21 H 22 N 2 O 2 strychnine molecule contains 7 rings including an indoline system. It has a tertiary amine group, an amide , an alkene and an ether group. The naturally occurring compound is also chiral with 6 asymmetric carbon atoms including one quaternary one. The synthesis of ring II was accomplished with a Fischer indole synthesis using phenylhydrazine 1 and acetophenone derivative acetoveratrone 2 (catalyst: polyphosphoric acid ) to give the 2-veratrylindole 3 . The veratryl group not only blocks the 2-position for further electrophilic substitution but will also become part of the strychnine skeleton. A Mannich reaction with formaldehyde and dimethylamine produced gramine 4 . Alkylation with iodomethane gave an intermediate quaternary ammonium salt which reacted with sodium cyanide in a nucleophilic substitution to nitrile 5 , which was then reduced with lithium aluminium hydride to tryptamine 6 . Amine-carbonyl condensation with ethyl glyoxylate gave the imine 7 . The reaction of this imine with TsCl in pyridine to the ring-closed N-tosyl compound 8 was described by Woodward as a concerted nucleophilic enamine attack and formally a Pictet–Spengler reaction . This compound should form as a diastereomeric pair, but only one diastereomer was found, although which one was not investigated. Finally the newly formed double bond was reduced by sodium borohydride to indoline 9 , with the C8 hydrogen atom approaching from the least hindered side (this proton is removed later on in the sequence and is of no importance).
Indoline 9 was acetylated to N-acetyl compound 10 ( acetic anhydride , pyridine ) and the veratryl group was then ring-opened with ozone in aqueous acetic acid to muconic ester 11 (made possible by the two electron-donating methoxy groups). This is an example of bioinspired synthesis already proposed by Woodward in 1948. [ 39 ] Cleavage of the acetyl group and ester hydrolysis with HCl in methanol resulted in formation of pyridone ester 12 with additional isomerization of the exocyclic double bond to an endocyclic double bond (destroying one asymmetric center). Subsequent treatment with hydrogen iodide and red phosphorus removed the tosyl group and hydrolysed both remaining ester groups to form diacid 13 . Acetylation and esterification ( diazomethane ) produced acetyl diester 14 which was then subjected to a Dieckmann condensation with sodium methoxide in methanol to enol 15 . In order to remove the C15 alcohol group, enol 15 was converted to tosylate 16 ( TsCl , pyridine ) and then to mercaptoester 17 (sodium benzylmercaptide), which was then reduced to unsaturated ester 18 by Raney nickel and hydrogen . Further reduction with hydrogen / palladium on carbon afforded the saturated ester 19 . Alkaline ester hydrolysis to carboxylic acid 20 was accompanied by epimerization at C14. This particular compound was already known from strychnine degradation studies. Until now all intermediates were racemic, but chirality was introduced at this particular stage via chiral resolution using quinidine . The C20 carbon atom was then introduced by acetic anhydride to form enol acetate 21 and the free aminoketone 22 was obtained by hydrolysis with hydrochloric acid . Ring VII in intermediate 23 was closed by selenium dioxide oxidation, a process accompanied by epimerization again at C14. The formation of 21 can be envisioned as a sequence of acylation, deprotonation, rearrangement with loss of carbon dioxide and again acylation: To diketone 23 , sodium acetylide ( alkynylation ) was added (bringing in carbon atoms 22 and 23) to give alkyne 24 . This compound was reduced to the allyl alcohol 25 using the Lindlar catalyst , and lithium aluminium hydride removed the remaining amide group to give 26 . An allylic rearrangement to alcohol 27 (isostrychnine) was brought about by hydrogen bromide in acetic acid followed by hydrolysis with sulfuric acid . In the final step to (−)-strychnine 28 , treatment of 27 with ethanolic potassium hydroxide caused rearrangement of the C12-13 double bond and ring closure in a conjugate addition by the hydroxyl anion. In this effort one of strychnine's many degradation products was synthesised first (the relay compound), a compound also available in several steps from another degradation product called the Wieland-Gumlich aldehyde . In the final leg strychnine itself was synthesised from the relay compound. The Overman synthesis (1993) took a chiral cyclopentene compound as starting material, obtained by enzymatic hydrolysis of cis -1,4-diacetoxycyclopent-2-ene. This starting material was converted in several steps to trialkylstannane 2 which was then coupled with an aryl iodide 1 in a Stille reaction in presence of carbon monoxide ( tris(dibenzylideneacetone)dipalladium(0) , triphenylarsine ). The internal double bond in 3 was converted to an epoxide using tert -butyl hydroperoxide , the carbonyl group was then converted to an alkene in a Wittig reaction using Ph 3 P=CH 2 and the TIPS group was hydrolyzed ( TBAF ) and replaced by a trifluoroacetamide group (NH 2 COCF 3 , NaH ) in 4 .
Cyclization (NaH) took place next, opening the epoxide ring, and the trifluoroacetyl group was removed using KOH, affording azabicyclooctane 5 . The key step was an aza- Cope - Mannich reaction initiated by an amine-carbonyl condensation using formaldehyde and forming 6 in a quantitative yield: In the final sequence strychnine was obtained through the Wieland-Gumlich aldehyde ( 10 ): Intermediate 6 was acylated using methyl cyanoformate and two protective groups ( tert-butyl and ) were removed using HCl / MeOH in 7 . The C8–C13 double bond was reduced with zinc (MeOH/H + ) to saturated ester 8 (mixture). Epimerization at C13 with sodium methoxide in MeOH produced beta-ester 9 which was reduced with diisobutylaluminium hydride to Wieland-Gumlich aldehyde 10 . Conversion of this compound with malonic acid to (−)-strychnine 11 was already known as a procedure. The 1993 Kuehne synthesis concerns racemic strychnine. Starting compounds tryptamine 1 and 4,4-dimethoxy acrolein 2 were reacted together with boron trifluoride to acetal 3 as a single diastereomer in an amine-carbonyl condensation / sigmatropic rearrangement sequence. Hydrolysis with perchloric acid afforded aldehyde 4 . A Johnson–Corey–Chaykovsky reaction ( trimethylsulfonium iodide / n-butyllithium ) converted the aldehyde into an epoxide which reacted in situ with the tertiary amine to ammonium salt 5 (contaminated with other cyclization products). Reduction ( palladium on carbon / hydrogen ) removed the benzyl group to alcohol 6 ; further reduction ( sodium cyanoborohydride ) and acylation ( acetic anhydride / pyridine ) produced 7 as a mixture of epimers (at C17). Ring closure of ring III to 8 was then accomplished with an aldol reaction using lithium bis(trimethylsilyl)amide (using only the epimer with the correct configuration). Further reduction ( sodium borohydride ) and acylation resulted in epimeric di-acetate 9 . A DBU mediated elimination reaction formed olefinic alcohol 10 and subsequent Swern oxidation gave an unstable amino ketone 11 . In the final steps a Horner–Wadsworth–Emmons reaction ( methyl 2-(diethylphosphono)acetate ) gave acrylate ester 12 as a mixture of cis and trans isomers, which could be coaxed into the right (trans) direction by application of light in a photochemical rearrangement ; the ester group was reduced ( DIBAL / boron trifluoride ) to isostrychnine 13 and racemic strychnine 14 was formed by base-catalyzed ring closure as in the Woodward synthesis. In the 1998 Kuehne synthesis of chiral (−)-strychnine the starting material was derived from chiral tryptophan . In the Rawal synthesis (1994, racemic) amine 1 and enone 2 were combined in an amine-carbonyl condensation followed by a methyl chloroformate quench to triene 3 , which was then reacted in a Diels–Alder reaction (benzene, 185 °C) to hexene 4 . The three ester groups were hydrolyzed using iodotrimethylsilane forming pentacyclic lactam 5 after a methanol quench in a combination of 7 reaction steps (one of them a Dieckmann condensation ). The C 4 segment 6 was added in an amine alkylation , and a Heck reaction of 7 formed isostrychnine 8 after TBS deprotection. The overall yield (10%) is to date the largest of any of the published methods. [ 40 ] In the Bosch synthesis (1999, chiral) the olefin group in dione 1 was converted to an aldehyde by ozonolysis and chiral amine 2 was formed in a double reductive amination with ( S )-1- phenethylamine .
The phenylethyl substituent was removed using ClCO 2 CHClCH 3 and the enone group was introduced in a Grieco elimination using TMSI and HMDS , then PhSeCl , then ozone and then diisopropylamine , forming carbamate 3 . The amino group was deprotected by refluxing in methanol and then alkylated using ( Z )-BrCH 2 C(I)=CHCH 2 OTBDMS to tertiary amine 4 . A reductive Heck reaction took place next, followed by methoxycarbonylation (LiHMDS, NCCO 2 Me) to tricycle 5 . Reaction with zinc dust in 10% sulfuric acid removed the TBDMS protective group , reduced the nitro group and brought about a reductive amino-carbonyl cyclization in a single step to tetracyclic 6 (epimeric mixture). In the final steps to the Wieland-Gumlich aldehyde 7 , reaction with NaH in MeOH afforded the correct epimer and was followed by DIBAH reduction of the methyl ester. The key reaction in the Vollhardt synthesis (2000, racemic) was an alkyne trimerisation of tryptamine derivative 1 with acetylene and organocobalt compound CpCo(C 2 H 4 ) 2 (THF, 0 °C) to tricycle 2 after deprotection of the amine group (KOH, MeOH/H 2 O reflux). Subsequent reaction with iron nitrate brought about a [1,8]- conjugate addition to tetracycle 3 ; amine alkylation with ( Z )-1-bromo-4-[(tert-butyldimethylsilyl)oxy]-2-iodobut-2-ene (see Rawal synthesis) and lithium carbonate , and isomerization of the diene system (NaOiPr, iPrOH), formed enone 4 . A Heck reaction as in the Rawal synthesis ( palladium acetate / triphenylphosphine ), accompanied by aromatization, formed pyridone 5 , and lithium aluminium hydride reduction and TBS group deprotection formed isostrychnine 6 . The Mori synthesis ((−) chiral, 2003) was the first one containing an asymmetric reaction step . It also features a large number of Pd catalyzed reactions. In it N-tosyl amine 1 reacted with allyl carbonate 2 in an allylic asymmetric substitution using Pd 2 (dba) 3 and asymmetric ligand (S)-BINAPO to chiral secondary amine 3 . Desilylation of the TBDMS group by HCl gave the alcohol, which was converted to the nitrile 4 ( NaCN ) through the bromide ( PBr 3 ). Heck reaction ( Pd(OAc) 2 / Me 2 PPh ) and debromination ( Ag 2 CO 3 ) afforded tricycle 5 . Nitrile reduction to the amine ( LiAlH 4 ) and its protection as Boc amine 6 ( Boc 2 O ) were then followed by a second allylic oxidation ( Pd(OAc) 2 / AcOH / benzoquinone / MnO 2 ) to tetracycle 7 . Hydroboration-oxidation ( 9-BBN / H 2 O 2 ) gave alcohol 8 and subsequent Swern oxidation gave ketone 9 . Reaction with LDA / PhNTf 2 gave enol triflate 10 and the triflate group was removed to give alkene 11 by reaction with Pd(OAc) 2 and PPh 3 . Detosylation of 11 ( sodium naphthalenide ) and amidation with the acid chloride 3-bromoacryloyl chloride gave amide 12 , and another Heck reaction gave pentacycle 13 . Double bond isomerization ( sodium / iPrOH ), Boc group deprotection ( triflic acid ) and amine alkylation with ( Z )-BrCH 2 C(I)=CHCH 2 OTBDMS (see Rawal) gave compound 14 (identical to one of the Vollhardt intermediates). A final Heck reaction ( 15 ) and TBDMS deprotection formed (−)-isostrychnine 16 . The Shibasaki synthesis ((−) chiral, 2002) was a second published method in strychnine total synthesis using an asymmetric reaction step . Cyclohexenone 1 was reacted with dimethyl malonate 2 in an asymmetric Michael reaction using AlLi-bis(binaphthoxide) to form chiral diester 3 . Its ketone group was protected as an acetal (2-ethyl-2-methyl-1,3-dioxolane, TsOH ) and a carboxyl group was removed ( LiCl , DMSO, 140 °C) to give monoester 4 .
A C2 fragment was added as Weinreb amide 5 to form PMB ether 6 using LDA . The ketone was then reduced to the alcohol ( NaBH 3 CN , TiCl 4 ) and then water was eliminated ( DCC , CuCl ) to form alkene 7 . After ester reduction ( DIBAL ) to the alcohol and its TIPS protection ( TIPSOTf , triethylamine ), the acetal group was removed (catalytic CSA ) to give ketone 8 . Enone 9 was then formed by Saegusa oxidation . The conversion to alcohol 10 was accomplished via a Mukaiyama aldol addition using formaldehyde ; iodination to 11 ( iodine , DMAP ) was followed by a Stille coupling ( Pd 2 dba 3 , Ph 3 As , CuI ) incorporating nitrobenzene unit 12 . Alcohol 13 was formed after SEM protection (SEMCl, i-Pr 2 NEt) and TIPS removal ( HF ). In the second part of the sequence alcohol 13 was converted to a triflate ( triflic anhydride , N , N -diisopropylethylamine ), then 2,2-bis(ethylthio)ethylamine 14 was added, immediately followed by zinc powder, setting off a tandem reaction with nitro group reduction to the amine, 1,4-addition of the thio-amine group and amine-ketone condensation to indole 16 . Reaction with DMTSF gave thionium attack at C7 forming 17 ; the imine group was then reduced ( NaBH 3 CN , TiCl 4 ), the new amino group acylated ( acetic anhydride , pyridine ), both alcohol protecting groups removed ( NaOMe / MeOH) and the allyl alcohol group protected again (TIPS). This allowed removal of the ethylthio group ( NiCl 2 , NaBH 4 , EtOH/MeOH) to 18 . The alcohol was oxidized to the aldehyde using a Parikh-Doering oxidation and TIPS group removal gave hemiacetal 19 , called (+)-diaboline , which is the acylated Wieland-Gumlich aldehyde . The synthesis reported by Bodwell/Li (racemic, 2002) was a formal synthesis as it produced a compound already prepared by Rawal (no. 5 in the Rawal synthesis). The key step was an inverse electron demand Diels–Alder reaction of cyclophane 1 by heating in N , N -diethylaniline (dinitrogen is expelled), followed by reduction of the double bond in 2 to 3 by sodium borohydride / triflic acid and removal of the carbamate protecting group ( PDC / celite ) to 4 . The method is disputed by Reissig (see Reissig synthesis). The Fukuyama synthesis (chiral (−), 2004) started from cyclic amine 1 . Chirality was at some point introduced into this starting material by enzymatic resolution of one of the precursors. Acyloin 2 was formed by Rubottom oxidation and hydrolysis. Oxidative cleavage by lead tetraacetate formed aldehyde 3 ; removal of the nosyl group ( thiophenol / cesium carbonate ) triggered an amine-carbonyl condensation with iminium ion 4 continuing to react in a transannular cyclization to diester 5 , which could be converted to the Wieland-Gumlich aldehyde by known chemistry. The method reported by Beemelmanns & Reissig (racemic, 2010) is another formal synthesis leading to the Rawal pentacycle (see amine 5 in the Rawal method). In this method indole 1 was converted to tetracycle 2 (together with a by-product) in a single cascade reaction using samarium diiodide and HMPA . [ 41 ] Raney nickel / H 2 reduction gave amine 3 , and a one-pot reaction using methyl chloroformate , DMAP and TEA, then MsCl , DMAP and TEA, and then DBU gave Rawal precursor 4 with the key hydrogen atoms in the desired anti configuration. In an aborted route intermediate 2 was first reduced to imine 5 , then converted to carbamate 6 , then dehydrated to diene 7 ( Burgess reagent ) and finally reduced to 8 ( sodium cyanoborohydride ).
The hydrogen atoms in 8 are in an undesired cis relationship, which contradicts the results obtained in 2002 by Bodwell/Li for the same reaction. In 2011, the Vanderwal group reported a concise total synthesis of strychnine with a longest linear sequence of only six steps. [ 42 ] It featured a Zincke aldehyde followed by an anionic bicyclization reaction and a tandem Brook rearrangement / conjugate addition .
https://en.wikipedia.org/wiki/Strychnine_total_synthesis
Stryker's reagent ([(PPh 3 )CuH] 6 ), [ 1 ] also known as the Osborn complex , is a hexameric copper hydride ligated with triphenylphosphine . It is a brick red, air-sensitive solid. Stryker's reagent is a mildly hydridic reagent, used in homogeneous catalysis of conjugate reduction reactions of enones, enoates, and related substrates. The compound is prepared by adding sodium trimethoxyborohydride to a solution of [PPh 3 CuCl] 4 in DMF, after which it precipitates out as a DMF complex ([HCu(PPh 3 )] 6 •DMF). [ 2 ] Other more convenient methods have been developed since its discovery. [ 3 ] [ 4 ] In terms of its structure, the compound is an octahedral cluster of Cu(PPh 3 ) centres that are bonded by Cu---Cu and Cu---H interactions. Originally six of the eight faces were thought to be capped by hydride ligands. [ 5 ] This structural assignment was revised in 2014; the hydrides are now best described as edge bridging rather than face bridging. [ 6 ] The compound can effect regioselective conjugate reductions of various carbonyl derivatives including unsaturated aldehydes, ketones, and esters . The reagent was named "Reagent of the Year" in 1991 for its functional group tolerance, high overall efficiency, and mild reaction conditions in these reduction reactions. Stryker's reagent is used in a catalytic amount, being regenerated in situ by a stoichiometric hydride source, often molecular hydrogen or a silane . If stored under an inert atmosphere (e.g., argon, nitrogen) it has an indefinite shelf life. Brief exposure to oxygen does not significantly destroy its activity, although solvents used with Stryker's reagent should be rigorously degassed. [ 7 ] Ligand-modified versions of Stryker's reagent have been reported. By changing the ligand to, e.g., P(O-iPr) 3 the selectivity can be improved significantly. [ 8 ] In addition, Lipshutz et al. have shown that the addition of a bidentate, achiral bis-phosphine ligand on the Cu center allows substrate-to-ligand ratios typically on the order of 1000–10000:1 to be used to afford products in high yields. [ 9 ]
https://en.wikipedia.org/wiki/Stryker's_reagent
Strähle's construction is a geometric method for determining the lengths for a series of vibrating strings with uniform diameters and tensions to sound pitches in a specific rational tempered musical tuning . It was first published in the 1743 Proceedings of the Royal Swedish Academy of Sciences by Swedish master organ maker Daniel Stråhle (1700–1746). The Academy's secretary Jacob Faggot appended a miscalculated set of pitches to the article, and these figures were reproduced by Friedrich Wilhelm Marpurg in Versuch über die musikalische Temperatur in 1776. Several German textbooks published about 1800 reported that the mistake was first identified by Christlieb Benedikt Funk in 1779, but the construction itself appears to have received little notice until the middle of the twentieth century when tuning theorist J. Murray Barbour presented it as a good method for approximating equal temperament and similar exponentials of small roots, and generalized its underlying mathematical principles. It has become known as a device for building fretted musical instruments through articles by mathematicians Ian Stewart and Isaac Jacob Schoenberg , and is praised by them as a unique and remarkably elegant solution developed by an unschooled craftsman. The name "Strähle" used in recent English language works appears to be due to a transcription error in Marpurg's text, where the old-fashioned diacritic raised "e" was substituted for the raised ring. [ 1 ] Daniel P. Stråhle was active as an organ builder in central Sweden in the second quarter of the eighteenth century. He had worked as a journeyman for the important Stockholm organ builder Johan Niclas Cahman and, in 1741, four years after Cahman's death, Stråhle was granted his privilege for organ making. According to the system in force in Sweden at the time a privilege, a granted monopoly which was held by only a few of the most established makers of each type of musical instruments, gave him the legal right to build and repair organs, as well as to train and examine workers, and it also served as a guarantee of the quality of the work and education of the maker. [ 2 ] An organ by him from 1743 is preserved in its original condition at the chapel at Strömsholm Palace ; [ 3 ] he is also known to have made clavichords , and a notable example with an unusual string scale and construction signed by him and dated 1738 is owned by the Stockholm Music Museum . [ 4 ] His apprentices included his nephew Petter Stråhle and Jonas Gren, partners in the famous Stockholm organ builders Gren & Stråhle, [ 5 ] and according to Abraham Abrahamsson Hülphers in his book Historisk Afhandling om Musik och Instrumenter published in 1773, Stråhle himself had studied mechanics (which has been assumed to have included mathematics [ 6 ] ) with Swedish Academy of Science founding member Christopher Polhem . [ 7 ] He died in 1746 at Lövstabruk in northern Uppland. Stråhle published his construction as a "new invention, to determine the Temperament in tuning, for the pitches of the clavichord and similar instruments" in an article that appeared in the fourth volume of the proceedings of the newly formed Royal Swedish Academy of Sciences, which included articles by prominent scholars and Academy members Polhem, Carl Linnaeus , Carl Fredrik Mennander , Augustin Ehrensvärd , and Samuel Klingenstierna . 
According to organologist Eva Helenius, musical tuning was a subject of intense debate in the Academy during the 1740s, [ 8 ] and though Stråhle himself was not a member his was the third article on practical musical topics published by the Academy—the first two were by amateur musical instrument maker, minister, and Academy member Nils Brelin [ 9 ] which related inventions applicable to harpsichords and clavichords. [ 10 ] Stråhle wrote in his article that he had developed the method with "some thought and a great number of attempts" for the purpose of creating a gauge for the lengths of the strings in the temperament which he described as that which made the tempering ("sväfningar") mildest for the ear, as well as comprising the most useful and even arrangement of the pitches. His instructions produce an irregular tuning with a range of tempered intervals similar to better known tunings published during the same period, but he provided no further comments or description about the tuning itself; today it is generally considered to be an approximation of equal temperament . [ 11 ] He also did not elaborate upon any advantages of his construction, which can produce accurate and repeatable results without calculations or measurement with only a straightedge and dividers; he described the construction in only five steps, and it is less iterative than the arithmetic methods described by Dom Bédos de Celles for determining organ pipe lengths in just intonation or by Vincenzo Galilei for determining string fret positions in approximate equal temperament, and than geometrical methods such as those described by Gioseffo Zarlino and Marin Mersenne —all of which are much better known than Stråhle's. Stråhle concluded by stating that he had applied the system to a clavichord, although the tuning as well as the method of determining a set of sounding lengths can be used for many other musical instruments, but there is little evidence showing whether it was put into more widespread practice other than the two examples described in the article, whose whereabouts today are unknown. Stråhle instructed first to draw a line segment QR of a convenient length divided in twelve equal parts, with points labeled I through XIII. QR is then used as the base of an isosceles triangle with sides OQ and OR twice as long as QR , and rays are drawn from vertex O through each of the numbered points on the base. Finally a line is drawn from vertex R at an angle through a point P on the opposite leg of the triangle seven units from Q to a point M , located at twice the distance from R as P . The length of MR gives the length of the lowest sounding pitch, and the length of MP the highest of the string lengths generated by the construction, and the sounding lengths between them are determined by the distances from M to the intersections of MR with lines O I through O XII , at points labeled 1 through 12. Stråhle wrote that he had named the line PR "Linea Musica", which Helenius noted was a term Polhem had used in an undated but earlier manuscript now located at the Linköping Stifts- och Landsbibliotek and which is accompanied by notes from composer and geometer Harald Vallerius (1646–1716) and Stråhle's former employer J. N. Cahman. [ 8 ] Stråhle also showed line segments parallel to MR through points NHS , LYT , and KZV in order to illustrate how, once created, the construction could be scaled to accommodate different starting pitches.
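For readers who want to check the geometry, the following Python sketch reproduces the construction numerically, following the steps just described; the coordinate placement, unit choice, and variable names are our own, and the printed deviations from equal temperament simply illustrate how good the approximation is.

```python
import math

# Strähle's construction, reproduced numerically (a sketch; point names follow
# the description above: Q, R, O, P, M).
Q = (0.0, 0.0)
R = (12.0, 0.0)
h = math.sqrt(24.0**2 - 6.0**2)          # apex height: legs OQ = OR = 24
O = (6.0, h)
P = (Q[0] + 7/24*(O[0]-Q[0]), Q[1] + 7/24*(O[1]-Q[1]))   # 7 units from Q along OQ
M = (R[0] + 2*(P[0]-R[0]), R[1] + 2*(P[1]-R[1]))          # so that MR = 2*PR

def intersect(p1, d1, p2, d2):
    """Intersection of the lines p1 + s*d1 and p2 + u*d2."""
    det = d1[0]*d2[1] - d1[1]*d2[0]
    s = ((p2[0]-p1[0])*d2[1] - (p2[1]-p1[1])*d2[0]) / det
    return (p1[0] + s*d1[0], p1[1] + s*d1[1])

MR = math.dist(M, R)
for k in range(13):                      # division points I..XIII on QR
    D = (float(k), 0.0)
    X = intersect(M, (R[0]-M[0], R[1]-M[1]), O, (D[0]-O[0], D[1]-O[1]))
    ratio = MR / math.dist(M, X)         # frequency ratio relative to the lowest pitch
    cents = 1200 * math.log2(ratio)
    semitone = 12 - k
    print(f"{semitone:2d} semitones: ratio {ratio:.5f}  "
          f"error vs 12-TET {cents - 100*semitone:+.2f} cents")
```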
Stråhle stated at the conclusion of the article that he had implemented the string scale in the highest three octaves of a clavichord, although it is unclear whether this section would have been strung all with the same gauge wire under equal tension like the monochord which he wrote it resembled, and whose construction he described in more detail. He only described an indirect method of setting its tuning, however, requiring that he first establish reference pitches by transferring the corresponding string lengths to the movable bridges on a keyed thirteen string monochord whose open strings had been previously tuned in unison. a The article following Stråhle's was a mathematical treatment of it by Jacob Faggot (1699–1777), then secretary of the Academy of Sciences and future director of the Surveying Office, who in the same volume also contributed articles on a weight measure for lye and methods for calculating the volume of barrels. Faggot was one of the first members of the Academy, and had also been member of a special commission on weights and measures. [ 12 ] He apparently was not a musician, though Helenius described he was interested in musical topics from a mathematical perspective and documented that he periodically came in contact with musical instrument makers through the Academy. [ 13 ] Helenius also presented a theory that Faggot had a more active, if indirect and posthumous influence on the construction of musical instruments in Sweden, claiming that he may have suggested the long tenor strings used in two experimental instruments built by Johan Broman in 1756 which she proposed influenced the type of clavichord built in Sweden in the late eighteenth and early nineteenth centuries. [ 14 ] In his analysis of Stråhle's article Faggot outlined the trigonometric steps he had used to calculate the sounding lengths of the individual pitches, for the purpose of comparing the new tuning produced by Stråhle's method, against a tuning with pure thirds, fourths and fifths (labeled "N. 1" in the table), and equal temperament, which he called only "an older temperament and [which] is introduced in Mr. Mattheson's Critica Musica " ("N. 2"), He intended the resulting set of figures to show whether "the tuning of the pitches, following the previously described invention, satisfies the ear with pleasant sounds and with better evenness, in the Musical pitches on a keyboard instrument, and therefore teaches understanding better can judge than the old and previously known manner of tuning, when the eye can see what the ear hears." b Both articles were reproduced in a German edition of the Academy's proceedings published in 1751, [ 16 ] and a table of Faggot's calculated string lengths was subsequently included by Marpurg on his 1776 Versuch über die musikalische Temperatur , [ 1 ] who wrote that he accepted their accuracy but that rather than accomplishing "Strähle"'s stated goal, the tuning represented an unequal temperament "not even of the tolerable type." c The sounding lengths calculated by Faggot are substantially different from what would be produced according to Stråhle's instructions, a fact which appears to have been first published by Christlieb Benedict Funk in Dissertatio de Sono et Tono in 1779, [ 17 ] and the tuning he created includes intervals tuned outside of the range conventionally used in Western art music. 
Funk is credited with the observation of this discrepancy in Gehler 's Physikalisches Wörterbuch in 1791, [ 18 ] and in Fischer's Physikalisches Wörterbuch in 1804, [ 19 ] and the error was also pointed out by Ernst Chladni in Die Akustik in 1830. [ 20 ] No similar comments appear to have been published in Sweden during the same period. These works report Faggot's mistake as the result of having used a value from the tangent instead of the sine column of the logarithmic tables. The error itself consisted of making the angle of RP about seven degrees too great, which caused the effective length of QP to increase to 8.605. This greatly exaggerated the errors of the temperament compared to the tunings he presented alongside it, although it is not clear whether Faggot observed these apparent defects, as he made no further comments about Stråhle's construction or temperament in the article. The tuning produced following Stråhle's instructions is a rational temperament with a range of fifths from 696 to 704 cents, which is from about one cent flatter than a meantone fifth to two cents sharp of just 3:2; the range of major thirds is from 396 cents to 404 cents, or ten cents sharp of just 5/4 to three cents flat of Pythagorean 81/64. These intervals fall within what is considered to have been acceptable, but there is no distribution of better thirds to more frequently used keys of the kind that characterizes what are today the most popular of the tunings published in the seventeenth and eighteenth centuries, which are known as well temperaments . The best fifth is pure in the key of F♯—or the pitch given by MB —which has a 398 cent third, and the best third is in the key of E, which has a 697 cent fifth; the best combination of the two intervals is in the key of F and the worst combination is in the key of B♭. J. Murray Barbour brought new attention to Stråhle's construction, along with Faggot's treatment of it, in the 20th century. Introduced in the context of Marpurg, he included an overview of it alongside the more famous methods of determining string lengths in his 1951 book Tuning and Temperament , where he characterized the tuning as an "approximation for equal temperament". He also demonstrated how close Stråhle's construction was to the best approximation the method could provide, which reduces the maximum errors in major thirds and fifths by about half a cent and is accomplished by substituting 7.028 for the length of QP . Barbour presented a more complete analysis of the construction in "A Geometrical Approximation to the Roots of Numbers", published six years later in American Mathematical Monthly . [ 21 ] He reviewed Faggot's error and its consequences, and then derived Stråhle's construction algebraically using similar triangles , first in a generalized form and then with the specific values from Stråhle's instructions; letting $OA - BA = 1$ , so that $OA + BA = \sqrt{N}$ , leads to a form of the formula that is more convenient for calculation. Barbour then described a generalized construction using the easily obtained mean proportional for the length of MB that avoids most of the specific angles and lengths required in the original. For musical applications it is simpler and its results are slightly more uniform than Stråhle's, and it has the advantage of producing the desired string lengths without additional scaling.
He instructed to first draw the line MR corresponding to the larger of the two numbers, with MP the smaller, and to construct their mean proportional at MB . The line that will carry the divisions is drawn from R at any acute angle to MR , and perpendicular to it a line is drawn through B , which intersects the line to be divided at A , and RA is extended to Q such that RA = AQ . A line is drawn from Q through P , intersecting the line through BA at O , and a line drawn from O to R . The construction is completed by dividing QR and drawing rays from O through each of the divisions. Barbour concluded with a discussion of the pattern and magnitude of the errors produced by the generalized construction when used to approximate exponentials of different roots, stating that his method "is simple and works exceedingly well for small numbers". For roots from 1 to 2 the error is less than 0.13% (about 2 cents when N = 2), with maxima around m = 0.21 and m = 0.79. The error curve appears roughly sinusoidal, and for this range of N it can be approximated to about 99% by fitting the curve obtained for N = 1, $f(m)=m(1-m)(1-2m)$ . The error increases rapidly for larger roots, for which Barbour considered the method inappropriate; the error curve resembles the form $f(x)=x(1-x^{2a})$ with maxima moving closer to m = 0 and m = 1 as N increases. The paper was published with two notes added by its referee, Isaac Jacob Schoenberg . He observed that the formula derived by Barbour was a fractional linear transformation and so called for a perspectivity, and that since three pairs of corresponding points on the two lines uniquely determine a projective correspondence, Barbour's condition that OA be perpendicular to QR was irrelevant. The omission of this step allows a more convenient selection of length for QR , and reduces the number of operations. Schoenberg also noted that Barbour's equation could be viewed as an interpolation of the exponential curve through the three points m = 0, m = 1/2, and m = 1, which he expanded upon in a short paper titled "On the Location of the Frets on the Guitar" published in American Mathematical Monthly in 1976. [ 22 ] This article concluded with a brief discussion of Stråhle's fortuitous use of $\tfrac{41}{29}$ for the half-octave, which is one of the convergents of the continued fraction expansion of $\sqrt{2}$ , and the best rational approximation of it for the size of its denominator. The use of fractional approximations of $\sqrt{2}$ in Stråhle's construction was expanded upon by Ian Stewart, who wrote about the construction in "A Well Tempered Calculator" in his 1992 book Another Fine Math You've Got Me Into... [ 23 ] as well as in "Faggot's Fretful Fiasco", included in Music and Mathematics published in 2006. Stewart considered the construction from the standpoint of projective geometry, and derived the same formulas as Barbour by treating it from the start as a fractional linear function of the form $y=\frac{ax+b}{cx+d}$ , and he pointed out that the approximation for $\sqrt{2}$ implicit in the construction is $\tfrac{17}{12}$ , which is the next lower convergent from the half octave it produces.
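Schoenberg's remark that the construction amounts to interpolating the exponential curve through m = 0, 1/2, and 1 makes it easy to reproduce numerically. The short Python sketch below solves those three interpolation conditions with a fractional linear function (the closed form shown is our reconstruction from the conditions stated above, not a quotation of Barbour's paper) and checks the error figures quoted for N = 2.

```python
import math

def root_approx(N, m):
    # Fractional linear function f(m) = (1 + (sqrt(N)-1)*m) / (1 - (1 - 1/sqrt(N))*m),
    # chosen so that f(0) = 1, f(1/2) = sqrt(N) and f(1) = N (the three
    # interpolation conditions noted by Schoenberg).
    r = math.sqrt(N)
    return (1 + (r - 1) * m) / (1 - (1 - 1 / r) * m)

# Locate the worst relative error over 0 < m < 1 for N = 2 (the musical octave).
worst_m = max((i / 1000 for i in range(1, 1000)),
              key=lambda m: abs(root_approx(2, m) / 2 ** m - 1))
err = root_approx(2, worst_m) / 2 ** worst_m - 1
print(f"worst m ≈ {worst_m:.2f}, relative error ≈ {100 * err:+.3f}% "
      f"({1200 * math.log2(1 + err):+.2f} cents)")
```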
This is the consequence of the function simplifying to $\frac{p+2q}{p+q}$ for m = 0.5, where $\frac{p}{q}$ is the generating approximation. The geometric and arithmetic methods for dividing monochords as well as musical instrument fretboards compiled by Barbour were for the stated purpose of illustrating the different tunings each represents or implies, and Schoenberg's and Stewart's works retained a similar focus and references. Three textbooks on piano building that are not included by them show similar constructions to Stråhle's for designing new instruments but treat the tuning of their pitches independently; both constructions employ a non-perpendicular form as suggested by Schoenberg's observation in Barbour's "A Geometrical Approximation to the Roots of Numbers", and one achieves optimal results while the other demonstrates an application with a root other than 2. Carl Kützing, an organ and piano maker in Bern during the middle of the 19th century, wrote in his first book on piano design, Theoretisch-praktisches Handbuch der Fortepiano-Baukunst from 1833, that he devised a simple method of determining the sounding lengths in an octave after reading of the different geometric constructions described in an issue of Marpurg's Historisch-kritische Beiträge zur Aufnahme der Musik ; he stated that the divisions would be very accurate and that the construction could be used for fretting guitars. Kützing introduced the construction following a description of a large sector to be made for the same purpose. He did not include either method in Das Wissenschaftliche der Fortepiano-Baukunst published eleven years later, where he calculated lengths using approximately 18:35 ratios between octave lengths and proposed a new method with a non-continuous curve adjusted for actual wire diameters in order to reduce tonal differences from jumps in tension. [ 24 ] Kützing instructed to extend a line segment bc —representing a known sounding length—at 45 degrees to the line ba , and from its octave at point d , located midway between b and c , to extend a line perpendicular to ba intersecting it at e , then to divide de into 12 equal parts. The point a on ab is located by transferring the lengths of de and db from e away from b , and rays extended from a through the points dividing de and intersecting bc locate the different endpoints of the string lengths from c . [ 25 ] This arrangement is equivalent to using the mean proportional to locate a . A re-labeled diagram with instructions was included in a pamphlet printed by England's largest piano manufacturers John Broadwood & Sons to accompany their display at the 1862 International Exhibition in London, where they described it as "a practical method of finding the lengths of Strings, for every note of the Octave on equal temperament; so that with wire of the same size the tension on each note shall be the same." [ 26 ] It was also reproduced, alongside a sector, by Giacomo Sievers, a Russian-born piano maker working in Naples, in his 1868 book Il Pianoforte , where he claimed it was the best practical method for determining the sounding lengths of strings in a piano. Like Broadwood, Sievers did not describe its source or the extent of its use, and did not explain any theory behind it. He also did not suggest it had any use beyond designing pianos. [ 27 ]
English piano maker Samuel Wolfenden presented a construction for determining all but the lowest sounding plain string lengths in a piano in A Treatise on the Art of Pianoforte Construction , published in 1916; like Sievers, he did not explain whether this was an original procedure or one in common use, commenting only that it was "a very practical method of determining string lengths, and in past years I used it altogether". He added that at the time of writing he found calculating the lengths directly "somewhat easier" and had preceded the description with a table of computed lengths for the top five octaves of a piano. [ 28 ] He included frequencies in equal temperament, but only published aural tuning instructions in his 1927 supplement. Wolfenden explicitly advocated equalizing the tension of the plain strings, which he proposed to accomplish in the upper range by combining a 9:17 ratio between octave lengths with a uniform change in string diameters (achieving slightly more consistent results than the otherwise similar system published by Siegfried Hansing in 1888 [ 29 ] ), in contrast to Sievers' scale, whose stringing schedule results in higher tension for the thicker, lower sounding pitches. Like Sievers, Wolfenden constructed all of the sounding lengths on a single segment at 45 degrees from the base lines for the rays, starting with points located for each C in the range designed at 54, 102, 192.5, 364, and 688 mm from the upper point. The four vertices for the rays are then located by the intersections of the horizontal base lines extended from the lower C in each octave with a second line angled from the upper starting point for the string line, which he specified should be at 51.5 degrees to the base lines, and that the base lines have a 35:13 ratio with the difference between the two octave lengths. Wolfenden's method approximates $\sqrt{17/9}$ with roughly 1.3775, and is equivalent to $OA/OB=8.27$ in Barbour's form. Compensating for its smaller octaves this produces 596 cent half octaves, an error of about 1 mm at note F4 (f′) compared with his calculated figures. "According to this invention I have built a monochord, in such a manner that it actually has 13 strings, and should therefore rather be called a tredecachord; but as all the strings are of one gauge, length and pitch, I keep the old name. "To these thirteen strings an ordinary manual of one octave is fitted; but under each string, once they have been carefully tuned in unison, I place loose bridges at the points, and at the lengths from the nuts, that my now-described Linea Musica calls for; whereupon each string receives its proper pitch. "The clavichord which I have made for this purpose is likewise, in the three higher octaves, carefully adjusted according to my Linea Musica as regards the lengths and spacing of the strings; and so that the tuning can be carried out without trouble, my monochord is made so that it can be placed on top of the clavichord, whereupon one octave of the clavichord is tuned, tone by tone, against its corresponding tones on the monochord, after which all the other tones on the clavichord are tuned by octaves; that tuning is also the easiest for the ear to carry out, since it should be free of beats."
"Huruvida thonernes stämning, efter förut beskrefne Påfund, förnöger hörsten, med behageligare ljud, ock med bättre likstämmighet, i de Musikaliska thonerne å et Claver , än de gamla ock härtils bekanta stämnings sätt, derom lärer förståndet bättre kunna döma, när ögat får se det örat hörer." "Ich muss gestehen, dass sich dieser Aufsatz mit Vergnügen lesen lässet, und dass ich von der Richtigkeit der vom Hrn. Jacob Faggot, durch eine sehr mühsame trigonometrische Berechnung der Strählischen Linien, gefunden Zahlen voellig überzeuget bin. Nur muss ich hinzufügen, dass die gefunden Zahlen nicht geben, was sie geben sollen, und was Hr. Strähle suchte, nemlich eine Temperatur, welche das Schweben am gelindesten für das Gehör macht, und alle Töne in gehörige Gleichstimmigkeit setzet. Es enthalten nemlich selbige nichts anders als eine ungleichschwebende Temperatur, und nicht einmal von der erträglichsten Art."
https://en.wikipedia.org/wiki/Strähle_construction
In mathematics and astrophysics , the Strömgren integral , introduced by Bengt Strömgren ( 1932 , p.123) while computing the Rosseland mean opacity , is the integral : Cox (1964) discussed applications of the Strömgren integral in astrophysics, and MacLeod (1996) discussed how to compute it.
https://en.wikipedia.org/wiki/Strömgren_integral
In theoretical astrophysics , there can be a sphere of ionized hydrogen (H II) around a young star of the spectral classes O or B . The theory was derived by Bengt Strömgren in 1937 and later named Strömgren sphere after him. The Rosette Nebula is the most prominent example of this type of emission nebula from the H II-regions . Very hot stars of the spectral class O or B emit very energetic radiation, especially ultraviolet radiation , which is able to ionize the neutral hydrogen (H I) of the surrounding interstellar medium , so that hydrogen atoms lose their single electrons. This state of hydrogen is called H II. After a while, free electrons recombine with those hydrogen ions. Energy is re-emitted, not as a single photon , but rather as a series of photons of lesser energy. The photons lose energy as they travel outward from the star's surface, and are not energetic enough to again contribute to ionization. Otherwise, the entire interstellar medium would be ionized. A Strömgren sphere is the theoretical construct which describes the ionized regions. In its first and simplest form, derived by the Danish astrophysicist Bengt Strömgren in 1939, the model examines the effects of the electromagnetic radiation of a single star (or a tight cluster of similar stars) of a given surface temperature and luminosity on the surrounding interstellar medium of a given density. To simplify calculations, the interstellar medium is taken to be homogeneous and consisting entirely of hydrogen. The formula derived by Strömgren describes the relationship between the luminosity and temperature of the exciting star on the one hand, and the density of the surrounding hydrogen gas on the other. Using it, the size of the idealized ionized region can be calculated as the Strömgren radius . Strömgren's model also shows that there is a very sharp cut-off of the degree of ionization at the edge of the Strömgren sphere. This is caused by the fact that the transition region between gas that is highly ionized and neutral hydrogen is very narrow, compared to the overall size of the Strömgren sphere. [ 1 ] The above-mentioned relationships are as follows: In Strömgren's model, the sphere now named Strömgren's sphere is made almost exclusively of free protons and electrons. A very small amount of hydrogen atoms appear at a density that increases nearly exponentially toward the surface. Outside the sphere, radiation of the atoms' frequencies cools the gas strongly, so that it appears as a thin region in which the radiation emitted by the star is strongly absorbed by the atoms which lose their energy by radiation in all directions. Thus a Strömgren system appears as a bright star surrounded by a less-emitting and difficult to observe globe. Strömgren did not know Einstein's theory of optical coherence. The density of excited hydrogen is low, but the paths may be long, so that the hypothesis of a super-radiance and other effects observed using lasers must be tested. A supposed super-radiant Strömgren's shell emits space-coherent, time-incoherent beams in the direction for which the path in excited hydrogen is maximal, that is, tangential to the sphere. In Strömgren's explanations, the shell absorbs only the resonant lines of hydrogen, so that the available energy is low. Assuming that the star is a supernova, the radiance of the light it emits corresponds (by Planck's law) to a temperature of several hundreds of kelvins, so that several frequencies may combine to produce the resonance frequencies of hydrogen atoms. 
Thus, almost all light emitted by the star is absorbed, and almost all energy radiated by the star amplifies the tangent, super-radiant rays. The Necklace Nebula is a Strömgren sphere. It shows a dotted circle which gives its name. In supernova remnant 1987A, the Strömgren shell is strangulated into an hourglass whose limbs are like three pearl necklaces. Neither Strömgren's original model nor the one modified by McCullough takes into account the effects of dust, clumpiness, detailed radiative transfer, or dynamical effects. [ 2 ] In 1938 the American astronomers Otto Struve and Chris T. Elvey published their observations of emission nebulae in the constellations Cygnus and Cepheus, most of which are not concentrated toward individual bright stars (in contrast to planetary nebulae). They suggested the UV radiation of the O- and B-stars to be the required energy source. [ 3 ] In 1939 Bengt Strömgren took up the problem of the ionization and excitation of the interstellar hydrogen. [ 1 ] This is the paper identified with the concept of the Strömgren sphere. It draws, however, on his earlier similar efforts published in 1937. [ 4 ] In 2000 Peter R. McCullough published a modified model allowing for an evacuated, spherical cavity either centered on the star or with the star displaced with respect to the evacuated cavity. Such cavities might be created by stellar winds and supernovae . The resulting images more closely resemble many actual H II-regions than the original model. [ 2 ] Let's suppose the region is exactly spherical, fully ionized ( x = 1), and composed only of hydrogen , so that the numerical density of protons equals the density of electrons ( $n_e = n_p$ ). Then the Strömgren radius will be the radius at which the total recombination rate equals the total ionization rate. We will consider the recombination rate $N_R$ summed over all energy levels except the ground level, $N_R = \sum_{n=2}^{\infty} N_n$ , where $N_n$ is the recombination rate of the n -th energy level. The reason we have excluded n = 1 is that if an electron recombines directly to the ground level, the hydrogen atom will release another photon capable of ionizing up from the ground level. This is important, as the electric dipole mechanism always makes the ionization up from the ground level, so we exclude n = 1 to account for this re-ionizing radiation. Now, the recombination rate per unit volume to a particular energy level is $N_n = n_e\, n_p\, \beta_n(T_e)$ (with $n_e = n_p$ ), where $\beta_n(T_e)$ is the recombination coefficient to the n -th energy level at a temperature $T_e$ , the temperature of the electrons in kelvins, which is usually the same as that of the sphere. So after doing the sum, we arrive at $N_R = n_e\, n_p\, \beta_2(T_e)$ , where $\beta_2(T_e)$ is the total recombination coefficient summed over all levels above the ground level; at $T_e \approx 10^4\ \mathrm{K}$ it has an approximate value of $2.6\times10^{-13}\ \mathrm{cm^3\,s^{-1}}$ . Using $n$ as the number of nucleons (in this case, protons), we can introduce the degree of ionization $0 \leq x \leq 1$ so that $n_e = xn$ , and the numerical density of neutral hydrogen is $n_h = (1-x)n$ .
With a photoionization cross section α_0 (which has units of area) and a flux J of ionizing photons per unit area per second, the ionization rate N_I is

N_I = n_H α_0 J.

For simplicity we consider only the geometric dilution of J with distance from the ionizing source, a star emitting S_* ionizing photons per second, so that J follows an inverse-square law:

J = S_* / (4 π r^2).

We are now in a position to calculate the Strömgren radius R_S from the balance between recombination and ionization: the total number of recombinations per second within the sphere must equal the number of ionizing photons supplied by the star per second,

(4/3) π R_S^3 n_e n_p β_2(T_e) = S_*,

and finally, remembering that the region is considered to be fully ionized (x = 1, so n_e = n_p = n),

R_S = ( 3 S_* / (4 π n^2 β_2(T_e)) )^(1/3).

This is the radius of the region ionized by a type O or B star.
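To give a feel for the scales involved, the short Python sketch below evaluates this formula numerically. It is an illustrative calculation only: the photon rate S_* = 10^49 s^-1, the density n = 10 cm^-3, and the recombination coefficient β_2 = 2.6 × 10^-13 cm^3 s^-1 are assumed, textbook-style values for a hot O star in a typical H II region, not figures taken from the article above.

import math

# Minimal sketch of the Strömgren radius R_S = (3 S_* / (4 pi n^2 beta_2))^(1/3).
# The parameter values below are assumed for illustration (roughly an O star in a
# typical H II region); they are not taken from the article.

def stromgren_radius_cm(s_star, n, beta2):
    """Return the Strömgren radius in centimetres.

    s_star : ionizing photons emitted by the star per second [s^-1]
    n      : hydrogen number density [cm^-3]
    beta2  : recombination coefficient to excited levels [cm^3 s^-1]
    """
    return (3.0 * s_star / (4.0 * math.pi * n ** 2 * beta2)) ** (1.0 / 3.0)

CM_PER_PARSEC = 3.086e18  # centimetres in one parsec

if __name__ == "__main__":
    s_star = 1e49    # assumed ionizing photon rate of a hot O star, s^-1
    n = 10.0         # assumed hydrogen number density, cm^-3
    beta2 = 2.6e-13  # recombination coefficient at T_e ~ 10^4 K, cm^3 s^-1

    r_cm = stromgren_radius_cm(s_star, n, beta2)
    print(f"R_S = {r_cm:.2e} cm = {r_cm / CM_PER_PARSEC:.1f} pc")

With these assumed numbers the radius comes out at roughly 15 pc, and because R_S scales as n^(-2/3), denser gas yields a markedly smaller ionized region.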
https://en.wikipedia.org/wiki/Strömgren_sphere
The Stuart Ballantine Medal was a science and engineering award presented by the Franklin Institute , of Philadelphia, Pennsylvania, USA. It was named after the US inventor Stuart Ballantine .
https://en.wikipedia.org/wiki/Stuart_Ballantine_Medal
Stuart Schreiber (born February 6, 1956) is an American chemist who is the Morris Loeb Research Professor at Harvard University, [ 1 ] a co-founder of the Broad Institute, [ 2 ] a Howard Hughes Medical Institute Investigator, Emeritus, [ 3 ] and a member of the National Academy of Sciences [ 4 ] and the National Academy of Medicine. [ 5 ] He currently leads Arena BioWorks. His work integrates chemical biology and human biology to advance the science of therapeutics. Key advances include the discovery that small molecules can function as “molecular glues” that promote protein–protein interactions, the co-discovery of mTOR and its role in nutrient-response signaling, the discovery of histone deacetylases and (with Michael Grunstein and David Allis) the demonstration that chromatin marks regulate gene expression, the development and application of diversity-oriented synthesis to microbial therapeutics, and the discovery of vulnerabilities of cancer cells linked to genetic, lineage and cell-state features, including ferroptotic vulnerabilities. His awards include the Wolf Prize in Chemistry and the Arthur Cope Award. His approach to discovering new therapeutics has guided many biotechnology companies that he founded, including Vertex Pharmaceuticals and Ariad Pharmaceuticals. He has founded or co-founded 14 biotechnology companies, which have developed 16 first-in-human approved drugs or advanced clinical candidates.

Schreiber was born on February 6, 1956, in Eatontown, New Jersey, to Mary Geraldine Schreiber and Thomas Sewell Schreiber. From the ages of one to four he lived with his family in Villennes-sur-Seine, a small village in France, where his father was a battalion commander at Supreme Headquarters Allied Powers Europe. [ 6 ] Shortly after returning to New Jersey, where Tom Schreiber worked as an applied mathematician and physicist with the Signal Corps on Fort Monmouth, the family moved to Fairfax, VA. At age 61, Schreiber discovered that Tom Schreiber was not his biological father. [ 7 ] Schreiber attended Luther Jackson Junior High School in Falls Church, VA, and graduated from Oakton High School in Fairfax, VA, in 1973 after completing a three-year work-study program that prepared him for work in the construction field. [ 8 ]

Schreiber obtained a Bachelor of Science degree in chemistry from the University of Virginia in 1977, after which he entered Harvard University as a graduate student in chemistry. He joined the research group of Robert B. Woodward and, after Woodward's death, continued his studies under the supervision of Yoshito Kishi. In 1980, he joined the faculty of Yale University as an assistant professor in chemistry, and in 1988 he moved to Harvard University as the Morris Loeb Professor. [ 9 ] Schreiber started his research work in organic synthesis, focusing on concepts such as the use of [2 + 2] photocycloadditions to establish stereochemistry in complex molecules, the fragmentation of hydroperoxides to produce macrolides, ancillary stereocontrol, group selectivity, and two-directional synthesis. Notable accomplishments include the total syntheses of complex natural products such as periplanone B, talaromycin B, asteltoxin, avenaciolide, gloeosporone, hikizimicin, mycoticin A, epoxydictymene [ 10 ] and the immunosuppressant FK-506.
[ citation needed ] Following his work on the FK506-binding protein FKBP12 in 1988, Schreiber reported that the small molecules FK506 and ciclosporin inhibit the activity of the phosphatase calcineurin by forming the ternary complexes FKBP12-FK506-calcineurin and cyclophilin-ciclosporin-calcineurin. [ 11 ] This work, together with work by Gerald Crabtree at Stanford University concerning the NFAT proteins, led to the elucidation of the calcium-calcineurin-NFAT signaling pathway. [ 12 ] The Ras-Raf-MAPK pathway was not elucidated for another year. [ citation needed ]

In 1993, Schreiber and Crabtree developed bifunctional molecules, or “chemical inducers of proximity” (CIPs), which provide small-molecule control over numerous signaling molecules and pathways (e.g., the Fas, insulin, TGFβ and T-cell receptors [ 13 ] [ 14 ]) through proximity effects. Schreiber and Crabtree demonstrated that small molecules could activate a signaling pathway in an animal with temporal and spatial control. [ 15 ] Dimerizer kits have been distributed freely, resulting in many peer-reviewed publications. The promise of this approach in gene therapy has been highlighted by the ability of a small molecule to activate a small-molecule-regulated EPO receptor and to induce erythropoiesis (Ariad Pharmaceuticals, Inc.), and more recently in human clinical trials for the treatment of graft-versus-host disease. [ 16 ]

In 1994, Schreiber and co-workers, working independently of David Sabatini, investigated the master regulator of nutrient sensing, mTOR. They found that the small molecule rapamycin simultaneously binds FKBP12 and mTOR (originally named FKBP12-rapamycin binding protein, FRAP). [ 17 ] Using diversity-oriented synthesis and small-molecule screening, Schreiber illuminated the nutrient-response signaling network involving TOR proteins in yeast and mTOR in mammalian cells. Small molecules such as uretupamine [ 18 ] and rapamycin were shown to be particularly effective in revealing the ability of proteins such as mTOR, Tor1p, Tor2p, and Ure2p to receive multiple inputs and to process them appropriately towards multiple outputs (in analogy to multi-channel processors). Several pharmaceutical companies are now targeting the nutrient-signaling network for the treatment of several forms of cancer, including solid tumors. [ 19 ]

In 1995, Schreiber and co-workers found that the small molecule lactacystin binds and inhibits specific catalytic subunits of the proteasome, [ 20 ] a protein complex responsible for the bulk of proteolysis in the cell as well as for the proteolytic activation of certain protein substrates. As a non-peptidic proteasome inhibitor, lactacystin has proven useful in the study of proteasome function. Lactacystin modifies the amino-terminal threonine of specific proteasome subunits, and this work helped to establish the proteasome as a mechanistically novel class of protease: an amino-terminal threonine protease. The work led to the use of bortezomib to treat multiple myeloma. [ citation needed ]

In 1996, Schreiber and co-workers used the small molecules trapoxin and depudecin to investigate the histone deacetylases (HDACs). [ 21 ] Prior to Schreiber's work in this area, the HDAC proteins had not been isolated. Coincident with the HDAC work, David Allis and colleagues reported work on the histone acetyltransferases (HATs).
These two contributions catalyzed much research in this area, eventually leading to the characterization of numerous histone-modifying enzymes, the resulting histone “marks”, and the numerous proteins that bind to these marks. By taking a global approach to understanding chromatin function, Schreiber proposed a “signaling network model” of chromatin and compared it to an alternative view, the “histone code hypothesis” presented by Strahl and Allis. [ 22 ] These studies shone a bright light on chromatin as a key regulatory element of gene expression rather than simply a structural element used for DNA compaction. [ citation needed ]

Schreiber applied small molecules to biology through the development of diversity-oriented synthesis (DOS), [ 23 ] chemical genetics, [ 24 ] and ChemBank. [ 25 ] Schreiber has shown that DOS can produce small molecules distributed in defined ways in chemical space by virtue of their different skeletons and stereochemistry, and that it can provide chemical handles on products in anticipation of follow-up chemistry using, for example, combinatorial synthesis and the so-called build/couple/pair strategy of modular chemical synthesis. DOS pathways and new techniques for small-molecule screening [ 26 ] [ 27 ] [ 28 ] provided many new, potentially disruptive insights into biology. Small-molecule probes of histone and tubulin deacetylases, transcription factors, cytoplasmic anchoring proteins, and developmental signaling proteins (e.g., histacin, tubacin, haptamide, uretupamine, concentramide, and calmodulophilin), among many others, have been uncovered in the Schreiber lab using diversity-oriented synthesis and chemical genetics. Multidimensional screening was introduced in 2002 and has provided insights into tumorigenesis, cell polarity, and chemical space, among other areas. [ 29 ]

Using diversity-oriented synthesis, the Schreiber Lab and collaborators discovered numerous novel antimicrobial compounds, including the bicyclic azetidine BRD7929, which could both cure malaria in mice and prevent its transmission by targeting multiple steps in the life cycle of Plasmodium falciparum. [ 30 ] [ 31 ] They found another synthetic azetidine derivative, BRD4592, which kills Mycobacterium tuberculosis through allosteric inhibition of its tryptophan synthase. [ 32 ] High-throughput screens further uncovered compounds that inhibit replication of Trypanosoma cruzi [ 33 ] and hepatitis C virus, [ 34 ] [ 35 ] and that inhibit Toxoplasma gondii growth. [ 36 ]

Schreiber also contributed to more conventional small-molecule discovery projects. He collaborated with Tim Mitchison to discover monastrol, the first small-molecule inhibitor of mitosis that does not target tubulin. [ 37 ] Monastrol was shown to inhibit kinesin-5, a motor protein, and was used to gain new insights into the functions of kinesin-5. This work led the pharmaceutical company Merck, among others, to pursue anti-cancer drugs that target human kinesin-5. [ citation needed ]

Recently, [ when? ] the Schreiber Lab discovered that when certain aggressive cancer cells become resistant to drug treatments, they also become vulnerable to ferroptosis, a natural cellular self-destruction mechanism triggered by peroxides and iron ions undergoing the Fenton reaction. Free radicals unleash a chain reaction that turns normal lipids in the cell membrane into toxic radical species. They found that drug-resistant cancer cells that have acquired this new vulnerability rely on an enzyme called GPX4 for survival.
GPX4 stops the chain reaction leading to ferroptosis by converting the dangerous lipid peroxides to benign alcohols. They further showed that a small-molecule inhibitor of GPX4 kills cancer cells by increasing their vulnerability to ferroptosis. [ 38 ]

Schreiber's influence has come first through the use of small molecules to study three specific areas of biology, and then through the more general application of small molecules in biomedical research. Academic screening centers have been created that emulate the Harvard Institute of Chemistry and Cell Biology and the Broad Institute; in the U.S., there has been a nationwide effort to expand this capability via the government-sponsored NIH Roadmap. Chemistry departments have changed their names to include the term chemical biology, and new journals have been introduced (Cell Chemical Biology, ChemBioChem, Nature Chemical Biology, ACS Chemical Biology) to cover the field. Schreiber has been involved in the founding of numerous biopharmaceutical companies whose research relies on chemical biology: Vertex Pharmaceuticals, Inc. (VRTX), Ariad Pharmaceuticals, Inc. (ARIA), Infinity Pharmaceuticals, Inc. (INFI), Forma Therapeutics, H3 Biomedicine, Jnana Therapeutics, and Kojin Therapeutics. These companies have produced new therapeutics in several disease areas, including cystic fibrosis and cancer. [ 39 ]
https://en.wikipedia.org/wiki/Stuart_Schreiber