A solar telescope or a solar observatory is a special-purpose telescope used to observe the Sun. Solar telescopes usually detect light with wavelengths in, or not far outside, the visible spectrum. Obsolete names for Sun telescopes include heliograph and photoheliograph. Solar telescopes need optics large enough to achieve the best possible diffraction limit, but have less need for the light-collecting power that drives the size of other astronomical telescopes. However, newer, narrower filters and higher frame rates have recently driven solar telescopes towards photon-starved operation. [ 1 ] Both the Daniel K. Inouye Solar Telescope and the proposed European Solar Telescope (EST) have larger apertures not only to increase the resolution, but also to increase the light-collecting power. Because solar telescopes operate during the day, seeing is generally worse than for night-time telescopes: the ground around the telescope is heated, which causes turbulence and degrades the resolution. To alleviate this, solar telescopes are usually built on towers and the structures are painted white. The Dutch Open Telescope is built on an open framework to allow the wind to pass through the complete structure and provide cooling around the telescope's main mirror. Another problem specific to solar telescopes is the heat generated by the tightly focused sunlight. For this reason, a heat stop is an integral part of the design of solar telescopes. For the Daniel K. Inouye Solar Telescope, the heat load is 2.5 MW/m², with peak powers of 11.4 kW. [ 2 ] The goal of such a heat stop is not only to survive this heat load, but also to remain cool enough not to induce any additional turbulence inside the telescope's dome. Professional solar observatories may have main optical elements with very long focal lengths (although not always; the Dutch Open Telescope is an exception) and light paths operating in a vacuum or in helium to eliminate air motion due to convection inside the telescope. However, this is not possible for apertures over 1 meter, at which point the pressure difference at the entrance window of the vacuum tube becomes too large. Therefore, the Daniel K. Inouye Solar Telescope and the EST instead use active cooling of the dome to minimize the temperature difference between the air inside and outside the telescope. Because the Sun follows a limited, predictable path across the sky, some solar telescopes are fixed in position (and are sometimes buried underground), with the only moving part being a heliostat to track the Sun. One example of this is the McMath-Pierce Solar Telescope. The Sun, being the closest star to Earth, offers a unique chance to study stellar physics at high resolution. It was, until the 1990s, [ 3 ] the only star whose surface had been resolved. General topics that interest a solar astronomer are its 11-year periodicity (i.e., the Solar Cycle), sunspots, magnetic field activity (see solar dynamo), solar flares, coronal mass ejections, differential rotation, and plasma physics. Most solar observatories observe optically at visible, UV, and near-infrared wavelengths; other solar phenomena can also be observed, albeit not from the Earth's surface because of atmospheric absorption. In the field of amateur astronomy there are many methods used to observe the Sun.
Amateurs use everything from simple systems to project the Sun on a piece of white paper, light blocking filters , Herschel wedges which redirect 95% of the light and heat away from the eyepiece, [ 4 ] up to hydrogen-alpha filter systems and even home built spectrohelioscopes . In contrast to professional telescopes, amateur solar telescopes are usually much smaller. [ citation needed ] With a conventional telescope, an extremely dark filter at the opening of the primary tube is used to reduce the light of the Sun to tolerable levels. Since the full available spectrum is observed, this is known as "white-light" viewing, and the opening filter is called a "white-light filter". The problem is that even reduced, the full spectrum of white light tends to obscure many of the specific features associated with solar activity, such as prominences and details of the chromosphere . Specialized solar telescopes facilitate clear observation of such H-alpha emissions by using a bandwidth filter implemented with a Fabry-Perot etalon . [ 5 ] A solar tower is a structure used to support equipment for studying the Sun, and is typically part of solar telescope designs. Solar tower observatories are also called vacuum tower telescopes. Solar towers are used to raise the observation equipment above atmospheric turbulence caused by solar heating of the ground and the radiation of the heat into the atmosphere. Traditional observatories do not have to be placed high above ground level, as they do most of their observation at night, when ground radiation is at a minimum. The horizontal Snow solar observatory was built on Mount Wilson in 1904. It was soon found that heat radiation was disrupting observations. Almost as soon as the Snow Observatory opened, plans were started for a 60-foot-tall (18 m) tower that opened in 1908 followed by a 150-foot (46 m) tower in 1912. The 60-foot tower is currently used to study helioseismology , while the 150-foot tower is active in UCLA 's Solar Cycle Program. The term has also been used to refer to other structures used for experimental purposes, such as the Solar Tower Atmospheric Cherenkov Effect Experiment ( STACEE ), which is being used to study Cherenkov radiation , and the Weizmann Institute solar power tower . Other solar telescopes that have solar towers are Richard B. Dunn Solar Telescope , Solar Observatory Tower Meudon and others.
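The trade-off between aperture and angular resolution noted above can be illustrated with the Rayleigh criterion. The short sketch below computes the diffraction-limited resolution for an assumed 4 m aperture (roughly the scale of the Daniel K. Inouye Solar Telescope's primary mirror) at an assumed observing wavelength of 500 nm; both values are illustrative and are not taken from this article.

```python
import math

def diffraction_limit_arcsec(wavelength_m: float, aperture_m: float) -> float:
    """Rayleigh criterion: theta ~= 1.22 * lambda / D (radians), converted to arcseconds."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600.0

# Illustrative assumptions, not figures from the article:
wavelength = 500e-9   # 500 nm, middle of the visible spectrum
aperture = 4.0        # ~4 m primary mirror, similar in scale to DKIST

print(f"{diffraction_limit_arcsec(wavelength, aperture):.3f} arcsec")  # ~0.031 arcsec
```

At that resolution, features of roughly 20 to 25 km on the solar surface could in principle be distinguished, which is why aperture, rather than light-gathering power alone, drives the design of these instruments.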
https://en.wikipedia.org/wiki/Solar_telescope
Solar water disinfection, in short SODIS, is a type of portable water purification that uses solar energy to make biologically contaminated (e.g. bacteria, viruses, protozoa and worms) water safe to drink. Water contaminated with non-biological agents such as toxic chemicals or heavy metals requires additional steps to make it safe to drink. [ 1 ] Solar water disinfection is usually accomplished using some mix of electricity generated by photovoltaic panels (solar PV), heat (solar thermal), and solar ultraviolet light collection. Solar disinfection using electricity generated by photovoltaics typically uses an electric current to drive electrolytic processes which disinfect the water, for example by generating oxidative free radicals which kill pathogens by damaging their chemical structure. A second approach uses stored solar electricity from a battery, and operates at night or at low light levels to power an ultraviolet lamp to perform secondary solar ultraviolet water disinfection. Solar thermal water disinfection uses heat from the sun to heat water to 70–100 °C for a short period of time. A number of approaches exist. Solar heat collectors can have lenses in front of them, or use reflectors. They may also use varying levels of insulation or glazing. In addition, some solar thermal water disinfection processes are batch-based, while others (through-flow solar thermal disinfection) operate almost continuously while the sun shines. Water heated to temperatures below 100 °C is generally referred to as pasteurized water. The ultraviolet part of sunlight can also kill pathogens in water. The SODIS method uses a combination of UV light and increased temperature (solar thermal) for disinfecting water using only sunlight and repurposed PET plastic bottles. SODIS is a free and effective method for decentralized water treatment, usually applied at the household level, and is recommended by the World Health Organization as a viable method for household water treatment and safe storage. [ 2 ] SODIS is already applied in numerous developing countries. [ 3 ] : 55 Educational pamphlets on the method are available in many languages, [ 4 ] each equivalent to the English-language version. [ 3 ] Guides for the household use of SODIS describe the process. Colourless, transparent PET water or soda bottles of 2 litre or smaller size with few surface scratches are selected for use. Glass bottles are also suitable. Any labels are removed and the bottles are washed before the first use. The bottles are filled with water from possibly contaminated sources, using the clearest water available. Where the turbidity is higher than 30 NTU, it is necessary to filter or precipitate out particulates prior to exposure to the sunlight. Filters can be made locally from cloth stretched over inverted bottles with the bottoms cut off. In order to improve oxygen saturation, the guides recommend that bottles be filled three-quarters, shaken for 20 seconds (with the cap on), then filled completely, recapped, and checked for clarity. [ citation needed ] The filled bottles are then exposed to the fullest sunlight possible. Bottles will heat faster and hotter if they are placed on a sloped, sun-facing, reflective metal surface. A corrugated metal roof (as compared to a thatched roof) or a slightly curved sheet of aluminum foil increases the light inside the bottle. Overhanging structures or plants that shade the bottles must be avoided, as they reduce both illumination and heating.
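The household procedure described above is essentially a short decide-and-expose workflow. The sketch below summarizes it in code; the exposure durations are illustrative assumptions based on commonly cited SODIS guidance (about 6 hours in full sun, up to about 2 days under mostly cloudy skies) and are not values given in this article.

```python
def sodis_ready(turbidity_ntu: float, weather: str, hours_exposed: float) -> bool:
    """Rough sketch of the SODIS decision steps described above (illustrative only)."""
    if turbidity_ntu > 30:
        # Above ~30 NTU the water must first be filtered or settled;
        # the newspaper test is the field proxy for this threshold.
        raise ValueError("Pre-treat water above 30 NTU before applying SODIS")

    # Assumed exposure requirements: ~6 h in full sun, ~2 days when mostly cloudy.
    required_hours = 6 if weather == "sunny" else 48
    return hours_exposed >= required_hours

# Example: clear water on a sunny day, bottle left out for a full day of sun.
print(sodis_ready(turbidity_ntu=10, weather="sunny", hours_exposed=8))  # True
```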
After sufficient time, the treated water can be consumed directly from the bottle or poured into clean drinking cups. The risk of re-contamination is minimized if the water is stored in the bottles. Refilling and storage in other containers increases the risk of contamination. The most favorable regions for application of the SODIS method are located between latitudes 15°N and 35°N, and between 15°S and 35°S. [ 3 ] These regions have high levels of solar radiation, with limited cloud cover and rainfall, and with over 90% of sunlight reaching the Earth's surface as direct radiation. [ 3 ] The second most favorable region lies between latitudes 15°N and 15°S. These regions have high levels of scattered radiation, with about 2500 hours of sunshine annually, owing to high humidity and frequent cloud cover. [ 3 ] Local education in the use of SODIS is important to avoid confusion between PET and other bottle materials. Applying SODIS without a proper assessment (or with a flawed assessment) of existing hygiene practices and diarrhea incidence may not address other routes of infection. Community trainers must themselves be trained first. [ 3 ] SODIS is an effective method for treating water where fuel or cookers are unavailable or prohibitively expensive. Even where fuel is available, SODIS is a more economical and environmentally friendly option. The application of SODIS is limited if enough bottles are not available, or if the water is highly turbid. In fact, if the water is highly turbid, SODIS cannot be used alone; additional filtering is then necessary. [ 6 ] A basic field test to determine whether the water is too turbid for the SODIS method to work properly is the newspaper test. [ 4 ] For the newspaper test, the user places the filled bottle upright on top of a newspaper headline and looks down through the bottle opening. If the letters of the headline are readable, the water can be used for the SODIS method. If the letters are not readable, the turbidity of the water likely exceeds 30 NTU, and the water must be pretreated. [ citation needed ] In theory, the method could be used in disaster relief or refugee camps. However, supplying bottles may be more difficult than providing equivalent disinfecting tablets containing chlorine, bromine, or iodine. In addition, in some circumstances, it may be difficult to guarantee that the water will be left in the sun for the necessary time. Other methods for household water treatment and safe storage exist, including chlorination, flocculation/disinfection, and various filtration procedures. The method should be chosen based on the criteria of effectiveness, the co-occurrence of other types of pollution (e.g. turbidity, chemical pollutants), treatment costs, labor input and convenience, and the user's preference. When the water is highly turbid, SODIS cannot be used alone; additional filtering or flocculation is then necessary to clarify the water prior to SODIS treatment. [ 7 ] [ 8 ] Recent work has shown that common table salt (NaCl) is an effective flocculation agent for decreasing turbidity for the SODIS method in some types of soil. [ 9 ] This approach could expand the geographic area in which the SODIS method can be used, as regions with highly turbid water could be treated at low cost. [ 10 ] SODIS may alternatively be implemented using plastic bags.
SODIS bags have been found to yield as much as 74% higher treatment efficiencies than SODIS bottles, which may be because the bags are able to reach elevated temperatures that cause accelerated treatment. [ 11 ] SODIS bags with a water layer of approximately 1 cm to 6 cm reach higher temperatures more easily than SODIS bottles, and treat Vibrio cholerae more effectively. [ 11 ] It is assumed this is because of the improved surface area to volume ratio in SODIS bags. In remote regions plastic bottles are not locally available and need to be shipped in from urban centers which may be expensive and inefficient since bottles cannot be packed very tightly. Bags can be packed more densely than bottles, and can be shipped at lower cost, representing an economically preferable alternative to SODIS bottles in remote communities. The disadvantages of using bags are that they can give the water a plastic smell, they are more difficult to handle when filled with water, and they typically require that the water be transferred to a second container for drinking. [ citation needed ] Another important benefit in using the SODIS bottles as opposed to the bags or other methods requiring the water to be transferred to a smaller container for consumption is that the bottles are a point-of-use household water treatment method. [ 12 ] Point-of-use means that the water is treated in the same easy to handle container it will be served from, thus decreasing the risk of secondary water contamination. [ 13 ] If the water bottles are not left in the sun for the proper length of time, the water may not be safe to drink and could cause illness. If the sunlight is less strong, due to overcast weather or a less sunny climate, a longer exposure time in the sun is necessary. [ citation needed ] The following issues should also be considered: According to the World Health Organization , more than two million people per year die of preventable water-borne diseases, and one billion people lack access to a source of improved drinking water. [ 21 ] [ 22 ] It has been shown that the SODIS method (and other methods of household water treatment) can very effectively remove pathogenic contamination from the water. However, infectious diseases are also transmitted through other pathways, i.e. due to a general lack of sanitation and hygiene. Studies on the reduction of diarrhea among SODIS users show reduction values of 30–80%. [ 23 ] [ 24 ] [ 25 ] [ 26 ] The effectiveness of the SODIS was first discovered by Aftim Acra, of the American University of Beirut in the early 1980s. Follow-up was conducted by the research groups of Martin Wegelin at the Swiss Federal Institute of Aquatic Science and Technology (EAWAG) and Kevin McGuigan at the Royal College of Surgeons in Ireland . Clinical control trials were pioneered by Ronan Conroy of the RCSI team in collaboration with Michael Elmore-Meegan . [ citation needed ] A joint research project on SODIS was implemented by the following institutions: The project embarked on a multi-country study including study areas in Zimbabwe , South Africa and Kenya . Other developments include a continuous flow disinfection unit [ 27 ] and solar disinfection with titanium dioxide film over glass cylinders, which prevents the bacterial regrowth of coliforms after SODIS. 
[ 28 ] Research has shown that a number of low-cost additives are capable of accelerating SODIS and that additives might make SODIS more rapid and effective in both sunny and cloudy weather, developments that could help make the technology more effective and acceptable to users. [ 29 ] A 2008 study showed that powdered seeds of five natural legumes (peas, beans and lentils)— Vigna unguiculata (cowpea), Phaseolus mungo (black lentil), Glycine max (soybean), Pisum sativum (green pea), and Arachis hypogaea (peanut)—when evaluated as natural flocculants for the removal of turbidity, were as effective as commercial alum and even superior for clarification in that the optimum dosage was low (1 g/L), flocculation was rapid (7–25 minutes, depending on the seed used) and the water hardness and pH was essentially unaltered. [ 30 ] Later studies have used chestnuts , oak acorns, and Moringa oleifera (drumstick tree) for the same purpose. [ 31 ] [ 32 ] Other research has examined the use of doped semiconductors to increase the production of oxygen radicals under solar UV-A. [ 33 ] Recently, researchers at the National Centre for Sensor Research and the Biomedical Diagnostics Institute at Dublin City University have developed an inexpensive printable UV dosimeter for SODIS applications that can be read using a mobile phone. [ 34 ] The camera of the phone is used to acquire an image of the sensor and custom software running on the phone analyses the sensor colour to provide a quantitative measurement of UV dose. In isolated regions the effect of wood smoke increases lung disease, due to the constant need for building fires to boil water and cook. Research groups have found that boiling of water is neglected due to the difficulty of gathering wood, which is scarce in many areas. When presented with basic household water treatment options, residents in isolated regions in Africa have shown a preference for the SODIS method over boiling or other basic water treatment methods. A very simple solar water purifier for rural households has been developed which uses 4 layers of saree cloth and solar tubular collectors to remove all coliforms. [ 35 ] In July 2020 researchers reported the development of a reusable aluminium surface for efficient solar-based water sanitation to below the WHO and EPA standards for drinkable water. [ 36 ] [ 37 ] The Swiss Federal Institute of Aquatic Science and Technology (EAWAG), through the Department of Water and Sanitation in Developing Countries (Sandec), coordinates SODIS promotion projects in 33 countries including Bhutan, Bolivia, Burkina Faso, Cambodia, Cameroon, DR Congo, Ecuador, El Salvador, Ethiopia, Ghana, Guatemala, Guinea, Honduras, India, Indonesia, Kenya, Laos, Malawi, Mozambique, Nepal, Nicaragua, Pakistan, Perú, Philippines, Senegal, Sierra Leone, Sri Lanka, Togo, Uganda, Uzbekistan, Vietnam, Zambia, and Zimbabwe. [ 38 ] SODIS projects are funded by, among others, the SOLAQUA Foundation , [ 39 ] several Lions Clubs , Rotary Clubs, Migros , and the Michel Comte Water Foundation. SODIS has also been applied in several communities in Brazil, one of them being Prainha do Canto Verde , Beberibe west of Fortaleza . Villagers there using the SODIS method have been quite successful, since the temperature during the day can go beyond 40 °C (104 °F) and there is a limited amount of shade. 
[ citation needed ] For public health workers reaching out to communities in need of suitable, cost-efficient, and sustainable water treatment methods, one of the most important considerations is teaching the importance of water quality, in the context of health promotion and disease prevention, while educating about the methods themselves. Although skepticism has made it harder for some communities to adopt SODIS and other household water treatment methods for daily use, disseminating knowledge of the health benefits associated with these methods is likely to increase adoption rates. [ citation needed ]
https://en.wikipedia.org/wiki/Solar_water_disinfection
Solarization refers to a phenomenon in physics where a material undergoes a temporary change in color after being subjected to high-energy electromagnetic radiation, such as ultraviolet light or X-rays. Clear glass and many plastics will turn amber, green or other colors when subjected to X-radiation, and glass may turn blue after long-term solar exposure in the desert. It is believed that solarization is caused by the formation of internal defects, called color centers, which selectively absorb portions of the visible light spectrum. In glass, color center absorption can often be reversed by heating the glass to high temperatures (a process called thermal bleaching) to restore the glass to its initial transparent state. Solarization may also permanently degrade a material's physical or mechanical properties, and is one of the mechanisms involved in the breakdown of plastics within the environment. In the field of clinical imaging, with sufficient exposure, solarization of certain screen-film systems can occur, obscuring details within the X-ray image and degrading the accuracy of the diagnosis. Even though such degradation can occur, it has been found to be a rare phenomenon. [ 1 ]
https://en.wikipedia.org/wiki/Solarization_(physics)
Solarroller is a BEAM dragster photovore robot that runs on a solar panel and uses sunlight as its energy source. In competitions between solarrollers, each one must cover one meter in the shortest time possible. Components include pager motors, capacitors, resistors, transistors, and solar panels. There are several different configurations of solarrollers, with bigger or smaller wheels and one or two motors. This robot type always moves forwards. The motor drives one or more wheels, and a "Solar Engine" circuit is used to power the robot. A solarroller's speed is directly related to the amount of light the robot registers on its optical sensor. Most are driven by an electronic "relaxation oscillator", in which charge is accumulated in a capacitor while at rest and then suddenly released into the drive mechanism.
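A minimal simulation of the relaxation-oscillator "solar engine" cycle described above: charge accumulates on a storage capacitor from the panel and is dumped into the drive motor once a trigger voltage is reached. All component values (panel current, capacitance, trigger and cut-off voltages, motor draw) are illustrative assumptions rather than specifications from this article.

```python
# Charge-and-dump "solar engine" cycle; all values are illustrative assumptions.
PANEL_CURRENT_A = 0.020   # assumed average photovoltaic charging current
CAPACITANCE_F = 0.1       # assumed storage capacitor (e.g. a small supercapacitor)
TRIGGER_V = 2.7           # assumed voltage at which the circuit fires the motor
CUTOFF_V = 1.4            # assumed voltage at which the motor is disconnected
MOTOR_CURRENT_A = 0.200   # assumed motor draw while running
DT = 0.01                 # simulation time step, seconds

def simulate(seconds: float) -> int:
    """Simulate the oscillator and return the number of motor bursts."""
    v, motor_on, bursts, t = 0.0, False, 0, 0.0
    while t < seconds:
        # dV/dt = I_net / C: the panel charges, the motor (when on) discharges.
        i_net = PANEL_CURRENT_A - (MOTOR_CURRENT_A if motor_on else 0.0)
        v = max(0.0, v + i_net * DT / CAPACITANCE_F)
        if not motor_on and v >= TRIGGER_V:
            motor_on = True    # dump the stored charge into the drive motor
            bursts += 1
        elif motor_on and v <= CUTOFF_V:
            motor_on = False   # motor off, back to charging
        t += DT
    return bursts

print(f"Motor bursts in 60 s: {simulate(60.0)}")
```

In a real BEAM solar engine the trigger is usually a voltage-detector or flashing-LED circuit rather than explicit thresholds, but the charge-and-dump behaviour is the same.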
https://en.wikipedia.org/wiki/Solarroller
Solder ( UK : / ˈ s ɒ l d ə , ˈ s ə ʊ l d ə / ; [ 1 ] NA : / ˈ s ɒ d ər / ) [ 2 ] is a fusible metal alloy used to create a permanent bond between metal workpieces. Solder is melted in order to wet the parts of the joint, where it adheres to and connects the pieces after cooling. Metals or alloys suitable for use as solder should have a lower melting point than the pieces to be joined. The solder should also be resistant to oxidative and corrosive effects that would degrade the joint over time. Solder used in making electrical connections also needs to have favorable electrical characteristics. Soft solder typically has a melting point range of 90 to 450 °C (190 to 840 °F; 360 to 720 K), [ 3 ] and is commonly used in electronics , plumbing , and sheet metal work. Alloys that melt between 180 and 190 °C (360 and 370 °F; 450 and 460 K) are the most commonly used. Soldering performed using alloys with a melting point above 450 °C (840 °F; 720 K) is called "hard soldering", "silver soldering", or brazing . In specific proportions, some alloys are eutectic — that is, the alloy's melting point is the lowest possible for a mixture of those components, and coincides with the freezing point. Non-eutectic alloys can have markedly different solidus and liquidus temperatures, as they have distinct liquid and solid transitions. Non-eutectic mixtures often exist as a paste of solid particles in a melted matrix of the lower-melting phase as they approach high enough temperatures. In electrical work, if the joint is disturbed while in this "pasty" state before it fully solidifies, a poor electrical connection may result; use of eutectic solder reduces this problem. The pasty state of a non-eutectic solder can be exploited in plumbing, as it allows molding of the solder during cooling, e.g. for ensuring watertight joint of pipes, resulting in a so-called "wiped joint". For electrical and electronics work, solder wire is available in a range of thicknesses for hand-soldering (manual soldering is performed using a soldering iron or soldering gun ), and with cores containing flux . It is also available as a room temperature paste, as a preformed foil shaped to match the workpiece which may be more suited for mechanized mass-production , or in small "tabs" that can be wrapped around the joint and melted with a flame where an iron isn't usable or available, as for instance in field repairs. Alloys of lead and tin were commonly used in the past and are still available; they are particularly convenient for hand-soldering. Lead-free solders have been increasing in use due to regulatory requirements plus the health and environmental benefits of avoiding lead-based electronic components. They are almost exclusively used today in consumer electronics. [ 4 ] Plumbers often use bars of solder, much thicker than the wire used for electrical applications, and apply flux separately; many plumbing-suitable soldering fluxes are too corrosive (or conductive) to be used in electrical or electronic work. Jewelers often use solder in thin sheets, which they cut into snippets. The word solder comes from the Middle English word soudur , via Old French solduree and soulder , from the Latin solidare , meaning "to make solid". [ 5 ] Tin - lead (Sn-Pb) solders, also called soft solders , are commercially available with tin concentrations between 5% and 70% by weight. The greater the tin concentration, the greater the solder's tensile and shear strengths . 
Lead mitigates the formation of tin whiskers, [ 6 ] though the precise mechanism for this is unknown. [ 7 ] Today, many techniques are used to mitigate the problem, including changes to the annealing process (heating and cooling), addition of elements like copper and nickel, and the application of conformal coatings. [ 8 ] Alloys commonly used for electrical soldering are 60/40 Sn-Pb, which melts at 188 °C (370 °F), [ 9 ] and 63/37 Sn-Pb, used principally in electrical/electronic work. The latter mixture is a eutectic alloy of these metals, melting at a single temperature of about 183 °C and solidifying without passing through a pasty state. In the United States, since 1974, lead has been prohibited in solder and flux in plumbing applications for drinking water use, per the Safe Drinking Water Act. [ 10 ] Historically, a higher proportion of lead was used, commonly 50/50. This had the advantage of making the alloy solidify more slowly. With the pipes being physically fitted together before soldering, the solder could be wiped over the joint to ensure water tightness. Although lead water pipes were displaced by copper when the significance of lead poisoning began to be fully appreciated, lead solder was still used until the 1980s because it was thought that the amount of lead that could leach into water from a properly soldered joint was negligible. The electrochemical couple of copper and lead promotes corrosion of the lead and tin; tin, however, is protected by an insoluble oxide. Since even small amounts of lead have been found detrimental to health as a potent neurotoxin, [ 11 ] lead in plumbing solder was replaced by silver (in food-grade applications) or antimony, with copper often added, and the proportion of tin was increased (see lead-free solder). The addition of tin, more expensive than lead, improves the wetting properties of the alloy; lead itself has poor wetting characteristics. High-tin tin-lead alloys have limited use because the same workability range can be provided by a cheaper high-lead alloy. [ 12 ] Lead-tin solders readily dissolve gold plating and form brittle intermetallics. [ 13 ] 60/40 Sn-Pb solder oxidizes on the surface, forming a complex 4-layer structure: tin(IV) oxide on the surface, below it a layer of tin(II) oxide with finely dispersed lead, followed by a layer of tin(II) oxide with finely dispersed tin and lead, and the solder alloy itself underneath. [ 14 ] Lead, and to some degree tin, as used in solder contains small but significant amounts of radioisotope impurities. Radioisotopes undergoing alpha decay are a concern due to their tendency to cause soft errors. Polonium-210 is especially troublesome; lead-210 beta decays to bismuth-210, which then beta decays to polonium-210, an intense emitter of alpha particles. Uranium-238 and thorium-232 are other significant contaminants of alloys of lead. [ 15 ] [ 16 ] The European Union Waste Electrical and Electronic Equipment Directive and Restriction of Hazardous Substances Directive were adopted in early 2003 and came into effect on July 1, 2006, restricting the inclusion of lead in most consumer electronics sold in the EU, and having a broad effect on consumer electronics sold worldwide. In the US, manufacturers may receive tax benefits by reducing the use of lead-based solder. Lead-free solders in commercial use may contain tin, copper, silver, bismuth, indium, zinc, antimony, and traces of other metals. Most lead-free replacements for conventional 60/40 and 63/37 Sn-Pb solder have melting points from 50 to 200 °C higher, [ 17 ] though there are also solders with much lower melting points.
Lead-free solder typically requires around 2% flux by mass for adequate wetting ability. [ 18 ] When lead-free solder is used in wave soldering, a slightly modified solder pot may be desirable (e.g. titanium liners or impellers) to reduce maintenance costs due to the increased tin-scavenging of high-tin solder. Lead-free solder is prohibited in critical applications, such as aerospace, military and medical projects, because joints are likely to suffer from metal fatigue failure under stress (such as that from thermal expansion and contraction). Although this is a property that conventional leaded solder possesses as well (like any metal), the point at which stress fatigue usually occurs in leaded solder is substantially above the level of stresses normally encountered. Tin-silver-copper (Sn-Ag-Cu, or SAC) solders are used by two-thirds of Japanese manufacturers for reflow and wave soldering, and by about 75% of companies for hand soldering. The widespread use of this popular lead-free solder alloy family is based on the reduced melting point of the Sn-Ag-Cu ternary eutectic (217 °C; 423 °F), which is below the 96.5/3.5 Sn-Ag (wt.%) eutectic of 221 °C (430 °F) and the 99.3/0.7 Sn-Cu eutectic of 227 °C (441 °F). [ 19 ] The ternary eutectic behavior of Sn-Ag-Cu and its application for electronics assembly was discovered (and patented) by a team of researchers from Ames Laboratory, Iowa State University, and from Sandia National Laboratories-Albuquerque. Much recent research has focused on the addition of a fourth element to Sn-Ag-Cu solder, in order to provide compatibility for the reduced cooling rate of solder sphere reflow for assembly of ball grid arrays. Examples of these four-element compositions are 18/64/14/4 tin-silver-copper-zinc (Sn-Ag-Cu-Zn) (melting range 217–220 °C) and 18/64/16/2 tin-silver-copper-manganese (Sn-Ag-Cu-Mn; melting range 211–215 °C). Tin-based solders readily dissolve gold, forming brittle intermetallic joints; for Sn-Pb alloys the critical concentration of gold needed to embrittle the joint is about 4%. Indium-rich solders (usually indium-lead) are more suitable for soldering thicker gold layers, as the dissolution rate of gold in indium is much slower. Tin-rich solders also readily dissolve silver; for soldering silver metallization or surfaces, alloys with an addition of silver are suitable; tin-free alloys are also a choice, though their wetting ability is poorer. If the soldering time is long enough to form the intermetallics, the tin surface of a joint soldered to gold is very dull. [ 13 ] Hard solders are used for brazing, and melt at higher temperatures. Alloys of copper with either zinc or silver are the most common. In silversmithing or jewelry making, special hard solders are used that will pass assay. They contain a high proportion of the metal being soldered, and lead is not used in these alloys. These solders vary in hardness, designated as "enameling", "hard", "medium", "easy" and "repair". Enameling solder has a high melting point, close to that of the material itself, to prevent the joint desoldering during firing in the enameling process. The remaining solder types are used in decreasing order of hardness during the process of making an item, to prevent a previously soldered seam or joint desoldering while additional sites are soldered. Easy solder or repair solder is also often used for repair work for the same reason. Flux is also used to prevent joints from desoldering.
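As a small illustration of how the eutectic temperatures quoted above compare in practice, the sketch below collects them and filters by an assumed peak process temperature. The 63/37 Sn-Pb value of 183 °C is the commonly cited eutectic temperature rather than a figure stated in this article, and the 20 °C superheat margin is a rough rule of thumb, not a standard.

```python
# Eutectic melting points (in degC); the Sn-Pb value is the commonly cited 183 degC.
EUTECTIC_MELTING_POINT_C = {
    "Sn63/Pb37": 183,
    "SAC (Sn-Ag-Cu)": 217,
    "Sn96.5/Ag3.5": 221,
    "Sn99.3/Cu0.7": 227,
}

def usable_alloys(peak_process_temp_c: float, margin_c: float = 20.0) -> list[str]:
    """Alloys whose eutectic point sits at least `margin_c` below the peak process temperature."""
    return [name for name, mp in EUTECTIC_MELTING_POINT_C.items()
            if mp + margin_c <= peak_process_temp_c]

print(usable_alloys(245))  # typical lead-free reflow peak (assumed): all but Sn99.3/Cu0.7
print(usable_alloys(215))  # under this assumed limit only the Sn-Pb eutectic qualifies
```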
Silver solder is also used in manufacturing to join metal parts that cannot be welded . The alloys used for these purposes contain a high proportion of silver (up to 40%), and may also contain cadmium . Different elements serve different roles in the solder alloy: Impurities usually enter the solder reservoir by dissolving the metals present in the assemblies being soldered. Dissolving of process equipment is not common as the materials are usually chosen to be insoluble in solder. [ 25 ] Board finishes vs wave soldering bath impurities buildup: Flux is a reducing agent designed to help reduce (return oxidized metals to their metallic state) metal oxides at the points of contact to improve the electrical connection and mechanical strength. The two principal types of flux are acid flux (sometimes called "active flux"), containing strong acids, used for metal mending and plumbing, and rosin flux (sometimes called "passive flux"), used in electronics. Rosin flux comes in a variety of "activities", corresponding roughly to the speed and effectiveness of the organic acid components of the rosin in dissolving metallic surface oxides, and consequently the corrosiveness of the flux residue. Due to concerns over atmospheric pollution and hazardous waste disposal, the electronics industry has been gradually shifting from rosin flux to water-soluble flux, which can be removed with deionized water and detergent , instead of hydrocarbon solvents . Water-soluble fluxes are generally more conductive than traditionally used electrical / electronic fluxes and so have more potential for electrically interacting with a circuit; in general it is important to remove their traces after soldering. Some rosin type flux traces likewise should be removed, and for the same reason. In contrast to using traditional bars or coiled wires of all-metal solder and manually applying flux to the parts being joined, much hand soldering since the mid-20th century has used flux-core solder. This is manufactured as a coiled wire of solder, with one or more continuous bodies of inorganic acid or rosin flux embedded lengthwise inside it. As the solder melts onto the joint, it frees the flux and releases that on it as well. The solidifying behavior depends on the alloy composition. Pure metals solidify at a certain temperature, forming crystals of one phase. Eutectic alloys also solidify at a single temperature, all components precipitating simultaneously in so-called coupled growth . Non-eutectic compositions on cooling start to first precipitate the non-eutectic phase; dendrites when it is a metal, large crystals when it is an intermetallic compound. Such a mixture of solid particles in a molten eutectic is referred to as a mushy state. Even a relatively small proportion of solids in the liquid can dramatically lower its fluidity. [ 28 ] The temperature of total solidification is the solidus of the alloy, the temperature at which all components are molten is the liquidus. The mushy state is desired where a degree of plasticity is beneficial for creating the joint, allowing filling larger gaps or being wiped over the joint (e.g. when soldering pipes). In hand soldering of electronics it may be detrimental as the joint may appear solidified while it is not yet. Premature handling of such joint then disrupts its internal structure and leads to compromised mechanical integrity. Many different intermetallic compounds are formed during solidifying of solders and during their reactions with the soldered surfaces. 
[ 25 ] The intermetallics form distinct phases, usually as inclusions in a ductile solid solution matrix, but also can form the matrix itself with metal inclusions or form crystalline matter with different intermetallics. Intermetallics are often hard and brittle. Finely distributed intermetallics in a ductile matrix yield a hard alloy while coarse structure gives a softer alloy. A range of intermetallics often forms between the metal and the solder, with increasing proportion of the metal; e.g. forming a structure of Cu−Cu 3 Sn−Cu 6 Sn 5 −Sn . Layers of intermetallics can form between the solder and the soldered material. These layers may cause mechanical reliability weakening and brittleness, increased electrical resistance, or electromigration and formation of voids. The gold-tin intermetallics layer is responsible for poor mechanical reliability of tin-soldered gold-plated surfaces where the gold plating did not completely dissolve in the solder. Two processes play a role in a solder joint formation: interaction between the substrate and molten solder, and solid-state growth of intermetallic compounds. The base metal dissolves in the molten solder in an amount depending on its solubility in the solder. The active constituent of the solder reacts with the base metal with a rate dependent on the solubility of the active constituents in the base metal. The solid-state reactions are more complex – the formation of intermetallics can be inhibited by changing the composition of the base metal or the solder alloy, or by using a suitable barrier layer to inhibit diffusion of the metals. [ 29 ] Some example interactions include: A preform is a pre-made shape of solder specially designed for the application where it is to be used. Many methods are used to manufacture the solder preform, stamping being the most common. The solder preform may include the solder flux needed for the soldering process. This can be an internal flux, inside the solder preform, or external, with the solder preform coated. Glass solder is used to join glasses to other glasses, ceramics , metals , semiconductors , mica , and other materials, in a process called glass frit bonding . The glass solder has to flow and wet the soldered surfaces well below the temperature where deformation or degradation of either of the joined materials or nearby structures (e.g., metallization layers on chips or ceramic substrates) occurs. The usual temperature of achieving flowing and wetting is between 450 and 550 °C (840 and 1,020 °F).
https://en.wikipedia.org/wiki/Solder
Solder is a metallic material that is used to connect metal workpieces. The choice of a specific solder alloy depends on its melting point, chemical reactivity, mechanical properties, toxicity, and other properties. Hence a wide range of solder alloys exists, and only the major ones are listed below. Since the early 2000s, the use of lead in solder alloys has been discouraged by several governmental guidelines in the European Union, Japan and other countries, [ 1 ] such as the Restriction of Hazardous Substances Directive and the Waste Electrical and Electronic Equipment Directive. In the Sn-Pb alloys, tensile strength increases with increasing tin content. Indium-tin alloys with high indium content have very low tensile strength. [ 11 ] For soldering semiconductor materials, e.g. die attachment of silicon, germanium and gallium arsenide, it is important that the solder contains no impurities that could cause doping in the wrong direction. For soldering n-type semiconductors, the solder may be doped with antimony; indium may be added for soldering p-type semiconductors. Pure tin can also be used. [ 59 ] [ 96 ] Various fusible alloys can be used as solders with very low melting points; examples include Field's metal, Lipowitz's alloy, Wood's metal, and Rose's metal. The thermal conductivity of common solders ranges from 30 to 400 W/(m·K), and their density from 9.25 to 15.00 g/cm³. [ 97 ] [ 98 ]
https://en.wikipedia.org/wiki/Solder_alloys
Solder fatigue is the mechanical degradation of solder due to deformation under cyclic loading. This can often occur at stress levels below the yield stress of solder as a result of repeated temperature fluctuations, mechanical vibrations , or mechanical loads . Techniques to evaluate solder fatigue behavior include finite element analysis and semi-analytical closed-form equations . [ 1 ] Solder is a metal alloy used to form electrical, thermal, and mechanical interconnections between the component and printed circuit board (PCB) substrate in an electronic assembly. Although other forms of cyclic loading are known to cause solder fatigue, it has been estimated that the largest portion of electronic failures are thermomechanically [ 2 ] driven due to temperature cycling. [ 3 ] Under thermal cycling, stresses are generated in the solder due to coefficient of thermal expansion (CTE) mismatches. This causes the solder joints to experience non-recoverable deformation via creep and plasticity that accumulates and leads to degradation and eventual fracture . Historically, tin-lead solders were common alloys used in the electronics industry . Although they are still used in select industries and applications, lead-free solders have become significantly more popular due to RoHS regulatory requirements. This new trend increased the need to understand the behavior of lead-free solders. Much work has been done to characterize the creep-fatigue behavior of various solder alloys and develop predictive life damage models using a Physics of Failure approach. These models are often used when trying to assess solder joint reliability. The fatigue life of a solder joint depends on several factors including: the alloy type and resulting microstructure , the joint geometry, the component material properties, the PCB substrate material properties, the loading conditions, and the boundary conditions of the assembly. During a product's operational lifetime it undergoes temperature fluctuations from application specific temperature excursions and self-heating due to component power dissipation . Global and local mismatches of coefficient of thermal expansion (CTE) between the component, component leads, PCB substrate, and system level effects [ 4 ] drive stresses in the interconnects (i.e. solder joints). Repeated temperature cycling eventually leads to thermomechanical fatigue. The deformation characteristics of various solder alloys can be described at the microscale due to the differences in composition and resulting microstructure. Compositional differences lead to variations in phase (s), grain size, and intermetallics . This affects susceptibility to deformation mechanisms such as dislocation motion, diffusion , and grain boundary sliding . During thermal cycling, the solder's microstructure (grains/phases) will tend to coarsen [ 5 ] as energy is dissipated from the joint. This eventually leads to crack initiation and propagation which can be described as accumulated fatigue damage. [ 6 ] The resulting bulk behavior of solder is described as viscoplastic (i.e. rate dependent inelastic deformation) with sensitivity to elevated temperatures. Most solders experience temperature exposures near their melting temperature (high homologous temperature ) throughout their operational lifetime which makes them susceptible to significant creep. Several constitutive models have been developed to capture the creep characteristics of lead and lead-free solders. 
Creep behavior can be described in three stages: primary, secondary, and tertiary creep. When modeling solder, secondary creep, also called steady-state creep (constant strain rate), is often the region of interest for describing solder behavior in electronics. Some models also incorporate primary creep. Two of the most popular models are the hyperbolic sine models developed by Garofalo [ 7 ] and Anand [ 8 ] [ 9 ] to characterize the steady-state creep of solder. These model parameters are often incorporated as inputs in FEA simulations to properly characterize the solder response to loading. Solder damage models take a physics-of-failure based approach by relating a physical parameter that is a critical measure of the damage mechanism process (i.e. inelastic strain range or dissipated strain energy density) to cycles to failure. The relationship between the physical parameter and cycles to failure typically takes on a power law or modified power law relationship with material-dependent model constants. These model constants are fit from experimental testing and simulation for different solder alloys. For complex loading schemes, Miner's linear superposition damage law [ 10 ] is employed to calculate accumulated damage. The generalized Coffin–Manson [ 11 ] [ 12 ] [ 13 ] [ 14 ] model considers the elastic and plastic strain range by incorporating Basquin's equation [ 15 ] and takes the form: {\displaystyle {\frac {\Delta \epsilon }{2}}={\frac {\sigma _{f}^{'}-\sigma _{m}}{E}}(2N_{f})^{b}+\epsilon _{f}^{'}(2N_{f})^{c}} Here Δε/2 represents the elastic-plastic cyclic strain range, E represents the elastic modulus, σ_m represents the mean stress, and N_f represents cycles to failure. The remaining variables, namely σ'_f, ε'_f, b, and c, are fatigue coefficients and exponents representing material model constants. The generalized Coffin–Manson model accounts for the effects of high cycle fatigue (HCF), primarily due to elastic deformation, and low cycle fatigue (LCF), primarily due to plastic deformation. In the 1980s Engelmaier proposed a model, [ 16 ] in conjunction with the work of Wild, [ 17 ] that accounted for some of the limitations of the Coffin–Manson model, such as the effects of frequency and temperature. His model takes a similar power law form: {\displaystyle N_{f}(50\%)={\frac {1}{2}}({\frac {\Delta \gamma }{2\epsilon '_{f}}})^{\frac {1}{c}}} {\displaystyle c=-0.442-6\cdot 10^{-4}T_{s}+1.74\cdot 10^{-2}\ln(1+f)} Engelmaier relates the total shear strain (Δγ) to cycles to failure (N_f). ε'_f and c are model constants, where c is a function of the mean temperature during thermal cycling (T_s) and the thermal cycling frequency (f). {\displaystyle \Delta \gamma =C({\frac {L_{D}}{h_{s}}})\Delta \alpha \Delta T} Δγ can be calculated as a function of the distance from the neutral point (L_D), the solder joint height (h_s), the coefficient of thermal expansion mismatch (Δα), and the change in temperature (ΔT). In this case C is an empirical model constant. This model was initially proposed for leadless devices with tin-lead solder. The model has since been modified by Engelmaier and others [ who? ] to account for other phenomena such as leaded components, thermal cycling dwell times, and lead-free solders.
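To make the Engelmaier relations above concrete, the sketch below chains the shear-strain estimate into the cycles-to-failure expression. All numeric inputs (the joint geometry, temperature swing, cycling frequency in cycles per day, the empirical constant C, and the fatigue ductility coefficient ε'_f, for which 0.325 is a value commonly used for eutectic Sn-Pb) are illustrative assumptions and are not given in this article.

```python
import math

def shear_strain_range(C: float, L_D: float, h_s: float, d_alpha: float, dT: float) -> float:
    """Total shear strain range from CTE mismatch and joint geometry (Engelmaier/Wild form)."""
    return C * (L_D / h_s) * d_alpha * dT

def engelmaier_Nf50(delta_gamma: float, eps_f: float, T_s: float, f: float) -> float:
    """Median cycles to failure N_f(50%) from the Engelmaier model quoted above."""
    c = -0.442 - 6e-4 * T_s + 1.74e-2 * math.log(1.0 + f)
    return 0.5 * (delta_gamma / (2.0 * eps_f)) ** (1.0 / c)

# Illustrative assumptions (not values from the article):
dg = shear_strain_range(C=1.0, L_D=5e-3, h_s=0.1e-3, d_alpha=14e-6, dT=80.0)  # ~0.056
Nf = engelmaier_Nf50(dg, eps_f=0.325, T_s=40.0, f=1.0)  # f assumed in cycles per day
print(f"Shear strain range {dg:.4f} -> N_f(50%) ~ {Nf:.0f} cycles")
```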
While initially a substantial improvement over other techniques to predict solder fatigue, such as testing and simple acceleration transforms, it is now generally acknowledged [ citation needed ] that Engelmaier and other models that are based on strain range do not provide a sufficient degree of accuracy. Darveaux [ 18 ] [ 19 ] proposed a model relating the volume-weighted average inelastic work density, the number of cycles to crack initiation, and the crack propagation rate to the characteristic cycles to failure. {\displaystyle N_{0}=K_{1}\Delta W_{avg}^{K_{2}}} {\displaystyle {\frac {da}{dN}}=K_{3}\Delta W_{avg}^{K_{4}}} {\displaystyle N_{f}=N_{0}+{\frac {a}{da/dN}}=K_{1}\Delta W_{avg}^{K_{2}}+{\frac {a}{K_{3}\Delta W_{avg}^{K_{4}}}}} In the first equation N_0 represents the number of cycles to crack initiation, ΔW represents the inelastic work density, and K_1 and K_2 are material model constants. In the second equation, da/dN represents the crack propagation rate, ΔW represents the inelastic work density, and K_3 and K_4 are material model constants. In this case the crack propagation rate is approximated as constant. N_f represents the characteristic cycles to failure and a represents the characteristic crack length. Model constants can be fit for different solder alloys using a combination of experimental testing and finite element analysis (FEA) simulation. The Darveaux model has been found to be relatively accurate by several authors. [ 20 ] [ 21 ] However, due to the expertise, complexity, and simulation resources required, its use has been primarily limited to component manufacturers evaluating component packaging. The model has not received acceptance in regards to modeling solder fatigue across an entire printed circuit assembly and has been found to be inaccurate in predicting system-level effects (triaxiality) on solder fatigue. [ 22 ] The current solder joint fatigue model preferred by the majority of electronic OEMs worldwide [ citation needed ] is the Blattau model, which is available in the Sherlock Automated Design Analysis software. The Blattau model is an evolution [ citation needed ] of the previous models discussed above. Blattau incorporates the use of strain energy proposed by Darveaux, while using closed-form equations based on classical mechanics to calculate the stress and strain applied to the solder interconnect. [ 23 ] An example of these stress/strain calculations for a simple leadless chip component is shown in the following equation: {\displaystyle (\alpha _{1}-\alpha _{2})\cdot \Delta T\cdot L_{D}=F\cdot ({\frac {L_{D}}{E_{1}A_{1}}}+{\frac {L_{D}}{E_{2}A_{2}}}+{\frac {h_{S}}{A_{s}G_{s}}}+{\frac {h_{c}}{A_{c}G_{c}}}+({\frac {2-\nu }{9G_{b}a}}))} Here α is the CTE, T is temperature, L_D is the distance to the neutral point, E is the elastic modulus, A is the area, h is the thickness, G is the shear modulus, ν is Poisson's ratio, and a is the edge length of the copper bond pad. The subscript 1 refers to the component, 2 and b refer to the board, and s refers to the solder joint. The shear stress (Δτ) is then calculated by dividing this calculated force by the effective solder joint area.
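The closed-form expression just quoted can be evaluated directly. The sketch below solves it for the force transmitted through the joint of a hypothetical leadless chip component and then converts that force into the shear stress range; every material property and dimension is an illustrative assumption, not data from this article or from the Blattau model's published constants.

```python
# Closed-form stress estimate for a hypothetical leadless chip component.
# All values below are illustrative assumptions (SI units).
alpha_1, alpha_2 = 6e-6, 17e-6        # CTEs of component and board, 1/degC (assumed)
dT = 80.0                             # temperature swing, degC (assumed)
L_D = 3.0e-3                          # distance from the neutral point, m (assumed)
E1, A1 = 300e9, 1.6e-6                # component modulus (Pa) and cross-section (m^2)
E2, A2 = 20e9, 3.0e-6                 # board modulus (Pa) and effective area (m^2)
h_s, A_s, G_s = 75e-6, 1.0e-6, 15e9   # solder joint height, area, shear modulus
h_c, A_c, G_c = 35e-6, 1.0e-6, 45e9   # copper pad thickness, area, shear modulus
nu, G_b, a = 0.35, 7e9, 1.0e-3        # board Poisson ratio, board shear modulus, pad edge length

# (alpha_1 - alpha_2) * dT * L_D = F * (sum of compliances)  ->  solve for F.
compliance = (L_D / (E1 * A1) + L_D / (E2 * A2)
              + h_s / (A_s * G_s) + h_c / (A_c * G_c)
              + (2.0 - nu) / (9.0 * G_b * a))
F = abs(alpha_1 - alpha_2) * dT * L_D / compliance   # force through the joint, N
delta_tau = F / A_s                                  # shear stress range in the solder, Pa
print(f"F = {F:.1f} N, delta_tau = {delta_tau / 1e6:.1f} MPa")
```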
Strain energy is computed using the shear strain range and shear stress from the following relationship: {\displaystyle \Delta W={\frac {1}{2}}\Delta \gamma \Delta \tau } This approximates the hysteresis loop as roughly equilateral in shape. Blattau uses this strain energy value in conjunction with models developed by Syed [ 24 ] to relate dissipated strain energy to cycles to failure. The Norris–Landzberg model is a modified Coffin–Manson model. [ 25 ] [ 26 ] Additional strain range and strain energy based models have been proposed by several others. [ 24 ] [ 27 ] [ 28 ] While not as prevalent as thermomechanical solder fatigue, vibration fatigue and cyclic mechanical fatigue are also known to cause solder failures. Vibration fatigue is typically considered to be high cycle fatigue (HCF), with damage driven by elastic deformation and sometimes plastic deformation. This can depend on the input excitation for both harmonic and random vibration. Steinberg [ 29 ] developed a vibration model to predict time to failure based on the calculated board displacement. This model takes into account the input vibration profile, such as the power spectral density or acceleration time history, the natural frequency of the circuit card, and the transmissibility. Blattau developed a modified Steinberg model [ 30 ] that uses board-level strains rather than displacement and has sensitivity to individual package types. Additionally, low-temperature isothermal mechanical cycling is typically modeled with a combination of LCF and HCF strain range or strain energy models. The solder alloy, assembly geometry and materials, boundary conditions, and loading conditions will affect whether fatigue damage is dominated by elastic (HCF) or plastic (LCF) damage. At lower temperatures and faster strain rates, creep can be approximated as minimal and any inelastic damage will be dominated by plasticity. Several strain range and strain energy models have been employed in this type of case, such as the generalized Coffin–Manson model. In this case, much work has been done to characterize the model constants of various damage models for different alloys.
https://en.wikipedia.org/wiki/Solder_fatigue
The Soldier Creek Kilns near Stockton, Utah date from about 1873, the time of their construction, and were in use up until about 1899. Also known as the Waterman Coking Ovens , they were listed on the National Register of Historic Places (NRHP) in 1980. The listing included 14 contributing structures over 30 acres (12 ha). [ 1 ] The site includes four smelting kilns which document smelting technology brought from California and from the eastern U.S. One of the four, the best-preserved, is an eastern beehive-type parabolic-shaped kiln, that would hold more than 10 cords of wood and would be tended from two iron doors. [ 2 ] In 1996, it was argued that these were worth preserving. [ 2 ] The location of the site is not disclosed; they are listed as "Address Restricted", [ 1 ] as is done for archeological resources that may be damaged and lose their information potential, if not protected.
https://en.wikipedia.org/wiki/Soldier_Creek_Kilns
Solemya is a genus of saltwater clams, marine bivalve mollusks in the family Solemyidae , the awning clams. Solemya is the type genus of the family Solemyidae. The shell valves of species in this genus are fragile and subcylindrical in shape; there are no hinge teeth . The shell has a persistent thin periostracum which extends beyond the valve margins, hence the common name "awning clams". These clams have chemosynthetic bacterial symbionts that produce their food. The bacteria live within their gill cells, and produce energy by oxidizing hydrogen sulfide , which they then use to fix carbon dioxide via the Calvin cycle . This symbiosis has been best-studied in the Atlantic species S. velum and the Pacific species S. reidi . [ 2 ] Species within the genus Solemya include:
https://en.wikipedia.org/wiki/Solemya
Solemya velum, the Atlantic awning clam, is a species of marine bivalve mollusc in the family Solemyidae, the awning clams. This species is found along the eastern coast of North America, from Nova Scotia to Florida, [ 1 ] and inhabits subtidal sediments with high organic matter (OM) content and low oxygen, such as salt ponds, salt marshes, and sewage outfalls. [ 2 ] Species within the Solemya genus are distinguished by their reduced or absent guts and their association with symbiotic, chemosynthetic bacteria, which produce metabolic energy by oxidizing sulfide in order to fix carbon for their hosts. [ 2 ] Other Solemya species have been discovered near hydrothermal vents and cold seeps, environments where chemosynthesis and bacterial symbiosis are common. [ 3 ] S. velum is characterized by an elongated oval shell with parallel ventral and dorsal margins. Individuals range from 8 to 10 cm in length, and the shells are lightly calcified, making them distinctively thin and brittle. The periostracum is a smooth, dark brown layer that extends past the shell edge. Unlike most bivalves, the interior of the hinge has no teeth. Sulfur-oxidizing bacterial symbionts are intracellular, harbored in the epithelial cells of the S. velum gills, and the gill tissue appears yellow when freshly collected due to the build-up of sulfur compounds. [ 2 ] Solemya belongs to a group of "primitive" bivalves called protobranchs, which may be an ancestral or early-diverging group. [ 4 ] Most protobranchs live with the anterior end down in the sediment so that the gills on the posterior end orient upwards. In contrast to other clams, water is circulated from the anterior end toward the posterior end and across the gills. Protobranchs usually have long extensions of the mouth called labial palps, which they extend into the sediment to pick up particles for feeding, though Solemya species lack labial palps because of their reliance on symbiotic bacteria. Some protobranchs, including Solemya, also have a small flattened "sole" on their foot, which aids the clam in burrowing. The sole has left and right halves which can be folded together to collapse the foot into a narrow profile. The foot is then inserted into the sediment, the sole is unfolded to its wide configuration, and the foot is retracted to draw the clam down into the sediment. Because of this foot structure, Solemya creates distinctive U-shaped burrows and can completely bury itself with two thrusts of the foot. S. velum individuals have been found as deep as 100 m. [ 2 ] Most bivalve species are filter feeders, but with their reduced guts and reliance on symbiotic bacteria, Solemya species either seldom filter-feed or have abandoned filter-feeding altogether. Whether or not S. velum engages in filter feeding as a secondary food source is still an active area of research. [ 5 ] The bacterial symbionts within S. velum and other Solemya species are chemoautotrophic, able to fix carbon by using chemical energy from sulfur-oxidation reactions and taking up CO2. The presence of these clams and their symbionts in areas with high woody debris or sewage is critical in cycling carbon and breaking down sulfur compounds, reducing the toxicity of near-anoxic sediments. [ 6 ] S. velum is considered a model organism for studying bacterial symbiosis in bivalves. [ 4 ] More accessible than its deep-sea relatives, S. velum can be collected from intertidal sediments and is easy to maintain in laboratory experiments. The genome of S. velum was sequenced in 2006 and is valuable for studying the relationships between animal and bacterial cells. The carbon-fixation capabilities of S. velum symbionts are an active area of research because of the importance of CO2 consumption in marine carbon cycling. [ 7 ]
https://en.wikipedia.org/wiki/Solemya_velum
Solemyidae is a family of saltwater clams, marine protobranch bivalve mollusks in the order Solemyida . [ 2 ] Solemyids are remarkable in that their digestive tract is either extremely small or non-existent, and their feeding appendages are too short to reach outside the shell. [ 3 ] It has been shown that these clams host sulphur-oxidizing bacteria intracellularly within their gill filaments. As chemoautotrophs , these bacterial symbionts synthesize organic matter from CO 2 and are the primary source of nutrition for the whole organism. [ 4 ] [ 5 ] In turn, the animal host provides its symbionts a habitat in which they have access to the substrates of chemoautotrophy (O 2 , CO 2 , and reduced inorganic compounds such as H 2 S). Together, these partners create "animals" with novel metabolic capabilities. The family Solemyidae includes two genera and the following species:
https://en.wikipedia.org/wiki/Solemyidae
In biology, solenocytes are elongated, flagellated cells commonly found in lower invertebrates, such as flatworms (phylum Platyhelminthes ), chordates (sub-phylum Cephalochordata ) and several other animal species. [ citation needed ] In terms of function, solenocytes play a significant role in the excretory systems of their host organisms. [ 1 ] For example, the lancelets, also referred to as amphioxus (genus Branchiostoma ), utilize solenocytic protonephridia to perform excretion. [ 2 ] In addition to excretion, these cells contribute to ion regulation and osmoregulation . [ citation needed ] With this in mind, solenocytes form subtypes of protonephridium and are often compared to another specialized excretory cell type, the flame cells . [ citation needed ] Solenocytes have flagella, while flame cells are generally ciliated. [ 3 ] Solenocytes are mesoderm-derived and morphologically diverse cells containing a cytoplasmic cap or enclosed cell body with a nucleus residing in its core. [ citation needed ] A long tubule is attached to the cell body, and within its intracellular lumen lie either one or two long flagella. [ 4 ] The continuously moving, vibratile flagella extend from a protein structure, referred to as the basal body , found at the base of the flagellar structure. Extending through the length of the tubule, the flagella protrude into the protonephridium lumen (see Figure 1 ). [ citation needed ] The tubule wall is composed of thin, pillar-like rods perforated by tiny openings. These pore spaces are likely the site of interstitial fluid filtration. [ 4 ] A nephridium contains approximately 500 solenocytes, each of which is roughly 50 microns in length (this measure includes the nucleated cell body and tubule). [ citation needed ] The excretory organ of the amphioxus Branchiostoma belcheri contains clusters of solenocytes (the majority of which are situated along the coelomic surface of the ligamentum denticulatum). These clusters occur at patterned intervals, forming groups among the renal tubules of B. belcheri that loosely resemble the mesothelial cells surrounding the internal organs of the human body. [ 2 ] Additional studies indicate a resemblance to vertebrate podocytes , as vascular fluid within the ligamentum denticulatum may travel into the coelom through the narrow network of solenocyte gaps or foot processes. [ 5 ] With regard to function, flagella play a significant role in the excretory activity of solenocytes. These motile appendages extend from the solenocyte membrane and are supported by an axial filament (or axoneme ), a basal body, and numerous microtubules . [ 6 ] The stability of the flagellum is crucial to its motility. The basal body, composed of nine triplet microtubules, anchors the flagellum in place (acting as a modified centriole). Situated at the center of each flagellum is the highly conserved axoneme, which contains nine doublet microtubules encircling a pair of singlet microtubules (generating a 9+2 pattern). [ 7 ] [ 8 ] Thousands of dynein motors attached to the axoneme doublets hydrolyze adenosine triphosphate ( ATP ), which fuels flagellar motility. [ 8 ] More specifically, the dyneins anchor onto one doublet within the outer microtubule ring, and as they "walk" towards an adjacent doublet, the entire flagellar structure is able to bend and beat (see Figure 2 ).
[ 9 ] In sum, flagellar motility enables solenocytes to waft excretory materials and coelomic fluid down the intracellular tubule lumen. [ 5 ] In several lower invertebrates, solenocyte clusters project directly into coelomic canals, where they are submerged in coelomic fluid. [ 10 ] This fluid contains a variety of materials, including salts , proteins , and corpuscles (e.g., leucocytes , phagocytes , eleocytes, mucocytes, etc.). In that respect, solenocytes play a major role in osmoregulation, ion regulation, and homeostasis through the movement of coelomic fluid. [ 11 ] Branchiostoma nephridia also have tiny blood vessels, and the protonephridia function to absorb nitrogenous waste from coelomic fluid, as well as from the blood sinuses, via diffusion . [ 10 ] Beyond providing a greater understanding of excretory organs in other invertebrates, further research on solenocyte composition and function can advance current knowledge of renal function, human health, and even certain genetic diseases in vertebrates. The cephalochordate amphioxus (see Figure 3 ) can contribute to this research as a close relative of vertebrates. [ 5 ] [ 12 ] [ 13 ] In contrast to the paired series of protonephridia, Hatschek's nephridium is a large, unpaired excretory structure found within Branchiostoma virginiae . The nephridium, along with its collection tubule, is located to the left of the notochord and beside the left anterior aorta. [ 12 ] Hatschek's nephridium resembles a protonephridium with a single, bent branch consisting of numerous solenocytes. The anterior end of this structure sits directly in front of Hatschek's pit , while the posterior end (at the rear of the velum) opens into the endodermal pharynx. Flagellated filtration cells called cyrtopodocytes occupy the length of the collection tubule. These filtration cells closely resemble solenocytes in structure and function. [ 12 ] Research suggests that coelomic myoepithelial cells in amphioxus ( Branchiostoma ) may have significance in renal function. [ 5 ] Located along the coelom, myoepithelial cells have both thick (18–25 nm in diameter) and thin (5–7 nm in diameter) microfilaments. These microfilaments appear to be more abundant in myoepithelial cells that are in close proximity to the solenocytes attached to the coelomic surface of the ligamentum denticulatum. [ 5 ] Because the beating of solenocyte flagella propels coelomic fluid through the excretory tubules, myoepithelial cells near solenocyte clusters may influence renal function by regulating fluid motility within the coelomic cavity. [ 5 ] Within the vertebrate lineage, significant genome duplications took place after the divergence of Branchiostoma , making it a potentially valuable model for gaining insight into vertebrate biological mechanisms. [ 13 ] Branchiostoma is therefore useful for investigating human health and genetic disease . Along with signaling pathways , numerous homologs of vertebrate organs share cellular, developmental, and physiological parameters with their vertebrate equivalents. [ 13 ] On that premise, solenocyte function within Branchiostoma could provide insight into metabolic diseases, such as renal cell carcinoma (RCC). [ 14 ]
https://en.wikipedia.org/wiki/Solenocyte
The solenoid structure of chromatin is a model for the structure of the 30 nm fibre. It is a secondary chromatin structure which helps to package eukaryotic DNA into the nucleus . However, current research casts doubt on its presence in vivo , and tends to show that it is an observational artifact. [ 1 ] Chromatin was first discovered by Walther Flemming by using aniline dyes to stain it. In 1974, Roger Kornberg first proposed that chromatin was based on a repeating unit of a histone octamer and around 200 base pairs of DNA. [ 2 ] The solenoid model was first proposed by John Finch and Aaron Klug in 1976. They used electron microscopy images and X-ray diffraction patterns to determine their model of the structure. [ 3 ] This was the first model to be proposed for the structure of the 30 nm fibre. DNA in the nucleus is wrapped around nucleosomes , which are histone octamers formed of core histone proteins: two histone H2A - H2B dimers, two histone H3 proteins, and two histone H4 proteins. The primary chromatin structure, the least-packed form, is the 11 nm, or "beads on a string", form, where DNA is wrapped around nucleosomes at relatively regular intervals, as Roger Kornberg proposed. Histone H1 protein binds to the site where DNA enters and exits the nucleosome, wrapping 147 base pairs around the histone core and stabilising the nucleosome; [ 4 ] this structure is a chromatosome . [ 5 ] In the solenoid structure, the nucleosomes fold up and are stacked, forming a helix. They are connected by bent linker DNA which positions sequential nucleosomes adjacent to one another in the helix. The nucleosomes are positioned with the histone H1 proteins facing toward the centre, where they form a polymer . [ 4 ] Finch and Klug determined that the helical structure had only one start point because they mostly observed small pitch angles, with a pitch of about 11 nm, [ 3 ] which is about the same as the diameter of a nucleosome. There are approximately 6 nucleosomes in each turn of the helix. [ 3 ] Finch and Klug actually observed a wide range of nucleosomes per turn, but they put this down to flattening. [ 3 ] Finch and Klug's electron microscopy images lacked visible detail, so they were unable to determine helical parameters other than the pitch. [ 3 ] More recent electron microscopy images have defined the dimensions of solenoid structures and identified the solenoid as a left-handed helix. [ 6 ] The structure of solenoids is insensitive to changes in the length of the linker DNA. The solenoid structure's most obvious function is to help package the DNA so that it is small enough to fit into the nucleus. This is a substantial task, as the nucleus of a mammalian cell has a diameter of approximately 6 μm , whilst the DNA in one human cell would stretch to just over 2 metres long if it were unwound. [ 7 ] The "beads on a string" structure compacts DNA roughly 7-fold. [ 2 ] The solenoid structure increases this to roughly 40-fold. [ 3 ] DNA compacted into the solenoid structure can still be transcriptionally active in certain areas. [ 8 ] It is the secondary chromatin structure that is important for this transcriptional repression, as in vivo active genes are assembled into large tertiary chromatin structures . [ 8 ] There are many factors that affect whether the solenoid structure will form or not. Some factors alter the structure of the 30 nm fibre, and some prevent it from forming in that region altogether.
The concentration of ions , particularly divalent cations, affects the structure of the 30 nm fibre, [ 9 ] which is why Finch and Klug were not able to form solenoid structures in the presence of chelating agents . [ 3 ] There is an acidic patch on the surface of histone H2A and histone H2B proteins which interacts with the tails of histone H4 proteins in adjacent nucleosomes. [ 10 ] These interactions are important for solenoid formation. [ 10 ] Histone variants can affect solenoid formation; for example, H2A.Z is a histone variant of H2A with a more acidic patch than the one on H2A, so H2A.Z would interact more strongly with histone H4 tails and probably promote solenoid formation. [ 10 ] The histone H4 tail is essential for formation of 30 nm fibres. [ 10 ] However, acetylation of core histone tails affects the folding of chromatin by destabilising interactions between the DNA and the nucleosomes, making histone modulation a key factor in solenoid structure. [ 10 ] Acetylation of H4K16 (the lysine which is the 16th amino acid from the N-terminal of histone H4) inhibits 30 nm fibre formation. [ 11 ] To decompact the 30 nm fibre, for instance to transcriptionally activate it, both H4K16 acetylation and removal of the histone H1 proteins are required. [ 12 ] Chromatin can form a tertiary chromatin structure and be compacted even further than the solenoid structure by forming supercoils, which have a diameter of around 700 nm. [ 13 ] This supercoil is formed by regions of DNA called scaffold/matrix attachment regions (SMARs) attaching to a central scaffolding matrix in the nucleus, creating loops of solenoid chromatin between 4.5 and 112 kilobase pairs long. [ 13 ] The central scaffolding matrix itself forms a spiral shape for an additional layer of compaction. [ 13 ] Several other models have been proposed, and there is still considerable uncertainty about the structure of the 30 nm fibre. Even the more recent research produces conflicting information. Electron microscopy measurements of the 30 nm fibre dimensions impose physical constraints that can only be satisfied by a one-start helical structure such as the solenoid. [ 6 ] These measurements also show no linear relationship between the length of the linker DNA and the fibre dimensions (instead, there are two distinct classes). [ 6 ] Data from experiments which cross-linked nucleosomes, by contrast, indicate a two-start structure. [ 14 ] There is evidence that suggests both the solenoid and zig-zag (two-start) structures are present in 30 nm fibres. [ 15 ] It is possible that chromatin structure may not be as ordered as previously thought, [ 16 ] or that the 30 nm fibre may not even be present in situ . [ 17 ] The two-start twisted-ribbon model was proposed in 1981 by Worcel, Strogatz and Riley. [ 18 ] This structure involves alternating nucleosomes stacking to form two parallel helices, with the linker DNA zig-zagging up and down the helical axis. The two-start cross-linker model was proposed in 1986 by Williams et al . [ 19 ] This structure, like the two-start twisted-ribbon model, involves alternating nucleosomes stacking to form two parallel helices, but the nucleosomes are on opposite sides of the helices, with the linker DNA crossing the centre of the helical axis. The superbead model was proposed by Renz in 1977. [ 20 ] Unlike the other models, this structure is not helical; it instead consists of discrete globular structures along the chromatin which vary in size.
[ 21 ] The chromatin in mammalian sperm is the most condensed form of eukaryotic DNA; it is packaged by protamines rather than nucleosomes, [ 22 ] whilst prokaryotes package their DNA through supercoiling .
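As a rough back-of-the-envelope check on the packaging figures quoted above, the following short Python sketch uses only the approximate values given in the text (2 m of DNA, a 6 μm nucleus, roughly 7-fold and 40-fold compaction); it is illustrative arithmetic, not new data.

```python
# Back-of-the-envelope check of the chromatin compaction figures quoted above.
dna_length_m = 2.0            # approximate DNA length per human cell
nucleus_diameter_m = 6e-6     # approximate mammalian nucleus diameter

compaction = {
    "11 nm beads-on-a-string": 7,    # roughly 7-fold compaction
    "30 nm solenoid fibre": 40,      # roughly 40-fold compaction
}

for name, factor in compaction.items():
    effective_length = dna_length_m / factor
    ratio = effective_length / nucleus_diameter_m
    print(f"{name}: ~{effective_length:.3f} m of fibre, ~{ratio:,.0f}x the nuclear diameter")

# Even the 30 nm fibre is still several thousand times longer than the nucleus is wide,
# which is why the tertiary looped/supercoiled structures described above are also needed.
```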
https://en.wikipedia.org/wiki/Solenoid_(DNA)
In mathematics , a solenoid is a compact connected topological space (i.e. a continuum ) that may be obtained as the inverse limit of an inverse system of topological groups and continuous homomorphisms $S_1 \xleftarrow{f_1} S_2 \xleftarrow{f_2} S_3 \xleftarrow{f_3} \cdots$, where each $S_i$ is a circle and $f_i$ is the map that uniformly wraps the circle $S_{i+1}$ $n_{i+1}$ times ($n_{i+1} \geq 2$) around the circle $S_i$. [ 1 ] : Ch. 2 Def. (10.12) This construction can be carried out geometrically in the three-dimensional Euclidean space $\mathbb{R}^3$. A solenoid is a one-dimensional homogeneous indecomposable continuum that has the structure of an abelian compact topological group . Solenoids were first introduced by Vietoris for the $n_i = 2$ case, [ 2 ] and by van Dantzig for the $n_i = n$ case, where $n \geq 2$ is fixed. [ 3 ] Such a solenoid arises as a one-dimensional expanding attractor , or Smale–Williams attractor , and forms an important example in the theory of hyperbolic dynamical systems . Each solenoid may be constructed as the intersection of a nested system of embedded solid tori in $\mathbb{R}^3$. Fix a sequence of natural numbers $\{n_i\}$, $n_i \geq 2$. Let $T_0 = S^1 \times D$ be a solid torus . For each $i \geq 0$, choose a solid torus $T_{i+1}$ that is wrapped longitudinally $n_i$ times inside the solid torus $T_i$. Then their intersection is homeomorphic to the solenoid constructed as the inverse limit of the system of circles with the maps determined by the sequence $\{n_i\}$. Here is a variant of this construction isolated by Stephen Smale as an example of an expanding attractor in the theory of smooth dynamical systems. Denote the angular coordinate on the circle $S^1$ by $t$ (it is defined mod $2\pi$) and consider the complex coordinate $z$ on the two-dimensional unit disk $D$. Let $f$ be the map of the solid torus $T = S^1 \times D$ into itself given by the explicit formula $f(t, z) = \left(2t,\ \tfrac{1}{4}z + \tfrac{1}{2}e^{it}\right)$. This map is a smooth embedding of $T$ into itself that preserves the foliation by meridional disks (the constants 1/2 and 1/4 are somewhat arbitrary, but it is essential that $1/4 < 1/2$ and $1/4 + 1/2 < 1$). If $T$ is imagined as a rubber tube, the map $f$ stretches it in the longitudinal direction, contracts each meridional disk, and wraps the deformed tube twice inside $T$ with twisting, but without self-intersections. The hyperbolic set $\Lambda$ of the discrete dynamical system $(T, f)$ is the intersection of the sequence of nested solid tori described above, where $T_i$ is the image of $T$ under the $i$th iteration of the map $f$. This set is a one-dimensional (in the sense of topological dimension ) attractor , and the dynamics of $f$ on $\Lambda$ has a number of interesting properties. The general theory of solenoids and expanding attractors, not necessarily one-dimensional, was developed by R. F. Williams and involves a projective system of infinitely many copies of a compact branched manifold in place of the circle, together with an expanding self-immersion .
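To make the expanding-attractor construction concrete, here is a minimal Python sketch that iterates the solid-torus map given above, f(t, z) = (2t mod 2π, z/4 + e^{it}/2); the starting points are arbitrary, and the sketch is illustrative rather than part of the source text.

```python
# Iterating the Smale solid-torus map drives arbitrary points toward the solenoid attractor.
import cmath
import math
import random

def f(t, z):
    """One application of the expanding solid-torus map f(t, z) = (2t, z/4 + exp(i t)/2)."""
    return (2.0 * t) % (2.0 * math.pi), 0.25 * z + 0.5 * cmath.exp(1j * t)

random.seed(0)
points = [(random.uniform(0.0, 2.0 * math.pi),
           0.5 * complex(random.uniform(-1, 1), random.uniform(-1, 1)))
          for _ in range(5)]

# The meridional contraction factor is 1/4, so after n iterations every point lies
# within roughly (1/4)**n of the attractor Lambda.
for _ in range(12):
    points = [f(t, z) for t, z in points]

for t, z in points:
    print(f"t = {t:.4f}, z = {z.real:+.4f} {z.imag:+.4f}i, |z| = {abs(z):.4f}")
```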
In toroidal coordinates with radius $R$, the solenoid can be parametrized by $t \in \mathbb{R}$ as $\zeta = 2\pi t$, $r e^{i\theta} = \sum_{k=1}^{\infty} r_k e^{2\pi i \omega_k t}$, where $\omega_k = \frac{1}{n_1 \cdots n_k}$ and $r_k = R\,\delta_1 \cdots \delta_k$. Here the $\delta_k$ are adjustable shape parameters, subject to the constraint $0 < \delta_k < 1 - \frac{1}{1 + \sin(\pi/n_k)}$. In particular, $\delta_k = \frac{1}{2 n_k}$ works. Let $S \subset \mathbb{R}^3$ be the solenoid constructed this way; then the topology of the solenoid is just the subspace topology induced by the Euclidean topology on $\mathbb{R}^3$. Since the parametrization is bijective, we can pull back the topology on $S$ to $\mathbb{R}$, which makes $\mathbb{R}$ itself the solenoid. This allows us to construct the inverse limit maps explicitly: $g_k \colon \mathbb{R} \to S_k$, $g_k(t) = (r, \theta, \zeta)$ in toroidal coordinates, where $\zeta = 2\pi t$ and $r e^{i\theta} = \sum_{j=1}^{k} r_j e^{2\pi i \omega_j t}$. Viewed as a set, the solenoid is just a Cantor-continuum of circles, wired together in a particular way. This suggests the construction by symbolic dynamics , where we start with a circle as a "racetrack" and append an "odometer" to keep track of which circle we are on. Define the solenoid as $S = S^1 \times \prod_{k=1}^{\infty} \mathbb{Z}_{n_k}$. Next, define addition on the odometer, $\mathbb{Z} \times \prod_{k=1}^{\infty} \mathbb{Z}_{n_k} \to \prod_{k=1}^{\infty} \mathbb{Z}_{n_k}$, in the same way as for $p$-adic numbers . Then define addition on the solenoid, $+ \colon \mathbb{R} \times S \to S$, by $r + (\theta, n) = ((r + \theta \bmod 1), \lfloor r + \theta \rfloor + n)$. The topology on the solenoid is generated by the basis consisting of the subsets $S' \times Z'_{(m_1, \ldots, m_k)}$, where $S'$ is any open interval in $S^1$ and $Z'_{(m_1, \ldots, m_k)}$ is the set of all elements of $\prod_{k=1}^{\infty} \mathbb{Z}_{n_k}$ starting with the initial segment $(m_1, \ldots, m_k)$. Solenoids are compact metrizable spaces that are connected , but not locally connected or path connected . This is reflected in their pathological behavior with respect to various homology theories , in contrast with the standard properties of homology for simplicial complexes . In Čech homology , one can construct a non-exact long homology sequence using a solenoid. In Steenrod -style homology theories, [ 4 ] the 0th homology group of a solenoid may have a fairly complicated structure, even though a solenoid is a connected space.
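To make the truncated parametrization above concrete, here is a minimal Python sketch that evaluates the first few terms of the series and converts the result from toroidal to Cartesian coordinates; the wrapping numbers n_k, shape parameters δ_k and major radius R are illustrative choices, not values from the text.

```python
# K-term approximation of the solenoid parametrization in toroidal coordinates.
import cmath
import math

def solenoid_point(t, R=3.0, n=(2, 2, 2, 2), deltas=None):
    """Return (x, y, z) of the truncated solenoid parametrization at parameter t."""
    if deltas is None:
        # delta_k = 1/(2 n_k) satisfies the constraint quoted above
        deltas = [1.0 / (2 * nk) for nk in n]
    zeta = 2.0 * math.pi * t
    w = 0j
    omega, radius = 1.0, R
    for nk, dk in zip(n, deltas):
        omega /= nk                      # omega_k = 1/(n_1 ... n_k)
        radius *= dk                     # r_k = R * delta_1 ... delta_k
        w += radius * cmath.exp(2j * math.pi * omega * t)
    r, theta = abs(w), cmath.phase(w)
    # toroidal (r, theta, zeta) -> Cartesian (x, y, z)
    x = (R + r * math.cos(theta)) * math.cos(zeta)
    y = (R + r * math.cos(theta)) * math.sin(zeta)
    z = r * math.sin(theta)
    return x, y, z

# With four wrapping stages of 2, the truncated curve closes after t has advanced by 16.
points = [solenoid_point(i / 100.0) for i in range(16 * 100)]
print(points[0], points[-1])
```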
https://en.wikipedia.org/wiki/Solenoid_(mathematics)
In vector calculus a solenoidal vector field (also known as an incompressible vector field , a divergence-free vector field , or a transverse vector field ) is a vector field $\mathbf{v}$ with divergence zero at all points in the field: $\nabla \cdot \mathbf{v} = 0$. A common way of expressing this property is to say that the field has no sources or sinks . [ note 1 ] The divergence theorem gives an equivalent integral definition of a solenoidal field; namely, that for any closed surface the net total flux through the surface must be zero: $\oint_S \mathbf{v} \cdot d\mathbf{S} = 0$, where $d\mathbf{S}$ is the outward normal to each surface element. The fundamental theorem of vector calculus states that any vector field can be expressed as the sum of an irrotational and a solenoidal field. The condition of zero divergence is satisfied whenever a vector field $\mathbf{v}$ has only a vector potential component, because the definition of the vector potential $\mathbf{A}$ as $\mathbf{v} = \nabla \times \mathbf{A}$ automatically results in the identity (as can be shown, for example, using Cartesian coordinates) $\nabla \cdot \mathbf{v} = \nabla \cdot (\nabla \times \mathbf{A}) = 0$. The converse also holds: for any solenoidal $\mathbf{v}$ there exists a vector potential $\mathbf{A}$ such that $\mathbf{v} = \nabla \times \mathbf{A}$. (Strictly speaking, this holds subject to certain technical conditions on $\mathbf{v}$; see Helmholtz decomposition .) Solenoidal has its origin in the Greek word for solenoid, which is σωληνοειδές (sōlēnoeidēs), meaning pipe-shaped, from σωλην (sōlēn), or pipe.
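The identity ∇ · (∇ × A) = 0 quoted above is easy to check symbolically; the following minimal sketch uses SymPy's vector module, and the particular potential A is an arbitrary illustrative choice rather than anything from the text.

```python
# Symbolic check that the curl of any (smooth) vector potential is divergence-free.
from sympy import sin, cos, exp
from sympy.vector import CoordSys3D, curl, divergence

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

# An arbitrary smooth vector potential A
A = (x**2 * y) * N.i + sin(y * z) * N.j + exp(x) * cos(z) * N.k

v = curl(A)                       # v = curl A, a solenoidal field
print(divergence(v).simplify())   # prints 0, i.e. div(curl A) = 0
```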
https://en.wikipedia.org/wiki/Solenoidal_vector_field
Solenopsin is a lipophilic alkaloid with the molecular formula C 17 H 35 N found in the venom of fire ants ( Solenopsis ). It is considered the primary toxin in the venom [ 2 ] and may be the component responsible for the cardiorespiratory failure in people who experience excessive fire ant stings. [ 3 ] Structurally, solenopsins consist of a piperidine ring with a methyl group substitution at position 2 and a long hydrophobic chain at position 6. They are typically oily at room temperature, water-insoluble, and present an absorbance peak at 232 nanometers. [ 4 ] Fire ant venom contains other chemically related piperidines, which make purification of solenopsin from ants difficult. [ 5 ] [ 6 ] Therefore, solenopsin and related compounds have been the target of organic synthesis, from which pure compounds can be produced for individual study. Since solenopsin was first synthesized in 1993, [ 7 ] several groups have designed novel and creative methods of synthesizing enantiopure solenopsin and other alkaloidal components of ant venom . The total synthesis of solenopsin has been described by several methods. [ 8 ] [ failed verification ] A proposed method of synthesis [ 9 ] ( Figure 1 ) starts with alkylation of 4-chloropyridine with a Grignard reagent derived from 1-bromoundecane, followed by reaction with phenyl chloroformate to form 4-chloro-1-(phenoxycarbonyl)-2- n -undecyl-1,2-dihydropyridine. The phenylcarbamate is converted to the BOC protecting group , and the pyridine is then methylated at the 6 position. The pyridine ring is then reduced to a tetrahydropyridine via catalytic hydrogenation with Pd/C and further reduced with sodium cyanoborohydride to a piperidine ring. The BOC group is finally removed to yield solenopsin. A number of analogs have been synthesized using modifications of this procedure. A shorter synthesis starting from commercially available lutidine has more recently been proposed. [ 10 ] Solenopsins are described as toxic to both vertebrates and invertebrates. For example, the compound known as isosolenopsin A has been demonstrated to have strong insecticidal effects, [ 11 ] which may play a central role in the biology of fire ants . In addition to its toxicity, solenopsin has a number of other biological activities. It inhibits angiogenesis in vitro via the phosphoinositide 3-kinase (PI3K) signaling pathway, [ 9 ] inhibits neuronal nitric oxide synthase (nNOS) in a manner that appears to be non-competitive with L -arginine , [ 12 ] and inhibits quorum-sensing signaling in some bacteria. [ 13 ] The biological activities of solenopsins have led researchers to propose a number of biotechnological and biomedical applications for these compounds. For instance, the antibacterial activity and interference with quorum-sensing signalling mentioned above apparently provide solenopsins with considerable anti-biofilm activity, which suggests the potential of analogs as new disinfectants and surface-conditioning agents. [ 14 ] Solenopsins have also been demonstrated to inhibit cell division and viability of Trypanosoma cruzi , the cause of Chagas disease , which suggests that these alkaloids could serve as chemotherapeutic drugs. [ 15 ] Solenopsin and its analogs share structural and biological properties with the sphingolipid ceramide , a major endogenous regulator of cell signaling , inducing mitophagy and anti-proliferative effects in different tumor cell lines. [ 16 ] Synthetic analogs of solenopsin are being studied for the potential treatment of psoriasis . [ 17 ]
https://en.wikipedia.org/wiki/Solenopsin
Solid is one of the four fundamental states of matter (along with liquid , gas , and plasma ), [ 1 ] and is a way in which all matter can be arranged on a microscopic scale under certain conditions. [ 2 ] Molecules in a solid are closely packed and do not slide past each other as is the case for fluids . Solids resist compression, expansion, or external forces that would alter their shape, with the degree of resistance dependent upon the specific material under consideration. [ 3 ] Solids also always possess the least amount of kinetic energy per atom/molecule relative to other phases [ 4 ] [ 5 ] or, equivalently stated, solids are formed when matter in the liquid / gas phase is cooled below a certain temperature. [ 6 ] This temperature is called the melting point [ 7 ] of that substance and is an intrinsic [ 8 ] property, i.e. independent of how much of the matter there is. Solids are characterized by structural rigidity and resistance to applied external forces and pressure. [ 5 ] Unlike liquids, solids do not flow to take on the shape of their container, nor do they expand to fill the entire available volume like a gas. [ 9 ] Much like the other three fundamental phases, solids expand when heated , [ 10 ] with the added thermal energy increasing the distance between atoms and reducing their potential energy. However, solids do this to a much lesser extent. [ 11 ] [ 12 ] When heated to its melting point or sublimation point , a solid melts into a liquid or sublimates directly into a gas, respectively. For solids that sublimate directly into a gas, the melting point is replaced by the sublimation point. [ 13 ] As a rule of thumb, melting will occur if the applied pressure is higher than the pressure of the substance's triple point , [ 14 ] and sublimation will occur otherwise. [ 15 ] Melting and melting points refer exclusively to transitions between solids and liquids. [ 16 ] Melting occurs over a wide range of temperatures, from 0.10 K for helium-3 under 30 bars (3 MPa) of pressure, [ 17 ] to around 4,200 K at 1 atm for the composite refractory material hafnium carbonitride . [ 18 ] The atoms in a solid are tightly bound to each other in one of two ways: regular geometric lattices called crystalline solids (e.g. metals, water ice ), or irregular arrangements called amorphous solids (e.g. glass, plastic). [ 19 ] Molecules and atoms forming crystalline lattices usually organize themselves in a few well-characterized packing structures, [ 19 ] such as body-centered cubic. The adopted structure can and will vary with pressure and temperature, as can be seen in phase diagrams of the material (e.g. that of water ). When the material is composed of a single species of atom/molecule, the phases are designated as allotropes for atoms (e.g. diamond / graphite for carbon ), and as polymorphs (e.g. calcite / aragonite for calcium carbonate ) [ 20 ] for molecules. Non-porous solids invariably strongly resist any amount of compression that would otherwise result in a decrease of total volume regardless of temperature, [ 21 ] owing to the mutual repulsion of neighboring electron clouds among their constituent atoms. [ 21 ] [ 22 ] In contrast to solids, gases are very easily compressed, as the molecules in a gas are far apart with few intermolecular interactions. [ 23 ] Some solids, especially metallic alloys, can be deformed or pulled apart with enough force.
The degree to which a solid resists deformation in differing directions and axes is quantified by the elastic modulus , tensile strength , specific strength , as well as other measurable quantities. [ 24 ] For the vast majority of substances, the solid phases have the highest density , [ 14 ] moderately higher than that of the liquid phase (if one exists), and solid blocks of these materials will sink below their liquids. [ 25 ] Exceptions include water ( icebergs ), gallium , and plutonium . [ 26 ] [ 27 ] All naturally occurring elements on the periodic table have a melting point at standard atmospheric pressure, with three exceptions: the noble gas helium , which remains a liquid even at absolute zero owing to zero-point energy ; [ 28 ] the metalloid arsenic , sublimating around 900 K; [ 29 ] and the life-forming element carbon, which sublimates around 3,950 K. [ 30 ] When applied pressure is released, solids will (very) rapidly re-expand and release the stored energy in the process [ 22 ] in a manner somewhat similar to that of gases. An example of this is the (oft-attempted) confinement of freezing water in an inflexible container (of steel, for example). [ 31 ] The gradual freezing results in an increase in volume, [ 32 ] as ice is less dense than water. [ 33 ] With no additional volume to expand into, water ice subjects the interior to intense pressures, causing the container to explode with great force. [ 31 ] [ 34 ] The macroscopic properties of a solid can also depend on whether or not it is contiguous. Contiguous (non-aggregate) solids are characterized by structural rigidity (as in rigid bodies ) and strong resistance to applied forces. [ 5 ] For solid aggregates (e.g. gravel, sand, dust on the lunar surface [ 35 ] ), the solid particles can easily slip past one another, [ 36 ] though changes to the individual particles ( quartz particles in the case of sand) will still be greatly hindered. [ 37 ] This gives aggregates a perceived softness and makes them easy to compress. [ 38 ] An illustrative example is the non-firmness of coastal sand [ 36 ] and of the lunar regolith. [ 35 ] The branch of physics that deals with solids is called solid-state physics , [ 39 ] and is a major branch of condensed matter physics (which includes liquids). [ 40 ] Materials science , also one of its numerous branches, is primarily concerned with the way in which a solid's composition and its properties are intertwined. [ 41 ]
Solids that are formed by slow cooling will tend to be crystalline, while solids that are frozen rapidly are more likely to be amorphous. Likewise, the specific crystal structure adopted by a crystalline solid depends on the material involved and on how it was formed. While many common objects, such as an ice cube or a coin, are chemically identical throughout, many other common materials comprise a number of different substances packed together. For example, a typical rock is an aggregate of several different minerals and mineraloids , with no specific chemical composition. Wood is a natural organic material consisting primarily of cellulose fibers embedded in a matrix of organic lignin . In materials science, composites of more than one constituent material can be designed to have desired properties. The forces between the atoms in a solid can take a variety of forms. For example, a crystal of sodium chloride (common salt) is made up of ionic sodium and chlorine , which are held together by ionic bonds . [ 42 ] In diamond [ 43 ] or silicon, the atoms share electrons and form covalent bonds . [ 44 ] In metals, electrons are shared in metallic bonding . [ 45 ] Some solids, particularly most organic compounds, are held together with van der Waals forces resulting from the polarization of the electronic charge cloud on each molecule. The dissimilarities between the types of solid result from the differences between their bonding. Metals typically are strong, dense, and good conductors of both electricity and heat . [ 46 ] [ 47 ] The bulk of the elements in the periodic table , those to the left of a diagonal line drawn from boron to polonium , are metals. Mixtures of two or more elements in which the major component is a metal are known as alloys . People have been using metals for a variety of purposes since prehistoric times. The strength and reliability of metals have led to their widespread use in construction of buildings and other structures, as well as in most vehicles, many appliances and tools, pipes, road signs and railroad tracks. Iron and aluminium are the two most commonly used structural metals. They are also the most abundant metals in the Earth's crust . Iron is most commonly used in the form of an alloy, steel, which contains up to 2.1% carbon , making it much harder than pure iron. Because metals are good conductors of electricity, they are valuable in electrical appliances and for carrying an electric current over long distances with little energy loss or dissipation. Thus, electrical power grids rely on metal cables to distribute electricity. Home electrical systems, for example, are wired with copper for its good conducting properties and easy machinability. The high thermal conductivity of most metals also makes them useful for stovetop cooking utensils. The study of metallic elements and their alloys makes up a significant portion of the fields of solid-state chemistry, physics, materials science and engineering. Metallic solids are held together by a high density of shared, delocalized electrons, known as " metallic bonding ". In a metal, atoms readily lose their outermost ("valence") electrons , forming positive ions . The free electrons are spread over the entire solid, which is held together firmly by electrostatic interactions between the ions and the electron cloud. [ 48 ] The large number of free electrons gives metals their high values of electrical and thermal conductivity.
The free electrons also prevent transmission of visible light, making metals opaque, shiny and lustrous . More advanced models of metal properties consider the effect of the positive ion cores on the delocalised electrons. As most metals have a crystalline structure, those ions are usually arranged into a periodic lattice. Mathematically, the potential of the ion cores can be treated by various models, the simplest being the nearly free electron model . Minerals are naturally occurring solids formed through various geological processes [ 49 ] under high pressures. To be classified as a true mineral, a substance must have a crystal structure with uniform physical properties throughout. Minerals range in composition from pure elements and simple salts to very complex silicates with thousands of known forms. In contrast, a rock sample is a random aggregate of minerals and/or mineraloids , and has no specific chemical composition. The vast majority of the rocks of the Earth's crust consist of quartz (crystalline SiO 2 ), feldspar, mica, chlorite , kaolin , calcite, epidote , olivine , augite , hornblende , magnetite , hematite , limonite and a few other minerals. Some minerals, like quartz , mica or feldspar , are common, while others have been found in only a few locations worldwide. The largest group of minerals by far is the silicates (most rocks are ≥95% silicates), which are composed largely of silicon and oxygen , with the addition of ions of aluminium, magnesium , iron, calcium and other metals. Ceramic solids are composed of inorganic compounds, usually oxides of chemical elements. [ 50 ] They are chemically inert, and often are capable of withstanding chemical erosion that occurs in an acidic or caustic environment. Ceramics generally can withstand high temperatures ranging from 1,000 to 1,600 °C (1,830 to 2,910 °F). Exceptions include non-oxide inorganic materials, such as nitrides , borides and carbides . Traditional ceramic raw materials include clay minerals such as kaolinite ; more recent materials include aluminium oxide ( alumina ). Modern ceramic materials, which are classified as advanced ceramics, include silicon carbide and tungsten carbide . Both are valued for their abrasion resistance, and hence find use in such applications as the wear plates of crushing equipment in mining operations. Most ceramic materials, such as alumina and its compounds, are formed from fine powders, yielding a fine-grained polycrystalline microstructure that is filled with light-scattering centers comparable to the wavelength of visible light . Thus, they are generally opaque materials, as opposed to transparent materials . Recent nanoscale (e.g. sol-gel ) technology has, however, made possible the production of polycrystalline transparent ceramics such as transparent alumina and alumina compounds for such applications as high-power lasers. Advanced ceramics are also used in the medicine, electrical and electronics industries. Ceramic engineering is the science and technology of creating solid-state ceramic materials, parts and devices. This is done either by the action of heat, or, at lower temperatures, using precipitation reactions from chemical solutions. The term includes the purification of raw materials, the study and production of the chemical compounds concerned, their formation into components, and the study of their structure, composition and properties. Mechanically speaking, ceramic materials are brittle, hard, strong in compression and weak in shearing and tension.
Brittle materials may exhibit significant tensile strength by supporting a static load. Toughness indicates how much energy a material can absorb before mechanical failure, while fracture toughness (denoted K Ic ) describes the ability of a material with inherent microstructural flaws to resist fracture via crack growth and propagation. If a material has a large value of fracture toughness , the basic principles of fracture mechanics suggest that it will most likely undergo ductile fracture. Brittle fracture is very characteristic of most ceramic and glass-ceramic materials, which typically exhibit low (and inconsistent) values of K Ic . As an example of ceramic applications, the extreme hardness of zirconia is utilized in the manufacture of knife blades, as well as other industrial cutting tools. Ceramics such as alumina , boron carbide and silicon carbide have been used in bulletproof vests to repel large-caliber rifle fire. Silicon nitride parts are used in ceramic ball bearings, where their high hardness makes them wear resistant. In general, ceramics are also chemically resistant and can be used in wet environments where steel bearings would be susceptible to oxidation (or rust). As another example of ceramic applications, in the early 1980s, Toyota researched production of an adiabatic ceramic engine with an operating temperature of over 6,000 °F (3,320 °C). Ceramic engines do not require a cooling system and hence allow a major weight reduction and therefore greater fuel efficiency. In a conventional metallic engine, much of the energy released from the fuel must be dissipated as waste heat in order to prevent a meltdown of the metallic parts. Work is also being done in developing ceramic parts for gas turbine engines . Turbine engines made with ceramics could operate more efficiently, giving aircraft greater range and payload for a set amount of fuel. Such engines are not in production, however, because manufacturing ceramic parts with sufficient precision and durability is difficult and costly. Processing methods often result in a wide distribution of microscopic flaws that frequently play a detrimental role in the sintering process, resulting in the proliferation of cracks and ultimately mechanical failure. Glass-ceramic materials share many properties with both non-crystalline glasses and crystalline ceramics . They are formed as a glass, and then partially crystallized by heat treatment, producing both amorphous and crystalline phases so that crystalline grains are embedded within a non-crystalline intergranular phase. Glass-ceramics are used to make cookware (originally known by the brand name CorningWare ) and stovetops that have high resistance to thermal shock and extremely low permeability to liquids. The negative coefficient of thermal expansion of the crystalline ceramic phase can be balanced with the positive coefficient of the glassy phase. At a certain point (~70% crystalline) the glass-ceramic has a net coefficient of thermal expansion close to zero. This type of glass-ceramic exhibits excellent mechanical properties and can sustain repeated and quick temperature changes up to 1000 °C. Glass ceramics may also occur naturally when lightning strikes the crystalline (e.g. quartz) grains found in most beach sand . In this case, the extreme and immediate heat of the lightning (~2500 °C) creates hollow, branching rootlike structures called fulgurite via fusion .
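To give a sense of how fracture toughness and processing flaws interact, the short sketch below applies the standard linear-elastic fracture mechanics relation K_Ic = Y σ √(πa) to estimate the stress a flawed brittle part can carry; the toughness value, geometry factor and flaw sizes are illustrative assumptions, not figures from the text.

```python
# Critical stress of a brittle part as a function of flaw size, from K_Ic = Y * sigma * sqrt(pi * a).
import math

def critical_stress_mpa(k_ic_mpa_sqrt_m, crack_length_m, geometry_factor=1.0):
    """Largest tensile stress (MPa) a crack of the given length can sustain before growing."""
    return k_ic_mpa_sqrt_m / (geometry_factor * math.sqrt(math.pi * crack_length_m))

K_IC = 3.5  # assumed toughness in MPa*sqrt(m), a typical order of magnitude for an alumina-like ceramic

for flaw_um in (10, 50, 200):
    sigma = critical_stress_mpa(K_IC, flaw_um * 1e-6)
    print(f"flaw {flaw_um:>4} um -> critical stress ~{sigma:.0f} MPa")

# Larger processing flaws sharply reduce the load a ceramic part can carry, which is why
# the flaw distributions produced during sintering matter so much for reliability.
```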
Organic chemistry studies the structure, properties, composition, reactions, and preparation by synthesis (or other means) of chemical compounds of carbon and hydrogen , which may contain any number of other elements such as nitrogen , oxygen and the halogens: fluorine , chlorine , bromine and iodine . Some organic compounds may also contain the elements phosphorus or sulfur . Examples of organic solids include wood, paraffin wax , naphthalene and a wide variety of polymers and plastics . Wood is a natural organic material consisting primarily of cellulose fibers embedded in a matrix of lignin . Regarding mechanical properties, the fibers are strong in tension, and the lignin matrix resists compression. Thus wood has been an important construction material since humans began building shelters and using boats. Wood to be used for construction work is commonly known as lumber or timber . In construction, wood is not only a structural material, but is also used to form the mould for concrete. Wood-based materials are also extensively used for packaging (e.g. cardboard) and paper, which are both created from the refined pulp. The chemical pulping processes use a combination of high temperature and alkaline (kraft) or acidic (sulfite) chemicals to break the chemical bonds of the lignin before burning it out. One important property of carbon in organic chemistry is that it can form certain compounds, the individual molecules of which are capable of attaching themselves to one another, thereby forming a chain or a network. The process is called polymerization and the chains or networks polymers, while the source compound is a monomer. Two main groups of polymers exist: artificially manufactured polymers, referred to as industrial polymers or synthetic polymers (plastics), and naturally occurring polymers, known as biopolymers. Monomers can have various chemical substituents, or functional groups, which can affect the chemical properties of organic compounds, such as solubility and chemical reactivity, as well as the physical properties, such as hardness, density, mechanical or tensile strength, abrasion resistance, heat resistance, transparency, color, etc. In proteins, these differences give the polymer the ability to adopt a biologically active conformation in preference to others (see self-assembly ). People have been using natural organic polymers for centuries in the form of waxes and shellac , which is classified as a thermoplastic polymer. A plant polymer named cellulose provided the tensile strength for natural fibers and ropes, and by the early 19th century natural rubber was in widespread use. Polymers are the raw materials (the resins) used to make what are commonly called plastics. Plastics are the final product, created after one or more polymers or additives have been added to a resin during processing, which is then shaped into a final form. Polymers that have long been, and remain, in widespread use include carbon-based polyethylene , polypropylene , polyvinyl chloride , polystyrene , nylons, polyesters , acrylics , polyurethane , and polycarbonates , and silicon-based silicones . Plastics are generally classified as "commodity", "specialty" and "engineering" plastics. Composite materials contain two or more macroscopic phases, one of which is often ceramic. For example, a continuous matrix, and a dispersed phase of ceramic particles or fibers.
Applications of composite materials range from structural elements such as steel-reinforced concrete, to the thermally insulative tiles that play a key and integral role in NASA's Space Shuttle thermal protection system , which is used to protect the surface of the shuttle from the heat of re-entry into the Earth's atmosphere. One example is Reinforced Carbon-Carbon (RCC), the light gray material that withstands reentry temperatures up to 1,510 °C (2,750 °F) and protects the nose cap and leading edges of Space Shuttle's wings. RCC is a laminated composite material made from graphite rayon cloth and impregnated with a phenolic resin . After curing at high temperature in an autoclave, the laminate is pyrolized to convert the resin to carbon, impregnated with furfural alcohol in a vacuum chamber, and cured/pyrolized to convert the furfural alcohol to carbon. In order to provide oxidation resistance for reuse capability, the outer layers of the RCC are converted to silicon carbide. Domestic examples of composites can be seen in the "plastic" casings of television sets, cell-phones and so on. These plastic casings are usually a composite made up of a thermoplastic matrix such as acrylonitrile butadiene styrene (ABS) in which calcium carbonate chalk, talc , glass fibers or carbon fibers have been added for strength, bulk, or electro-static dispersion. These additions may be referred to as reinforcing fibers, or dispersants, depending on their purpose. Thus, the matrix material surrounds and supports the reinforcement materials by maintaining their relative positions. The reinforcements impart their special mechanical and physical properties to enhance the matrix properties. A synergism produces material properties unavailable from the individual constituent materials, while the wide variety of matrix and strengthening materials provides the designer with the choice of an optimum combination. Semiconductors are materials that have an electrical resistivity (and conductivity) between that of metallic conductors and non-metallic insulators. They can be found in the periodic table moving diagonally downward right from boron . They separate the electrical conductors (or metals, to the left) from the insulators (to the right). Devices made from semiconductor materials are the foundation of modern electronics, including radio, computers, telephones, etc. Semiconductor devices include the transistor , solar cells , diodes and integrated circuits . Solar photovoltaic panels are large semiconductor devices that directly convert light into electrical energy. In a metallic conductor, current is carried by the flow of electrons, but in semiconductors, current can be carried either by electrons or by the positively charged " holes " in the electronic band structure of the material. Common semiconductor materials include silicon, germanium and gallium arsenide . Many traditional solids exhibit different properties when they shrink to nanometer sizes. For example, nanoparticles of usually yellow gold and gray silicon are red in color; gold nanoparticles melt at much lower temperatures (~300 °C for 2.5 nm size) than the gold slabs (1064 °C); [ 51 ] and metallic nanowires are much stronger than the corresponding bulk metals. [ 52 ] [ 53 ] The high surface area of nanoparticles makes them extremely attractive for certain applications in the field of energy. For example, platinum metals may provide improvements as automotive fuel catalysts , as well as proton exchange membrane (PEM) fuel cells. 
Also, ceramic oxides (or cermets) of lanthanum , cerium , manganese and nickel are now being developed as solid oxide fuel cells (SOFC). Lithium, lithium-titanate and tantalum nanoparticles are being applied in lithium-ion batteries. Silicon nanoparticles have been shown to dramatically expand the storage capacity of lithium-ion batteries during the expansion/contraction cycle. Silicon nanowires cycle without significant degradation and present the potential for use in batteries with greatly expanded storage times. Silicon nanoparticles are also being used in new forms of solar energy cells. Thin film deposition of silicon quantum dots on the polycrystalline silicon substrate of a photovoltaic (solar) cell increases voltage output as much as 60% by fluorescing the incoming light prior to capture. Here again, the surface area of the nanoparticles (and thin films) plays a critical role in maximizing the amount of absorbed radiation. Many natural (or biological) materials are complex composites with remarkable mechanical properties. These complex structures, which have arisen over hundreds of millions of years of evolution, are inspiring materials scientists in the design of novel materials. Their defining characteristics include structural hierarchy, multifunctionality and self-healing capability. Self-organization is also a fundamental feature of many biological materials and the manner by which the structures are assembled from the molecular level up. Thus, self-assembly is emerging as a new strategy in the chemical synthesis of high performance biomaterials. Physical properties of elements and compounds that provide conclusive evidence of chemical composition include odor, color, volume, density (mass per unit volume), melting point, boiling point, heat capacity, physical form and shape at room temperature (solid, liquid or gas; cubic, trigonal crystals, etc.), hardness, porosity, index of refraction and many others. This section discusses some physical properties of materials in the solid state. The mechanical properties of materials describe characteristics such as their strength and resistance to deformation. For example, steel beams are used in construction because of their high strength, meaning that they neither break nor bend significantly under the applied load. Mechanical properties include elasticity , plasticity , tensile strength , compressive strength , shear strength , fracture toughness , ductility (low in brittle materials) and indentation hardness . Solid mechanics is the study of the behavior of solid matter under external actions such as external forces and temperature changes. A solid does not exhibit macroscopic flow, as fluids do. Any degree of departure from its original shape is called deformation . The proportion of deformation to original size is called strain. If the applied stress is sufficiently low, almost all solid materials behave in such a way that the strain is directly proportional to the stress ( Hooke's law ); a worked numerical example is sketched below. The coefficient of proportionality is called the modulus of elasticity or Young's modulus . This region of deformation is known as the linearly elastic region. Three models can describe how a solid responds to an applied stress: elasticity, viscoelasticity, and plasticity. Many materials become weaker at high temperatures. Materials that retain their strength at high temperatures, called refractory materials , are useful for many purposes.
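Here is the minimal numerical sketch of the Hooke's law (linearly elastic) regime referred to above; the member cross-section, load and Young's modulus are illustrative assumptions, not values from the text.

```python
# Elastic strain of a loaded member from Hooke's law: strain = stress / E.
def elastic_strain(force_n, area_m2, youngs_modulus_pa):
    stress_pa = force_n / area_m2          # applied stress
    return stress_pa / youngs_modulus_pa   # Hooke's law in the linearly elastic region

E_STEEL_PA = 200e9          # Young's modulus of structural steel, roughly 200 GPa
FORCE_N = 50_000.0          # assumed axial load
AREA_M2 = 0.002             # assumed cross-sectional area (20 cm^2)

strain = elastic_strain(FORCE_N, AREA_M2, E_STEEL_PA)
stress_mpa = FORCE_N / AREA_M2 / 1e6
print(f"stress = {stress_mpa:.0f} MPa, strain = {strain:.2e}")
# For a 1 m long member this strain corresponds to an elongation of only ~0.1 mm,
# illustrating why steel beams neither break nor bend significantly under such loads.
```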
For example, glass-ceramics have become extremely useful for countertop cooking, as they exhibit excellent mechanical properties and can sustain repeated and quick temperature changes up to 1000 °C. In the aerospace industry, high performance materials used in the design of aircraft and/or spacecraft exteriors must have a high resistance to thermal shock. Thus, synthetic fibers spun out of organic polymers and polymer/ceramic/metal composite materials and fiber-reinforced polymers are now being designed with this purpose in mind. Because solids have thermal energy , their atoms vibrate about fixed mean positions within the ordered (or disordered) lattice. The spectrum of lattice vibrations in a crystalline or glassy network provides the foundation for the kinetic theory of solids . This motion occurs at the atomic level, and thus cannot be observed or detected without highly specialized equipment, such as that used in spectroscopy . Thermal properties of solids include thermal conductivity , which is the property of a material that indicates its ability to conduct heat . Solids also have a specific heat capacity , which is the capacity of a material to store energy in the form of heat (or thermal lattice vibrations). Electrical properties include both electrical resistivity and conductivity , dielectric strength , electromagnetic permeability , and permittivity . Electrical conductors such as metals and alloys are contrasted with electrical insulators such as glasses and ceramics. Semiconductors behave somewhere in between. Whereas conductivity in metals is caused by electrons, both electrons and holes contribute to current in semiconductors. Alternatively, ions support electric current in ionic conductors . Many materials also exhibit superconductivity at low temperatures; they include metallic elements such as tin and aluminium, various metallic alloys, some heavily doped semiconductors, and certain ceramics. The electrical resistivity of most electrical (metallic) conductors generally decreases gradually as the temperature is lowered, but remains finite. In a superconductor, however, the resistance drops abruptly to zero when the material is cooled below its critical temperature. An electric current flowing in a loop of superconducting wire can persist indefinitely with no power source. A dielectric , or electrical insulator, is a substance that is highly resistant to the flow of electric current. A dielectric, such as plastic, tends to concentrate an applied electric field within itself, which property is used in capacitors. A capacitor is an electrical device that can store energy in the electric field between a pair of closely spaced conductors (called 'plates'). When voltage is applied to the capacitor, electric charges of equal magnitude, but opposite polarity, build up on each plate. Capacitors are used in electrical circuits as energy-storage devices, as well as in electronic filters to differentiate between high-frequency and low-frequency signals. Piezoelectricity is the ability of crystals to generate a voltage in response to an applied mechanical stress. The piezoelectric effect is reversible in that piezoelectric crystals, when subjected to an externally applied voltage, can change shape by a small amount. Polymer materials like rubber, wool, hair, wood fiber, and silk often behave as electrets . 
For example, the polymer polyvinylidene fluoride (PVDF) exhibits a piezoelectric response several times larger than the traditional piezoelectric material quartz (crystalline SiO 2 ). The deformation (~0.1%) lends itself to useful technical applications such as high-voltage sources, loudspeakers, lasers, as well as chemical, biological, and acousto-optic sensors and/or transducers. Materials can transmit (e.g. glass) or reflect (e.g. metals) visible light. Many materials will transmit some wavelengths while blocking others. For example, window glass is transparent to visible light , but much less so to most of the frequencies of ultraviolet light that cause sunburn . This property is used for frequency-selective optical filters, which can alter the color of incident light. For some purposes, both the optical and mechanical properties of a material can be of interest. For example, the sensors on an infrared homing ("heat-seeking") missile must be protected by a cover that is transparent to infrared radiation . The current material of choice for high-speed infrared-guided missile domes is single-crystal sapphire . The optical transmission of sapphire does not actually extend to cover the entire mid-infrared range (3–5 μm), but starts to drop off at wavelengths greater than approximately 4.5 μm at room temperature. While the strength of sapphire is better than that of other available mid-range infrared dome materials at room temperature, it weakens above 600 °C. A long-standing trade-off exists between optical bandpass and mechanical durability; new materials such as transparent ceramics or optical nanocomposites may provide improved performance. Guided lightwave transmission involves the field of fiber optics and the ability of certain glasses to transmit, simultaneously and with low loss of intensity, a range of frequencies (multi-mode optical waveguides) with little interference between them. Optical waveguides are used as components in integrated optical circuits or as the transmission medium in optical communication systems. A solar cell or photovoltaic cell is a device that converts light energy into electrical energy. Fundamentally, the device needs to fulfill only two functions: photo-generation of charge carriers (electrons and holes) in a light-absorbing material, and separation of the charge carriers to a conductive contact that will transmit the electricity (simply put, carrying electrons off through a metal contact into an external circuit). This conversion is called the photoelectric effect , and the field of research related to solar cells is known as photovoltaics. Solar cells have many applications. They have long been used in situations where electrical power from the grid is unavailable, such as in remote area power systems, Earth-orbiting satellites and space probes, handheld calculators, wrist watches, remote radiotelephones and water pumping applications. More recently, they are starting to be used in assemblies of solar modules (photovoltaic arrays) connected to the electricity grid through an inverter, that is not to act as a sole supply but as an additional electricity source. All solar cells require a light absorbing material contained within the cell structure to absorb photons and generate electrons via the photovoltaic effect . The materials used in solar cells tend to have the property of preferentially absorbing the wavelengths of solar light that reach the earth surface. Some solar cells are optimized for light absorption beyond Earth's atmosphere, as well. 
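As a rough illustration of the requirement that the absorber be matched to the wavelengths it will receive, the following sketch compares photon energies at several wavelengths against an assumed band gap; the gap value (roughly silicon-like) and the wavelengths are illustrative assumptions, not data from this article.

```python
# Sketch: can a photon of wavelength lambda generate an electron-hole pair
# in an absorber with band gap E_g?  E_photon = h*c / lambda.
# The band gap below is an illustrative assumption (roughly silicon-like).
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

def photon_energy_ev(wavelength_nm: float) -> float:
    return H * C / (wavelength_nm * 1e-9) / EV

E_GAP_EV = 1.1  # assumed band gap of the light-absorbing material

for wl in (400, 700, 1000, 1300):  # nm, from violet to near infrared
    e = photon_energy_ev(wl)
    absorbed = e >= E_GAP_EV
    print(f"{wl:5d} nm -> {e:.2f} eV  {'absorbed' if absorbed else 'transmitted (below gap)'}")
```

Photons with energies below the gap pass through largely unabsorbed, which is one reason cells intended for use beyond Earth's atmosphere can be optimized for a different spectrum than terrestrial ones.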
Materials science is an interdisciplinary field of researching and discovering materials . Materials engineering is an engineering field of finding uses for materials in other fields and industries. The intellectual origins of materials science stem from the Age of Enlightenment , when researchers began to use analytical thinking from chemistry , physics , and engineering to understand ancient, phenomenological observations in metallurgy and mineralogy . [ 55 ] [ 56 ] Materials science still incorporates elements of physics, chemistry, and engineering. As such, the field was long considered by academic institutions as a sub-field of these related fields. Beginning in the 1940s, materials science began to be more widely recognized as a specific and distinct field of science and engineering, and major technical universities around the world created dedicated schools for its study. Materials scientists emphasize understanding how the history of a material ( processing ) influences its structure, and thus the material's properties and performance. The understanding of processing -structure-properties relationships is called the materials paradigm. This paradigm is used to advance understanding in a variety of research areas, including nanotechnology , biomaterials , and metallurgy .
https://en.wikipedia.org/wiki/Solid
Solid-phase extraction ( SPE ) [ 1 ] is a solid-liquid extractive technique, by which compounds that are dissolved or suspended in a liquid mixture are separated, isolated or purified, from other compounds in this mixture, according to their physical and chemical properties. Analytical laboratories use solid phase extraction to concentrate and purify samples for analysis. Solid phase extraction can be used to isolate analytes of interest from a wide variety of matrices, including urine, blood, water, beverages, soil, and animal tissue. [ 2 ] [ 3 ] [ 4 ] SPE uses the affinity of solutes, dissolved or suspended in a liquid (known as the mobile phase ), to a solid packing inside a small column, through which the sample is passed (known as the stationary phase ), to separate a mixture into desired and undesired components. The result is that either the desired analytes of interest or undesired impurities in the sample are retained on the stationary phase. The portion that passes through the stationary phase is collected or discarded, depending on whether it contains the desired analytes or undesired impurities. If the portion retained on the stationary phase includes the desired analytes, they can then be removed from the stationary phase for collection in an additional step, in which the stationary phase is rinsed with an appropriate eluent . [ 5 ] It is possible to have an incomplete recovery of the analytes by SPE caused by incomplete extraction or elution. In the case of an incomplete extraction, the analytes do not have enough affinity for the stationary phase and part of them will remain in the permeate. In an incomplete elution, part of the analytes remain in the sorbent because the eluent used does not have a strong enough affinity. [ 6 ] Many of the adsorbents/materials are the same as in chromatographic methods, but SPE is distinctive, with aims separate from chromatography, and so has a unique niche in modern chemical science. SPE is in fact a method of chromatography , in the sense of having a mobile phase, carrying mixtures through a stationary phase, packed inside a column. The chromatographic process is harnessed to create a solid-liquid extractive technique—allowing separation of a mixture of components by taking advantage of large differences between the solid and liquid phase K eq , or equilibrium constant , for each component in the mixture. The chemical considerations for the selection of stationary and mobile phases are similar to those for liquid column chromatography and many of the adsorbents/materials used are the same. The theory, procedures, and aims are different, however, and as an extractive technique it has a unique niche in modern chemical science. A typical solid phase extraction involves five basic steps. First, the cartridge is equilibrated with a non-polar or slightly polar solvent, which wets the surface and penetrates the bonded phase. Then water, or buffer of the same composition as the sample, is typically washed through the column to wet the silica surface. The sample is then added to the cartridge. As the sample passes through the stationary phase, the polar analytes in the sample will interact and retain on the polar sorbent while the solvent, and other non-polar impurities pass through the cartridge. After the sample is loaded, the cartridge is washed with a non-polar solvent to remove further impurities. Then, the analyte is eluted with a polar solvent or a buffer of the appropriate pH. 
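Since the five-step procedure above is essentially a fixed protocol, it can be written down as a simple, declarative sequence; the sketch below is purely illustrative bookkeeping (solvent names and volumes are assumptions) and does not model any retention chemistry.

```python
# Illustrative bookkeeping for the five-step SPE protocol described above.
# Solvent choices and volumes are assumptions for a normal-phase-style cartridge.

SPE_PROTOCOL = [
    ("equilibrate", "slightly polar solvent", 3.0),   # wet the bonded phase
    ("condition",   "water or sample buffer", 3.0),   # wet the silica surface
    ("load",        "sample",                 5.0),   # analytes retained on the sorbent
    ("wash",        "non-polar solvent",      3.0),   # remove weakly retained impurities
    ("elute",       "polar solvent / buffer", 1.0),   # recover analytes for analysis
]

def run_protocol(protocol):
    for i, (step, solvent, volume_ml) in enumerate(protocol, start=1):
        print(f"step {i}: {step:<11s} {volume_ml:.1f} mL of {solvent}")

run_protocol(SPE_PROTOCOL)
```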
A stationary phase of polar, functionally bonded silica with short carbon chains frequently makes up the solid phase. This stationary phase will adsorb polar molecules, which can be collected with a more polar solvent. [ 4 ] Reversed phase SPE separates analytes based on their polarity. The stationary phase of a reversed phase SPE cartridge is derivatized with hydrocarbon chains, which retain compounds of mid to low polarity due to the hydrophobic effect. The analyte can be eluted by washing the cartridge with a non-polar solvent, which disrupts the interaction of the analyte and the stationary phase. [ 4 ] A stationary phase of silica bonded with carbon chains is commonly used. Relying mainly on non-polar, hydrophobic interactions, only non-polar or very weakly polar compounds will adsorb to the surface. [ 4 ] Ion exchange sorbents separate analytes based on electrostatic interactions between the analyte of interest and the positively or negatively charged groups on the stationary phase. For ion exchange to occur, both the stationary phase and the sample must be at a pH where both are charged. Anion exchange sorbents are derivatized with positively charged functional groups that interact with and retain negatively charged anions, such as acids. Strong anion exchange sorbents contain quaternary ammonium groups that have a permanent positive charge in aqueous solutions, and weak anion exchange sorbents use amine groups, which are charged when the pH is below about 9. Strong anion exchange sorbents are useful because any strongly acidic impurities in the sample will bind to the sorbent and usually will not be eluted with the analyte of interest; to recover a strong acid, a weak anion exchange cartridge should be used. To elute the analyte from either the strong or the weak sorbent, the stationary phase is washed with a solvent that neutralizes the charge of either the analyte, the stationary phase, or both. Once the charge is neutralized, the electrostatic interaction between the analyte and the stationary phase no longer exists and the analyte will elute from the cartridge. [ 4 ] Cation exchange sorbents are derivatized with functional groups that interact with and retain positively charged cations, such as bases. Strong cation exchange sorbents contain aliphatic sulfonic acid groups that are always negatively charged in aqueous solution, and weak cation exchange sorbents contain aliphatic carboxylic acids, which are charged when the pH is above about 5. Strong cation exchange sorbents are useful because any strongly basic impurities in the sample will bind to the sorbent and usually will not be eluted with the analyte of interest; to recover a strong base, a weak cation exchange cartridge should be used. To elute the analyte from either the strong or the weak sorbent, the stationary phase is washed with a solvent that neutralizes the ionic interaction between the analyte and the stationary phase. [ 4 ] The stationary phase comes in the form of a packed syringe-shaped cartridge, a 96 well plate, a 47- or 90-mm flat disk, or a microextraction by packed sorbent (MEPS) device, an SPE method that uses a packed sorbent material in a liquid handling syringe. [ 7 ] [ 8 ] Each of these formats can be mounted on its own specific type of extraction manifold. The manifold allows multiple samples to be processed by holding several SPE media in place and allowing an equal number of samples to pass through them simultaneously.
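The pH thresholds quoted above for weak anion and cation exchangers follow from simple acid-base equilibria, so a short Henderson-Hasselbalch sketch can estimate how ionized a sorbent functional group (or analyte) is at a given pH; the pKa values used are illustrative assumptions, not values from this article.

```python
# Sketch: fraction of an acidic or basic group that is ionized at a given pH,
# using the Henderson-Hasselbalch relation. Example pKa values are assumptions.

def fraction_ionized_acid(ph: float, pka: float) -> float:
    """Fraction of an acid (e.g. a carboxylic acid sorbent) present as the anion."""
    return 1.0 / (1.0 + 10 ** (pka - ph))

def fraction_ionized_base(ph: float, pka: float) -> float:
    """Fraction of a base (e.g. an amine sorbent) present as the protonated cation."""
    return 1.0 / (1.0 + 10 ** (ph - pka))

# Illustrative check: a weak cation-exchange carboxylate (assumed pKa ~4.8)
# and a weak anion-exchange amine (assumed pKa ~10) across a pH sweep.
for ph in (3, 5, 7, 9, 11):
    acid = fraction_ionized_acid(ph, pka=4.8)
    base = fraction_ionized_base(ph, pka=10.0)
    print(f"pH {ph:4.1f}: carboxylate ionized {acid:5.1%}, amine protonated {base:5.1%}")
```

In practice this kind of estimate guides the choice of loading and elution pH, so that sorbent and analyte are both charged during loading and at least one of them is neutralized during elution.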
In a standard cartridge SPE manifold up to 24 cartridges can be mounted in parallel, while a typical disk SPE manifold can accommodate 6 disks. Most SPE manifolds are equipped with a vacuum port, where vacuum can be applied to speed up the extraction process by pulling the liquid sample through the stationary phase. The analytes are collected in sample tubes inside or below the manifold after they pass through the stationary phase. Solid phase extraction cartridges and disks can be purchased with several stationary phases, each of which separates analytes depending on different chemical properties. The basis of most stationary phases is silica that has been bonded to a specific functional group. Some of these functional groups include hydrophobic alkyl or aryl chains chains of variable length (for reversed phase), quaternary ammonium or amino groups (for anion exchange), and aliphatic sulfonic acid or carboxyl groups (for cation exchange). [ 4 ] Solid-phase microextraction (SPME), is a solid phase extraction technique that involves the use of a fiber coated with an extracting phase, that can be a liquid ( polymer ) or a solid ( sorbent ), which extracts different kinds of analytes (including both volatile and non-volatile) from different kinds of media, that can be in liquid or gas phase. [ 9 ] The quantity of analyte extracted by the fibre is proportional to its concentration in the sample as long as equilibrium is reached or, in case of short time pre-equilibrium, with help of convection or agitation.
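The statement that the amount extracted by the fibre is proportional to its concentration at equilibrium is often written, in the SPME literature, as n = Kfs·Vf·Vs·C0 / (Kfs·Vf + Vs); this relation is not given in the article above, so the sketch below should be read as an assumed model, evaluated for illustrative partition coefficients and volumes.

```python
# Sketch of a commonly cited SPME equilibrium relation (assumed here, not from the article):
#   n = K_fs * V_f * V_s * C0 / (K_fs * V_f + V_s)
# where K_fs is the fiber/sample partition coefficient, V_f the coating volume,
# V_s the sample volume and C0 the initial analyte concentration.

def spme_amount_extracted(k_fs: float, v_f_ul: float, v_s_ml: float, c0_ng_per_ml: float) -> float:
    """Amount of analyte (ng) on the fiber at equilibrium, for the assumed model."""
    v_f = v_f_ul * 1e-3          # convert uL to mL so all volumes share units
    return k_fs * v_f * v_s_ml * c0_ng_per_ml / (k_fs * v_f + v_s_ml)

# Illustrative numbers: ~0.6 uL coating, 10 mL sample, 5 ng/mL analyte.
for k_fs in (100, 1_000, 10_000):
    n = spme_amount_extracted(k_fs, v_f_ul=0.6, v_s_ml=10.0, c0_ng_per_ml=5.0)
    print(f"K_fs = {k_fs:6d}: extracted ~{n:.2f} ng")
```

Because the expression is linear in C0, a calibration curve of extracted amount against concentration remains linear as long as the other quantities are held constant.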
https://en.wikipedia.org/wiki/Solid-phase_extraction
Solid phase microextraction, or SPME, is a solid phase extraction sampling technique that involves the use of a fiber coated with an extracting phase, which can be a liquid (polymer) or a solid (sorbent), [ 1 ] and which extracts different kinds of analytes (including both volatile and non-volatile) from different kinds of media, which can be in the liquid or gas phase. [ 2 ] The quantity of analyte extracted by the fibre is proportional to its concentration in the sample as long as equilibrium is reached or, in the case of short-time pre-equilibrium, with the help of convection or agitation. After extraction, the SPME fiber is transferred to the injection port of separating instruments, such as a gas chromatograph coupled to a mass spectrometer, [ 3 ] where desorption of the analyte takes place and analysis is carried out. The attraction of SPME is that the extraction is fast, simple, can usually be done without solvents, and detection limits can reach parts per trillion (ppt) levels for certain compounds. SPME also has great potential for field applications; on-site sampling can be done even by nonscientists without the need to have gas chromatography-mass spectrometry equipment at each location. When properly stored, samples can be analyzed days later in the laboratory without significant loss of volatiles. [ 4 ] The coating on the SPME fiber can be selected to improve sensitivity for specific analytes of interest; ideally the sorbent layer will have a high affinity for the target analytes. [ 5 ] [ 6 ] There are many commercially available SPME fiber coatings that are combinations of polydimethylsiloxane, divinylbenzene, Carboxen, polyacrylate, and polyethylene glycol. [ 7 ] [ 8 ] However, one downside to many of the commercially available SPME fibers is that they tend to be physically brittle due to their composition. [ 6 ] Depending on the characteristics of the target analytes, certain properties of the coating, such as polarity, thickness, and surface area, improve extraction. [ 5 ] [ 9 ] The sample matrix can also influence the fiber coating selection. Based on the sample and analytes of interest, the fiber may need to tolerate direct immersion as opposed to a headspace extraction. [ 7 ] In one study, a fiber coating method significantly enhanced the performance of SPME by ensuring a high binding capacity and improved mass transfer efficiency. By preventing the ingress of the polymeric adhesive matrix into the pores of the sorbent particles, the method allows for faster adsorption and desorption times, which is crucial for high-throughput applications. [ 10 ] SPME has become an essential technique in forensic science, particularly for analyzing complex matrices such as blood, urine, and environmental samples. Its advantages include the ability to perform rapid and sensitive extractions without the need for extensive sample preparation, which is crucial in forensic investigations where sample integrity is paramount. For instance, SPME has been successfully employed to detect drugs of abuse, explosives, and other volatile compounds from various samples, allowing for the efficient identification of substances relevant to criminal cases. The automation and miniaturization of SPME techniques further enhance their applicability in forensic settings, enabling high-throughput analysis and reducing the risk of contamination. [ 4 ] SPME is recognized as a green analytical method for sample preparation, particularly in forensic drug analysis.
This technique offers several advantages over traditional methods like liquid-liquid extraction (LLE) and solid-phase extraction (SPE), including automation, rapid sample processing, and reduced solvent usage. SPME allows for the extraction of analytes directly from complex matrices, such as biological and environmental samples, while minimizing the environmental impact associated with conventional extraction techniques. [ 4 ] [ 11 ]
https://en.wikipedia.org/wiki/Solid-phase_microextraction
In chemistry , solid-phase synthesis is a method in which molecules are covalently bound on a solid support material and synthesised step-by-step in a single reaction vessel utilising selective protecting group chemistry. Benefits compared with normal synthesis in a liquid state include: The reaction can be driven to completion and high yields through the use of excess reagent . In this method, building blocks are protected at all reactive functional groups . The order of functional group reactions can be controlled by the order of deprotection. This method is used for the synthesis of peptides , [ 1 ] [ 2 ] deoxyribonucleic acid ( DNA ), ribonucleic acid ( RNA ), and other molecules that need to be synthesised in a certain alignment. [ 3 ] More recently, this method has also been used in combinatorial chemistry and other synthetic applications. The process was originally developed in the 1950s and 1960s by Robert Bruce Merrifield in order to synthesise peptide chains, [ 4 ] and which was the basis for his 1984 Nobel Prize in Chemistry . [ 5 ] In the basic method of solid-phase synthesis, building blocks that have two functional groups are used. One of the functional groups of the building block is usually protected by a protective group. The starting material is a bead which binds to the building block. At first, this bead is added into the solution of the protected building block and stirred. After the reaction between the bead and the protected building block is completed, the solution is removed and the bead is washed. Then the protecting group is removed and the above steps are repeated. After all steps are finished, the synthesised compound is chemically cleaved from the bead. If a compound containing more than two kinds of building blocks is synthesised, a step is added before the deprotection of the building block bound to the bead; a functional group which is on the bead and did not react with an added building block has to be protected by another protecting group which is not removed at the deprotective condition of the building block. Byproducts which lack the building block of this step only are prevented by this step. In addition, this step makes it easy to purify the synthesised compound after cleavage from the bead. Solid-phase synthesis is a common technique for peptide synthesis . Usually, peptides are synthesised from the carbonyl group side (C-terminus) to amino group side (N-terminus) of the amino acid chain in the SPPS method, although peptides are biologically synthesised in the opposite direction in cells. In peptide synthesis, an amino-protected amino acid is bound to a solid phase material or resin (most commonly, low cross-linked polystyrene beads), forming a covalent bond between the carbonyl group and the resin, most often an amido or an ester bond. [ 6 ] Then the amino group is deprotected and reacted with the carbonyl group of the next N-protected amino acid. The solid phase now bears a dipeptide. This cycle is repeated to form the desired peptide chain. After all reactions are complete, the synthesised peptide is cleaved from the bead. The protecting groups for the amino groups mostly used in the peptide synthesis are 9-fluorenylmethyloxycarbonyl group ( Fmoc ) and t-butyloxycarbonyl ( Boc ). A number of amino acids bear functional groups in the side chain which must be protected specifically from reacting with the incoming N-protected amino acids. 
In contrast to Boc and Fmoc groups, these have to be stable over the course of peptide synthesis although they are also removed during the final deprotection of peptides. Relatively short fragments of DNA , RNA , and modified oligonucleotides are also synthesised by the solid-phase method. Although oligonucleotides can be synthesised in a flask, they are almost always synthesised on solid phase using a DNA/RNA synthesizer. For a more comprehensive review, see oligonucleotide synthesis . The method of choice is generally phosphoramidite chemistry, developed in the 1980s.
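The repeated couple-wash-deprotect cycle described above for peptides (and, with different chemistry, for oligonucleotides) is essentially an algorithm, so a toy Python sketch of the loop is given below; the coupling-yield bookkeeping and the sequence are purely illustrative assumptions and do not model real chemistry.

```python
# Toy sketch of the solid-phase synthesis cycle described in the text:
# attach the first protected building block to the bead, then repeat
# (deprotect -> couple the next protected building block -> wash), then cleave.
# This is illustrative bookkeeping only, not a chemical model.

def solid_phase_synthesis(sequence, coupling_yield=0.99):
    """Assemble `sequence` (C-terminus first for peptides) on a resin bead."""
    chain_on_resin = []
    protected = False            # is the growing end still carrying its protecting group?
    full_length_fraction = 1.0   # crude estimate of chains that survived every coupling

    for block in sequence:
        if protected:
            protected = False              # deprotection step (e.g. Fmoc removal)
        chain_on_resin.append(block)       # coupling of the next N-protected building block
        protected = True
        full_length_fraction *= coupling_yield
        # wash step: excess reagent and by-products are rinsed away before the next cycle

    peptide = "-".join(reversed(chain_on_resin))   # cleavage; displayed N- to C-terminus
    return peptide, full_length_fraction

peptide, fraction = solid_phase_synthesis(["Gly", "Ala", "Phe", "Leu"])
print(peptide, f"~{fraction:.1%} full-length")
```

The full-length estimate also illustrates why near-quantitative coupling yields matter: even 99% per cycle compounds to a noticeably lower fraction of full-length chains for long sequences.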
https://en.wikipedia.org/wiki/Solid-phase_synthesis
Solid-state chemistry, also sometimes referred to as materials chemistry, is the study of the synthesis, structure, and properties of solid phase materials. It therefore has a strong overlap with solid-state physics, mineralogy, crystallography, ceramics, metallurgy, thermodynamics, materials science and electronics, with a focus on the synthesis of novel materials and their characterization. A diverse range of synthetic techniques, such as the ceramic method and chemical vapour deposition, are used to make solid-state materials. Solids can be classified as crystalline or amorphous on the basis of the nature of the order present in the arrangement of their constituent particles. [ 1 ] Their elemental compositions, microstructures, and physical properties can be characterized through a variety of analytical methods. Because of its direct relevance to products of commerce, solid state inorganic chemistry has been strongly driven by technology. Progress in the field has often been fueled by the demands of industry, sometimes in collaboration with academia. [ 2 ] Applications discovered in the 20th century include zeolite- and platinum-based catalysts for petroleum processing in the 1950s, high-purity silicon as a core component of microelectronic devices in the 1960s, and "high temperature" superconductivity in the 1980s. The invention of X-ray crystallography in the early 1900s by William Lawrence Bragg was an enabling innovation. Our understanding of how reactions proceed at the atomic level in the solid state was advanced considerably by Carl Wagner's work on oxidation rate theory, counter diffusion of ions, and defect chemistry. Because of his contributions, he has sometimes been referred to as the father of solid state chemistry. [ 3 ] Given the diversity of solid-state compounds, an equally diverse array of methods is used for their preparation. [ 1 ] [ 4 ] Synthesis can range from high-temperature methods, like the ceramic method, to gas-phase methods, like chemical vapour deposition. Often, the methods are chosen to prevent defect formation [ 5 ] or to produce high-purity products. [ 6 ] The ceramic method is one of the most common synthesis techniques. [ 7 ] The synthesis occurs entirely in the solid state. [ 7 ] The reactants are ground together, formed into a pellet using a pellet press and a hydraulic press, and heated at high temperatures. [ 7 ] When the temperature of the reactants is sufficient, the ions at the grain boundaries react to form the desired phases. Generally, ceramic methods give polycrystalline powders, but not single crystals. Using a mortar and pestle, a ResonantAcoustic mixer, or a ball mill, the reactants are ground together, which decreases the particle size and increases the surface area of the reactants. [ 8 ] If mixing alone is not sufficient, techniques such as co-precipitation and sol-gel can be used. [ 7 ] A chemist forms pellets from the ground reactants and places the pellets into containers for heating. [ 7 ] The choice of container depends on the precursors, the reaction temperature and the expected product. [ 7 ] For example, metal oxides are typically synthesized in silica or alumina containers. [ 7 ] A tube furnace heats the pellet. [ 7 ] Tube furnaces are available up to maximum temperatures of about 2800 °C. [ 9 ] Molten flux synthesis can be an efficient method for obtaining single crystals. In this method, the starting reagents are combined with a flux, an inert material with a melting point lower than that of the starting materials. The flux serves as a solvent.
After the reaction, the excess flux can be washed away using an appropriate solvent or, if the flux is a volatile compound, the mixture can be heated again to remove the flux by sublimation. The choice of crucible material plays a great role in molten flux synthesis: the crucible should not react with the flux or the starting reagents. If any of the materials are volatile, it is recommended to conduct the reaction in a sealed ampoule. If the target phase is sensitive to oxygen, a carbon-coated fused silica tube or a carbon crucible inside a fused silica tube is often used, which prevents direct contact between the tube wall and the reagents. Chemical vapour transport results in very pure materials. The reaction typically occurs in a sealed ampoule. [ 10 ] A transporting agent, added to the sealed ampoule, produces a volatile intermediate species from the solid reactant. [ 10 ] For metal oxides, the transporting agent is usually Cl2 or HCl. [ 10 ] The ampoule has a temperature gradient, and, as the gaseous reactant travels along the gradient, it eventually deposits as a crystal. [ 10 ] An example of an industrially used chemical vapour transport reaction is the Mond process. The Mond process involves heating impure nickel in a stream of carbon monoxide to produce pure nickel. [ 6 ] Intercalation synthesis is the insertion of molecules or ions between the layers of a solid. [ 11 ] The layered solid has weak intermolecular bonds holding its layers together. [ 11 ] The process occurs via diffusion. [ 11 ] Intercalation is further driven by ion exchange, acid-base reactions or electrochemical reactions. [ 11 ] The intercalation method was first used in China with the discovery of porcelain. Also, graphene is produced by the intercalation method, and this method is the principle behind lithium-ion batteries. [ 12 ] It is possible to use solvents to prepare solids by precipitation or by evaporation. [ 5 ] At times, the solvent is used hydrothermally, that is, under pressure at temperatures higher than its normal boiling point. [ 5 ] A variation on this theme is the use of flux methods, which use a salt with a relatively low melting point as the solvent. [ 5 ] Many solids react vigorously with gas species like chlorine, iodine, and oxygen. [ 13 ] [ 14 ] Other solids form adducts with gases such as CO or ethylene. Such reactions are conducted in open-ended tubes through which the gases are passed. These reactions can also take place inside a measuring device such as a TGA (thermogravimetric analyser); in that case, stoichiometric information can be obtained during the reaction, which helps identify the products. Chemical vapour deposition is a method widely used for the preparation of coatings and semiconductors from molecular precursors. [ 15 ] A carrier gas transports the gaseous precursors to the material to be coated. [ 16 ] Characterization is the process in which a material's chemical composition, structure, and physical properties are determined using a variety of analytical techniques. Synthetic methodology and characterization often go hand in hand, in the sense that not one but a series of reaction mixtures are prepared and subjected to heat treatment. Stoichiometry, a numerical relationship between the quantities of reactant and product, is typically varied systematically. It is important to find which stoichiometries will lead to new solid compounds or to solid solutions between known ones. A prime method to characterize the reaction products is powder diffraction, because many solid-state reactions will produce polycrystalline solids or powders.
Powder diffraction aids in the identification of known phases in the mixture. [ 17 ] If a pattern is found that is not known in the diffraction data libraries, an attempt can be made to index the pattern. The characterization of a material's properties is typically easier for a product with a crystalline structure. Once the unit cell of a new phase is known, the next step is to establish the stoichiometry of the phase. This can be done in several ways. Sometimes the composition of the original mixture will give a clue, provided that only a product with a single powder pattern is found, or a phase of a certain composition can be made by analogy to a known material, but this is rare. Often, considerable effort in refining the synthetic procedures is required to obtain a pure sample of the new material. If it is possible to separate the product from the rest of the reaction mixture, elemental analysis methods such as scanning electron microscopy (SEM) and transmission electron microscopy (TEM) can be used. The detection of scattered and transmitted electrons from the surface of the sample provides information about the surface topography and composition of the material. [ 18 ] Energy dispersive X-ray spectroscopy (EDX) is a technique that uses electron beam excitation. Exciting the inner shell of an atom with incident electrons emits characteristic X-rays with an energy specific to each element. [ 19 ] The peak energies can identify the chemical composition of a sample, including the distribution and concentration of its elements. [ 19 ] Similar to EDX, X-ray diffraction analysis (XRD) involves the generation of characteristic X-rays upon interaction with the sample. The intensity of diffracted rays scattered at different angles is used to analyze the physical properties of a material, such as phase composition and crystallographic structure. [ 20 ] These techniques can also be coupled to achieve a better effect. For example, SEM is a useful complement to EDX: owing to its focused electron beam, it produces a high-magnification image that provides information on the surface topography. [ 18 ] Once the area of interest has been identified, EDX can be used to determine the elements present in that specific spot. Selected area electron diffraction can be coupled with TEM or SEM to investigate the level of crystallinity and the lattice parameters of a sample. [ 21 ] X-ray diffraction is also used due to its imaging capabilities and speed of data generation. [ 22 ] Establishing the stoichiometry often requires revisiting and refining the preparative procedures, and this is linked to the question of which phases are stable at what composition and what stoichiometry; in other words, to what the phase diagram looks like. [ 23 ] Important tools in establishing this are thermal analysis techniques like DSC or DTA and, increasingly, due to the advent of synchrotrons, temperature-dependent powder diffraction. Increased knowledge of the phase relations often leads to further refinement of the synthetic procedures in an iterative way. New phases are thus characterized by their melting points and their stoichiometric domains. The latter is important for the many solids that are non-stoichiometric compounds. The cell parameters obtained from XRD are particularly helpful for characterizing the homogeneity ranges of non-stoichiometric compounds. In contrast to the long-range structure of crystals, the local structure describes the interaction of the nearest neighbouring atoms.
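Before turning to probes of local structure, the indexing step mentioned above can be made concrete: Bragg's law, nλ = 2d·sinθ, converts measured peak positions into d-spacings that can be compared with a trial unit cell. In the sketch below the wavelength, peak positions and cubic lattice parameter are illustrative assumptions.

```python
# Sketch: convert powder-diffraction peak positions (2-theta) into d-spacings
# with Bragg's law, n*lambda = 2*d*sin(theta), and compare with a trial cubic cell.
# Wavelength, peak list and lattice parameter are illustrative assumptions.
import math

WAVELENGTH_A = 1.5406  # Cu K-alpha1, in angstroms

def d_spacing(two_theta_deg: float, wavelength=WAVELENGTH_A, n: int = 1) -> float:
    theta = math.radians(two_theta_deg / 2)
    return n * wavelength / (2 * math.sin(theta))

def d_cubic(a: float, h: int, k: int, l: int) -> float:
    """d_hkl for a cubic cell of parameter a: d = a / sqrt(h^2 + k^2 + l^2)."""
    return a / math.sqrt(h*h + k*k + l*l)

peaks_two_theta = [28.4, 47.3, 56.1]          # assumed observed peaks (degrees)
trial = [(1, 1, 1), (2, 2, 0), (3, 1, 1)]     # trial indexing for a cubic phase
a_trial = 5.43                                # assumed lattice parameter, angstroms

for tt, hkl in zip(peaks_two_theta, trial):
    print(f"2theta={tt:5.1f}  d_obs={d_spacing(tt):.3f} A  "
          f"d_calc{hkl}={d_cubic(a_trial, *hkl):.3f} A")
```

A consistent set of matches across many peaks is what "indexing the pattern" amounts to in practice.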
Methods of nuclear spectroscopy use specific nuclei to probe the electric and magnetic fields around the nucleus. E.g. electric field gradients are very sensitive to small changes caused by lattice expansion/compression (thermal or pressure), phase changes, or local defects. Common methods are Mössbauer spectroscopy and perturbed angular correlation . For metallic materials, their optical properties arise from the collective excitation of conduction electrons. The coherent oscillations of electrons under electromagnetic radiation along with associated oscillations of the electromagnetic field are called surface plasmon resonances . [ 24 ] The excitation wavelength and frequency of the plasmon resonances provide information on the particle's size, shape, composition, and local optical environment. [ 24 ] For non-metallic materials or semiconductors , they can be characterized by their band structure. It contains a band gap that represents the minimum energy difference between the top of the valence band and the bottom of the conduction band. The band gap can be determined using Ultraviolet-visible spectroscopy to predict the photochemical properties of the semiconductors. [ 25 ] In many cases, new solid compounds are further characterized [ 26 ] by a variety of techniques that straddle the fine line that separates solid-state chemistry from solid-state physics. See Characterisation in material science for additional information.
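Since the band gap is typically read off from the absorption onset in a UV-visible spectrum, a minimal sketch converting an assumed absorption-edge wavelength into a band-gap energy via E = hc/λ is given below; the edge wavelengths and material labels are illustrative assumptions, not data from this article.

```python
# Sketch: estimate a semiconductor band gap from the absorption-edge wavelength
# observed in a UV-visible spectrum, using E_g ~ h*c / lambda_edge.
# The edge wavelengths below are illustrative assumptions.
H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electronvolt

def band_gap_ev(edge_wavelength_nm: float) -> float:
    return H * C / (edge_wavelength_nm * 1e-9) / EV

for name, edge_nm in [("wide-gap oxide", 380), ("visible-light absorber", 550),
                      ("narrow-gap semiconductor", 1100)]:
    print(f"{name:25s} edge ~{edge_nm:4d} nm -> E_g ~ {band_gap_ev(edge_nm):.2f} eV")
```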
https://en.wikipedia.org/wiki/Solid-state_chemistry
Solid-state electronics are semiconductor electronics: electronic equipment that use semiconductor devices such as transistors , diodes and integrated circuits (ICs). [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] The term is also used as an adjective for devices in which semiconductor electronics that have no moving parts replace devices with moving parts, such as the solid-state relay , in which transistor switches are used in place of a moving-arm electromechanical relay , or the solid-state drive (SSD), a type of semiconductor memory used in computers to replace hard disk drives , which store data on a rotating disk. [ 6 ] The term solid-state became popular at the beginning of the semiconductor era in the 1960s to distinguish this new technology. A semiconductor device works by controlling an electric current consisting of electrons or holes moving within a solid crystalline piece of semiconducting material such as silicon , while the thermionic vacuum tubes it replaced worked by controlling a current of electrons or ions in a vacuum within a sealed tube. Although the first solid-state electronic device was the cat's whisker detector , a crude semiconductor diode invented around 1904, solid-state electronics started with the invention of the transistor in 1947. [ 7 ] Before that, all electronic equipment used vacuum tubes , because vacuum tubes were the only electronic components that could amplify —an essential capability in all electronics. The transistor, which was invented by John Bardeen and Walter Houser Brattain while working under William Shockley at Bell Laboratories in 1947, [ 8 ] could also amplify, and replaced vacuum tubes. The first transistor hi-fi system was developed by engineers at GE and demonstrated at the University of Philadelphia in 1955. [ 9 ] In terms of commercial production, The Fisher TR-1 was the first "all transistor" preamplifier , which became available mid-1956. [ 10 ] In 1961, a company named Transis-tronics released a solid-state amplifier, the TEC S-15. [ 11 ] The replacement of bulky, fragile, energy-hungry vacuum tubes by transistors in the 1960s and 1970s created a revolution not just in technology but in people's habits, making possible the first truly portable consumer electronics such as the transistor radio , cassette tape player , walkie-talkie and quartz watch , as well as the first practical computers and mobile phones . Other examples of solid state electronic devices are the microprocessor chip, LED lamp, solar cell , charge coupled device (CCD) image sensor used in cameras, and semiconductor laser . Also during the 1960s and 1970s, television set manufacturers switched from vacuum tubes to semiconductors, and advertised sets as "100% solid state" [ 12 ] even though the cathode-ray tube (CRT) was still a vacuum tube. It meant only the chassis was 100% solid-state, not including the CRT. Early advertisements spelled out this distinction, [ 13 ] but later advertisements assumed the audience had already been educated about it and shortened it to just "100% solid state". LED displays can be said to be truly 100% solid-state. [ 14 ]
https://en.wikipedia.org/wiki/Solid-state_electronics
Solid state fermentation (SSF) is a biomolecule manufacturing process used in the food, pharmaceutical, cosmetic, fuel and textile industries. These biomolecules are mostly metabolites generated by microorganisms grown on a solid support selected for this purpose. This technology for the culture of microorganisms is an alternative to liquid or submerged fermentation , used predominantly for industrial purposes. This process consists of depositing a solid culture substrate, such as rice or wheat bran, on flatbeds after seeding it with microorganisms ; the substrate is then left in a temperature-controlled room for several days. Liquid state fermentation is performed in tanks, which can reach 1,001 to 2,500 square metres (10,770 to 26,910 sq ft) at an industrial scale. Liquid culture is ideal for the growing of unicellular organisms such as bacteria or yeasts. To achieve liquid aerobic fermentation, it is necessary to constantly supply the microorganism with oxygen, which is generally done via stirring the fermentation media. Accurately managing the synthesis of the desired metabolites requires regulating temperature, soluble oxygen, ionic strength, pH and control nutrients. Applying this growing technique to filamentous fungi leads to difficulties. The fungus develops in its vegetative form, generating hyphae or multicellular ramous filaments , while a septum separates the cells. As this mycelium develops in a liquid environment, it generates abundant viscosity in the growing medium, reducing oxygen solubility, while stirring disrupts the cell network increasing cell mortality. In nature, filamentous fungi grow on the ground, decomposing vegetal compounds under naturally ventilated conditions. Therefore, solid state fermentation enables the optimal development of filamentous fungi, allowing the mycelium to spread on the surface of solid compounds among which air can flow. Solid state fermentation uses culture substrates with low water levels (reduced water activity), which is particularly appropriate for mould. The methods used to grow filamentous fungi using solid state fermentation allow the best reproduction of their natural environment. The medium is saturated with water but little of it is free-flowing. The solid medium comprises both the substrate and the solid support on which the fermentation takes place. The substrate used is generally composed of vegetal byproducts such as beet pulp or wheat bran. [ 1 ] [ 2 ] [ 3 ] [ 4 ] At the beginning of the growth process, the substrates and solid culture compounds are non-soluble compounds composed of very large, biochemically complex molecules that the fungus will cut off to get essential C and N nutrients. To develop its natural substrate, the fungal organism sets forth its entire genetic potential to produce the metabolites necessary for its growth. The composition of the growth medium guides the microorganism's metabolism towards the production of enzymes that release bio-available single molecules such as sugars or amino acids by carving out macromolecules. Therefore, when selecting the components of the growth medium it is possible to guide the cells towards the production of the desired metabolite(s), mainly enzymes that transform polymers (cellulose, hemicellulose, pectins, proteins) into single moieties in a very efficient and cost-effective manner. 
Compared to submerged fermentation processes, solid state fermentation is more cost-effective: smaller vessels, lower water consumption, reduced wastewater treatment costs and lower energy consumption (no need to heat up water, poor mechanical energy input due to smooth stirring). [ 4 ] [ 5 ] Cultivating on heterogeneous substrates requires expertise to maintain optimal growth conditions. Air flow monitoring is key because it impacts temperature, oxygen supply and moisture. In order to maintain sufficient moisture content for the growth of filamentous fungus, waterlogged air is used and may require further addition of water. In most cases, solid state fermentation does not require a completely sterile environment as the initial sterilization of the fermentation substrate associated with the rapid colonization of the substrate by the fungous microorganism limits the development of the autochthonous flora. [ 4 ] Traditionally, SSF has been used in Asian countries to produce Koji using rice to manufacture alcoholic beverages such as Sake or Koji using soybean seeds. The latter produces sauces such as soy sauce or other foods. In Western countries, the traditional manufacturing process of many foods uses SSF. Examples include fermented bakery products such as bread or for the maturing of cheese. SSF is also widely used to prepare raw materials such as chocolate and coffee; typically cacao bean fermentation and coffee bean skin removal are SSF processes carried out under natural tropical conditions. Enzymes and enzymatic complexes able to break down difficult-to-transform macromolecules such as cellulose, hemicelluloses, pectin and proteins. Solid state fermentation is well suited for the production of various enzymatic complexes composed of multiple enzymes. [ 2 ] [ 6 ] [ 4 ] Enzymatic compounds generated by SSF find outlets in all sectors where digestibility, solubility or viscosity is needed. This is why SSF enzymes are widely used in the following industries: Liquid, submerged and solid state fermentation are age-old techniques used for the preservation and manufacturing of foods. During the second half of the twentieth century, liquid state fermentation developed on an industrial scale to manufacture vital metabolites such as antibiotics. Economic changes and growing environmental awareness generate new perspectives for solid state fermentation. SSF adds value to insoluble agricultural byproducts thanks to its higher energy efficiency and reduced water consumption. The renewal of SSF is now possible thanks to engineering firms, mainly from Asia, that have developed a new generation of equipment. Fujiwara makes vessels able to transform substrate areas up to 400 square metres (4,300 sq ft) for the production of soy sauce or sake. Other companies use solid state fermentation for enzyme complexes. In France Lyven has manufactured Pectinases and Hemicellulases on beet pulp and wheat bran since 1980. The company (now part of Soufflet Group) is now involved in a global R&D programme focusing on SSF technology.
https://en.wikipedia.org/wiki/Solid-state_fermentation
Solid-state nuclear magnetic resonance (ssNMR) is a spectroscopy technique used to characterize atomic-level structure and dynamics in solid materials. ssNMR spectra are broader than solution-state spectra due to nuclear spin interactions, which can be categorized as dipolar coupling, chemical shielding, quadrupolar interactions, and J-coupling. These interactions directly affect the line shapes of experimental ssNMR spectra, which can be seen in powder and dipolar patterns. Many essential solid-state techniques, alongside advanced ssNMR techniques, may be applied to elucidate the fundamental aspects of solid materials. ssNMR is often combined with magic angle spinning (MAS) to remove anisotropic interactions and improve the sensitivity of the technique. The applications of ssNMR further extend to biology and medicine. The resonance frequency of a nuclear spin depends on the strength of the magnetic field at the nucleus, which can be modified by isotropic (e.g. chemical shift, isotropic J-coupling) and anisotropic interactions (e.g. chemical shift anisotropy, dipolar interactions). In a classical liquid-state NMR experiment, molecular tumbling coming from Brownian motion averages anisotropic interactions to zero and they are therefore not reflected in the NMR spectrum. However, in media with no or little mobility (e.g. crystalline powders, glasses, large membrane vesicles, molecular aggregates), anisotropic local fields or interactions have a substantial influence on the behaviour of nuclear spins, which results in line broadening of the NMR spectra. Chemical shielding is a local property of each nuclear site in a molecule or compound, and is proportional to the applied external magnetic field. The external magnetic field induces currents of the electrons in molecular orbitals. These induced currents create local magnetic fields that lead to characteristic changes in resonance frequency. These changes can be predicted from molecular structure using empirical rules or quantum-chemical calculations. In general, the chemical shielding is anisotropic because of the anisotropic distribution of molecular orbitals around the nuclear sites. Under sufficiently fast magic angle spinning, or under the effect of molecular tumbling in solution-state NMR, the anisotropic dependence of the chemical shielding is time-averaged to zero, leaving only the isotropic chemical shift. Nuclear spins exhibit a magnetic dipole moment, which generates a magnetic field that interacts with the dipole moments of other nuclei (dipolar coupling). The magnitude of the interaction is dependent on the gyromagnetic ratios of the spin species, the internuclear distance r, and the orientation, with respect to the external magnetic field B, of the vector connecting the two nuclear spins. The maximum dipolar coupling is given by the dipolar coupling constant d = μ0γ1γ2ħ/(4πr³), where γ1 and γ2 are the gyromagnetic ratios of the nuclei, ħ is the reduced Planck constant, and μ0 is the vacuum permeability. In a strong magnetic field, the dipolar coupling depends on the angle θ between the internuclear vector and the external magnetic field B, varying as 3cos²θ − 1. D becomes zero for θm = arccos √(1/3) = arctan √2 ≃ 54.7°. Consequently, two nuclei with a dipolar coupling vector at an angle of θm = 54.7° to a strong external magnetic field have zero dipolar coupling.
θm is called the magic angle. Magic angle spinning is typically used to remove dipolar couplings weaker than the spinning rate. Nuclei with a spin quantum number >1/2 have a non-spherical charge distribution and a quadrupole moment. [ 2 ] The quadrupole moment is a second-rank tensor that couples to the surrounding electric field gradient, another second-rank tensor. [ 2 ] [ 3 ] Nuclear quadrupole coupling is typically the second largest interaction in NMR, comparable in size to the largest interaction, the Zeeman interaction. [ 4 ] When the nuclear quadrupole coupling is not negligible relative to the Zeeman coupling, higher order corrections are needed to describe the NMR spectrum correctly. In such cases, the first-order correction to the NMR transition frequency leads to a strong anisotropic line broadening of the NMR spectrum. However, all symmetric transitions, between mI and −mI levels, are unaffected by the first-order frequency contribution. The second-order frequency contribution depends on the P4 Legendre polynomial, which has zero points at 30.6° and 70.1°. These anisotropic broadenings can be removed using DOR (double rotation), in which the sample is spun about two angles at the same time, or DAS (dynamic angle spinning), [ 5 ] in which the spinning axis is switched quickly between two angles. Both techniques were developed in the late 1980s, and require specialized hardware (probes). Multiple quantum magic angle spinning (MQMAS) NMR was developed in 1995 and has become a routine method for obtaining high resolution solid-state NMR spectra of quadrupolar nuclei. [ 6 ] [ 7 ] A similar method to MQMAS is satellite transition magic angle spinning (STMAS) NMR, developed in 2000. The J-coupling or indirect nuclear spin-spin coupling (sometimes also called "scalar" coupling despite the fact that J is a tensor quantity) describes the interaction of nuclear spins through chemical bonds. J-couplings are not always resolved in solids owing to the typically large linewidths observed in solid-state NMR. Paramagnetic substances are subject to the Knight shift. A powder pattern arises in powdered samples where crystallites are randomly oriented relative to the magnetic field so that all molecular orientations are present. In the presence of a chemical shift anisotropy interaction, each orientation with respect to the magnetic field gives a different resonance frequency. If enough crystallites are present, all the different contributions overlap continuously and lead to a smooth spectrum. Fitting of the pattern in a static ssNMR experiment gives information about the shielding tensor, which is often described by the isotropic chemical shift δiso, the chemical shift anisotropy parameter ΔCS, and the asymmetry parameter η. [ 8 ] The dipolar powder pattern (also called the Pake pattern) has a very characteristic shape that arises when two nuclear spins are coupled together within a crystallite. The splitting between the maxima (the "horns") of the pattern is equal to the dipolar coupling constant d = μ0γ1γ2ħ/(4πr³), [ 8 ] where γ1 and γ2 are the gyromagnetic ratios of the dipolar-coupled nuclei, r is the internuclear distance, ħ is the reduced Planck constant, and μ0 is the vacuum permeability. Magic angle spinning (MAS) is a technique routinely used in ssNMR to improve the resolution of ssNMR spectra. [ 9 ]
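Before the description of magic angle spinning continues, the quantities introduced above can be put into numbers: the sketch below evaluates the dipolar coupling constant d = μ0ħγ1γ2/(4πr³) and an orientation factor of the form (3cos²θ − 1)/2 for an assumed one-bond 1H-13C pair. The gyromagnetic ratios are standard constants; the internuclear distance and the choice of convention for the orientation factor are illustrative assumptions.

```python
# Sketch: heteronuclear dipolar coupling constant d = (mu0/4pi) * hbar * g1 * g2 / r^3
# and its orientation dependence ~ (3cos^2(theta) - 1)/2, which vanishes at the magic angle.
# Gyromagnetic ratios are standard values; the 1H-13C distance is an illustrative assumption.
import math

MU0_OVER_4PI = 1e-7          # T*m/A
HBAR = 1.0546e-34            # J*s
GAMMA_1H = 267.522e6         # rad s^-1 T^-1
GAMMA_13C = 67.283e6         # rad s^-1 T^-1

def dipolar_constant_hz(gamma1, gamma2, r_m):
    """Dipolar coupling constant in Hz for two spins a distance r apart."""
    return MU0_OVER_4PI * HBAR * gamma1 * gamma2 / r_m**3 / (2 * math.pi)

r_CH = 1.09e-10              # assumed one-bond C-H distance, metres
d = dipolar_constant_hz(GAMMA_1H, GAMMA_13C, r_CH)
magic_angle = math.degrees(math.acos(math.sqrt(1/3)))

print(f"d(1H-13C, r = 1.09 A) ~ {d/1e3:.1f} kHz")
print(f"magic angle = {magic_angle:.2f} deg")
for theta_deg in (0, 30, magic_angle, 90):
    scale = (3 * math.cos(math.radians(theta_deg))**2 - 1) / 2
    print(f"theta = {theta_deg:5.1f} deg -> orientation factor {scale:+.3f}")
```

At the magic angle the orientation factor vanishes, which is the basis of the line narrowing obtained by magic angle spinning.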
After applying the MAS technique, NMR spectra become sharper and narrower. [ 9 ] [ 10 ] This improved resolution results from manipulating a sample's spin interactions with the applied magnetic field. This is achieved by rotating the sample at a certain angle to the magnetic field to fully or partially average out anisotropic nuclear interactions such as dipolar, chemical shift anisotropy, and quadrupolar interactions. [ 9 ] This rotation angle is called the magic angle θm (ca. 54.74°, where cos²θm = 1/3). To achieve the complete averaging of these interactions, the sample needs to be spun at a rate that is at least higher than the largest anisotropy. [ 9 ] [ 10 ] Spinning a powder sample at a slower rate than the largest component of the chemical shift anisotropy results in an incomplete averaging of the interaction, and produces a set of spinning sidebands in addition to the isotropic line, centred at the isotropic chemical shift. [ 10 ] Spinning sidebands are sharp lines separated from the isotropic frequency by a multiple of the spinning rate. Although spinning sidebands can be used to measure anisotropic interactions, they are often undesirable and are removed by spinning the sample faster or by recording the data points synchronously with the rotor period. Cross-polarization (CP) is a fundamental radio-frequency (RF) pulse sequence and a building block of many solid-state NMR experiments. It is typically used to enhance the signal of dilute nuclei with a low gyromagnetic ratio (e.g. 13C, 15N) by magnetization transfer from abundant nuclei with a high gyromagnetic ratio (e.g. 1H), or as a spectral editing method to obtain through-space information (e.g. directed 15N → 13C CP in protein spectroscopy). [ 9 ] To establish magnetization transfer, RF pulses ("contact pulses") are simultaneously applied on both frequency channels to produce B1 fields whose strengths fulfil the Hartmann–Hahn condition γ1B1(1) = γ2B1(2) ± nωR, [ 11 ] [ 12 ] where γ1 and γ2 are the gyromagnetic ratios, B1(1) and B1(2) are the RF field strengths on the two channels, ωR is the spinning rate, and n is an integer. In practice, the pulse power as well as the length of the contact pulse are experimentally optimised. The power of one contact pulse is typically ramped to achieve a more broadband and efficient magnetisation transfer. Spin interactions can be removed (decoupled) to increase the resolution of NMR spectra during the detection, or to extend the lifetime of the nuclear magnetization. Heteronuclear decoupling is achieved by radio-frequency irradiation at the frequency of the nucleus to be decoupled, which is often 1H. The irradiation can be continuous (continuous wave decoupling [ 13 ] ), or a series of pulses that extend the performance and the bandwidth of the decoupling (TPPM, [ 14 ] SPINAL-64, [ 15 ] SWf-TPPM [ 16 ] ). Homonuclear decoupling is achieved with multiple-pulse sequences (WAHUHA, [ 17 ] MREV-8, [ 18 ] BR-24, [ 19 ] BLEW-12, [ 19 ] FSLG [ 20 ] ), or with continuous wave modulation (DUMBO, [ 21 ] eDUMBO [ 22 ] ). Dipolar interactions can also be removed with magic angle spinning. Ultra-fast MAS (from 60 kHz up to above 111 kHz) is an efficient way to average all dipolar interactions, including 1H–1H homonuclear dipolar interactions, which extends the resolution of 1H spectra and enables the use of pulse sequences from solution-state NMR. [ 23 ] [ 24 ]
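To illustrate the Hartmann–Hahn matching condition numerically, the sketch below checks whether two assumed RF nutation frequencies differ by an integer multiple of an assumed spinning rate, i.e. whether a sideband match exists; all of the numbers are illustrative assumptions, not recommended experimental settings.

```python
# Sketch: check an assumed Hartmann-Hahn match under MAS.
# Matching (n = +/-1, +/-2 sideband conditions) occurs when the two RF nutation
# frequencies differ by an integer multiple of the spinning rate:
#   nu1(1H) = nu1(X) +/- n * nu_rotor
# All numbers below are illustrative assumptions.

def hartmann_hahn_order(nu1_h_khz: float, nu1_x_khz: float, nu_rotor_khz: float,
                        tol_khz: float = 0.5):
    """Return the sideband order n if |nu1_H - nu1_X| is close to n * nu_rotor."""
    mismatch = abs(nu1_h_khz - nu1_x_khz)
    n = round(mismatch / nu_rotor_khz)
    if abs(mismatch - n * nu_rotor_khz) <= tol_khz:
        return n
    return None

settings = [(70.0, 60.0, 10.0),   # matched on the n = 1 condition
            (70.0, 50.0, 10.0),   # matched on the n = 2 condition
            (70.0, 57.0, 10.0)]   # mismatched: transfer would be inefficient

for nu_h, nu_x, nu_r in settings:
    n = hartmann_hahn_order(nu_h, nu_x, nu_r)
    status = f"matched (n = {n})" if n is not None else "not matched"
    print(f"1H {nu_h:.0f} kHz / X {nu_x:.0f} kHz @ MAS {nu_r:.0f} kHz -> {status}")
```

In practice the match is found by experimentally optimising the contact-pulse powers, as noted above, rather than by calculation alone.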
Rotational Echo DOuble Resonance (REDOR) experiments [ 25 ] [ 26 ] are a type of heteronuclear dipolar recoupling experiment which enables the re-introduction of heteronuclear dipolar couplings averaged out by MAS. The reintroduction of such dipolar coupling reduces the intensity of the NMR signal compared to a reference spectrum in which no dephasing pulse is used. REDOR can be used to measure heteronuclear distances, and is the basis of NMR crystallographic studies. The strong 1H–1H homonuclear dipolar interactions, associated with broad NMR lines and short T2 relaxation times, long limited the usefulness of proton detection in biomolecular ssNMR. Fast MAS and the reduction of dipolar interactions by deuteration have made proton ssNMR as versatile as it is in solution. This includes spectral dispersion in multi-dimensional experiments [ 27 ] and structurally valuable restraints and parameters important for studying material dynamics. [ 28 ] Ultra-fast MAS and the resulting sharpening of the NMR lines enable NMR pulse sequences to capitalize on proton detection to improve the sensitivity of the experiments compared to the direct detection of a spin-1/2 system (X). The enhancement factor ξ depends on the gyromagnetic ratios γ, on the NMR line widths W, and on the quality factors Q of the probe resonances. [ 29 ] Magic angle spinning dynamic nuclear polarization (MAS-DNP) is a technique that increases the sensitivity of NMR experiments by several orders of magnitude. [ 30 ] [ 31 ] It involves transferring the very high polarisation of unpaired electrons to nearby nuclei. This is achieved at cryogenic temperatures through continuous microwave irradiation from a klystron or a gyrotron, with a frequency close to the corresponding electron paramagnetic resonance (EPR) frequency. Developments in MAS-DNP instrumentation, and the improvement of polarising agents (TOTAPOL, AMUPOL, TEKPOL, etc. [ 31 ] ) to achieve a more efficient transfer of polarisation, have dramatically reduced experiment times, which has enabled the observation of surfaces, [ 32 ] insensitive isotopes, [ 33 ] multidimensional experiments on low natural abundance nuclei, [ 34 ] and diluted species. [ 35 ] Beta-detected nuclear magnetic resonance (β-NMR) is a specialized technique with working principles similar to those of muon spin spectroscopy. [ 36 ] It is used as a powerful probe in domains such as chemistry, materials science, condensed matter physics, and biology. [ 36 ] [ 37 ] [ 38 ] β-NMR is practiced at facilities such as TRIUMF and ISOLDE, as well as by research groups in Osaka and Moscow. [ 38 ] What makes β-NMR different from conventional NMR is, firstly, where and when the spin polarization of the nuclei occurs and, secondly, how the signal is produced. [ 36 ] [ 39 ] To conduct a β-NMR experiment, optical pumping is performed on a radioactive beam of particles, such as 8Li and 31Mg, to polarize their nuclear spins to nearly one hundred percent. [ 36 ] [ 40 ] The isotopes are subsequently implanted into a sample in vacuum in the dilute limit to eliminate homonuclear probe interactions. The spin–lattice relaxation of the probe is monitored through the parity-violating beta decay of the radioactive isotope. [ 36 ] [ 40 ] This anisotropic decay is where the signal originates in a β-NMR experiment. [ 36 ] [ 39 ] This technique allows for investigation of the local magnetic and electronic environment within a material.
[ 36 ] [ 40 ] ssNMR spectroscopy serves as an effective analytical tool in biological, organic, and inorganic chemistry due to its close resemblance to liquid-state spectra while providing additional insights into anisotropic interactions. [ 41 ] It is used to characterize chemical composition, structure, local motions, kinetics, and thermodynamics, with the special ability to assign the observed behavior to specific sites in a molecule. It is also crucial in the area of surface and interfacial chemistry. [ 42 ] ssNMR is used to study insoluble proteins and proteins such as membrane proteins [ 43 ] and amyloid fibrils . [ 44 ] Using the principles of MAS, protein tertiary structure information can be determined. [ 45 ] This includes the assessment of protein dynamics. [ 46 ] ssNMR is used to study biomaterials such as bone , [ 47 ] [ 48 ] teeth , [ 49 ] [ 50 ] hair , [ 51 ] silk , [ 52 ] wood , [ 53 ] as well as viruses , [ 54 ] [ 55 ] plants , [ 56 ] [ 57 ] cells , [ 58 ] [ 59 ] and collected biopsies . [ 60 ] ssNMR is used in pharmaceutical research for the characterization of drug polymorphs and solid dispersions. [ 61 ] ssNMR spectroscopy is used in materials science to analyze solid samples. [ 62 ] Here, molecules have restricted motion which leads to complex magnetic interactions, such as dipole-dipole coupling, chemical shift anisotropy, and quadrupolar interactions. [ 63 ] These interactions can provide more detailed information than X-ray diffraction or solution NMR spectroscopy about the material's structure to elucidate information on the solid's (crystalline and non-crystalline) local structure and dynamics. ssNMR has been successfully used to study metal organic frameworks , [ 64 ] solid-state batteries , [ 65 ] surfaces of nanoporous materials , [ 66 ] and polymers . [ 67 ]
https://en.wikipedia.org/wiki/Solid-state_nuclear_magnetic_resonance
A solid-state nuclear track detector or SSNTD (also known as an etched track detector or a dielectric track detector , DTD ) is a sample of a solid material (photographic emulsion , crystal, glass or plastic) exposed to nuclear radiation ( neutrons or charged particles , occasionally also gamma rays ), etched in a corrosive chemical, and examined microscopically. When the nuclear particles pass through the material they leave trails of molecular damage, and these damaged regions are etched faster than the bulk material, generating holes called tracks . The size and shape of these tracks yield information about the mass, charge, energy and direction of motion of the particles. The main advantages over other radiation detectors are the detailed information available on individual particles, the persistence of the tracks allowing measurements to be made over long periods of time, and the simple, cheap and robust construction of the detector. For these reasons, SSNTDs are commonly used to study cosmic rays , long-lived radioactive elements , radon concentration in houses, and the age of geological samples. The basis of SSNTDs is that charged particles damage the detector within nanometers along the track in such a way that the track can be etched many times faster than the undamaged material. Etching, typically for several hours, enlarges the damage to conical pits of micrometer dimensions, that can be observed with a microscope. For a given type of particle, the length of the track gives the energy of the particle. The charge can be determined from the etch rate of the track compared to that of the bulk. If the particles enter the surface at normal incidence, the pits are circular; otherwise the ellipticity and orientation of the elliptical pit mouth indicate the direction of incidence. A material commonly used in SSNTDs is polyallyl diglycol carbonate (also known as CR-39). It is a clear, colorless, rigid plastic with the chemical formula C 12 H 18 O 7 . Etching to expose radiation damage is typically performed using solutions of caustic alkalis such as sodium hydroxide , often at elevated temperatures for several hours.
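The paragraph above notes that the particle's charge is inferred from the ratio of the track etch rate to the bulk etch rate, and that pit shape depends on the angle of incidence. As a hedged illustration, the sketch below uses the commonly quoted registration condition sin θ_c = V_B / V_T for the minimum dip angle at which an etchable pit forms; this relation and the etch rates are assumptions for illustration, not values from the text.

```python
import math

# Assumed etch rates (illustrative values, not from the text).
v_bulk = 1.2    # bulk etch rate of the undamaged detector, um/h
v_track = 6.0   # etch rate along the damaged particle track, um/h

etch_rate_ratio = v_track / v_bulk
# Commonly used registration condition: a track inclined at dip angle theta
# to the surface only produces a pit if v_track * sin(theta) > v_bulk,
# i.e. sin(theta_c) = v_bulk / v_track defines the critical angle.
theta_c = math.degrees(math.asin(v_bulk / v_track))

print(f"etch rate ratio V_T/V_B      = {etch_rate_ratio:.1f}")
print(f"critical registration angle ~ {theta_c:.1f} degrees")
```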
https://en.wikipedia.org/wiki/Solid-state_nuclear_track_detector
Solid-state physics is the study of rigid matter , or solids , through methods such as solid-state chemistry , quantum mechanics , crystallography , electromagnetism , and metallurgy . It is the largest branch of condensed matter physics . Solid-state physics studies how the large-scale properties of solid materials result from their atomic -scale properties. Thus, solid-state physics forms a theoretical basis of materials science . Along with solid-state chemistry , it also has direct applications in the technology of transistors and semiconductors . Solid materials are formed from densely packed atoms, which interact intensely. These interactions produce the mechanical (e.g. hardness and elasticity ), thermal , electrical , magnetic and optical properties of solids. Depending on the material involved and the conditions in which it was formed, the atoms may be arranged in a regular, geometric pattern ( crystalline solids , which include metals and ordinary water ice ) or irregularly (an amorphous solid such as common window glass ). The bulk of solid-state physics, as a general theory, is focused on crystals . Primarily, this is because the periodicity of atoms in a crystal — its defining characteristic — facilitates mathematical modeling. Likewise, crystalline materials often have electrical , magnetic , optical , or mechanical properties that can be exploited for engineering purposes. The forces between the atoms in a crystal can take a variety of forms. For example, in a crystal of sodium chloride (common salt), the crystal is made up of ionic sodium and chlorine , and held together with ionic bonds . In others, the atoms share electrons and form covalent bonds . In metals, electrons are shared amongst the whole crystal in metallic bonding . Finally, the noble gases do not undergo any of these types of bonding. In solid form, the noble gases are held together with van der Waals forces resulting from the polarisation of the electronic charge cloud on each atom. The differences between the types of solid result from the differences between their bonding. The physical properties of solids have been common subjects of scientific inquiry for centuries, but a separate field going by the name of solid-state physics did not emerge until the 1940s , in particular with the establishment of the Division of Solid State Physics (DSSP) within the American Physical Society . The DSSP catered to industrial physicists, and solid-state physics became associated with the technological applications made possible by research on solids. By the early 1960s, the DSSP was the largest division of the American Physical Society. [ 1 ] [ 2 ] Large communities of solid state physicists also emerged in Europe after World War II , in particular in England , Germany , and the Soviet Union . [ 3 ] In the United States and Europe, solid state became a prominent field through its investigations into semiconductors , superconductivity , nuclear magnetic resonance , and diverse other phenomena. During the early Cold War, research in solid state physics was often not restricted to solids, which led some physicists in the 1970s and 1980s to found the field of condensed matter physics , which organized around common techniques used to investigate solids, liquids, plasmas, and other complex matter. [ 1 ] Today, solid-state physics is broadly considered to be the subfield of condensed matter physics, often referred to as hard condensed matter, that focuses on the properties of solids with regular crystal lattices. 
Many properties of materials are affected by their crystal structure . This structure can be investigated using a range of crystallographic techniques, including X-ray crystallography , neutron diffraction and electron diffraction . The sizes of the individual crystals in a crystalline solid material vary depending on the material involved and the conditions when it was formed. Most crystalline materials encountered in everyday life are polycrystalline , with the individual crystals being microscopic in scale, but macroscopic single crystals can be produced either naturally (e.g. diamonds ) or artificially. Real crystals feature defects or irregularities in the ideal arrangements, and it is these defects that critically determine many of the electrical and mechanical properties of real materials. Properties of materials such as electrical conduction and heat capacity are investigated by solid state physics. An early model of electrical conduction was the Drude model , which applied kinetic theory to the electrons in a solid. By assuming that the material contains immobile positive ions and an "electron gas" of classical, non-interacting electrons, the Drude model was able to explain electrical and thermal conductivity and the Hall effect in metals, although it greatly overestimated the electronic heat capacity. Arnold Sommerfeld combined the classical Drude model with quantum mechanics in the free electron model (or Drude–Sommerfeld model). Here, the electrons are modelled as a Fermi gas , a gas of particles which obey the quantum mechanical Fermi–Dirac statistics . The free electron model gave improved predictions for the heat capacity of metals; however, it was unable to explain the existence of insulators . The nearly free electron model is a modification of the free electron model which includes a weak periodic perturbation meant to model the interaction between the conduction electrons and the ions in a crystalline solid. By introducing the idea of electronic bands , the theory explains the existence of conductors , semiconductors and insulators . The nearly free electron model rewrites the Schrödinger equation for the case of a periodic potential . The solutions in this case are known as Bloch states . Since Bloch's theorem applies only to periodic potentials, and since unceasing random movements of atoms in a crystal disrupt periodicity, this use of Bloch's theorem is only an approximation, but it has proven to be a tremendously valuable approximation, without which most solid-state physics analysis would be intractable. Deviations from periodicity are treated by quantum mechanical perturbation theory . Solid-state physics also continues to encompass a wide range of modern research topics.
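As a small worked example of the Drude picture described above, the sketch below evaluates the DC conductivity σ = n e² τ / m for an assumed electron density and relaxation time. The numbers are rough, copper-like values chosen only for illustration, not data from the text.

```python
# Drude DC conductivity: sigma = n * e^2 * tau / m
e = 1.602176634e-19      # elementary charge, C
m_e = 9.1093837015e-31   # electron mass, kg

n = 8.5e28               # assumed conduction-electron density, m^-3 (copper-like)
tau = 2.5e-14            # assumed relaxation time between collisions, s

sigma = n * e**2 * tau / m_e
print(f"Drude conductivity ~ {sigma:.2e} S/m")   # of order 1e7 S/m, as for good metals
```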
https://en.wikipedia.org/wiki/Solid-state_physics
The solid-state reaction route is the most widely used method for the preparation of polycrystalline solids from a mixture of solid starting materials. Solids do not react together at room temperature over normal time scales, and it is necessary to heat them to much higher temperatures, often 1000 to 1500 °C, in order for the reaction to occur at an appreciable rate. The factors on which the feasibility and rate of a solid state reaction depend include the reaction conditions, the structural properties of the reactants, the surface area of the solids, their reactivity, and the thermodynamic free energy change associated with the reaction. [ 1 ] [ 2 ] The starting materials are the solid reactants from which it is proposed to prepare a solid crystalline compound. The selection of reactant chemicals depends on the reaction conditions and the expected nature of the product. The reactants are dried thoroughly prior to weighing. As an increase in surface area enhances the reaction rate, fine-grained materials should be used if possible. After the reactants have been weighed out in the required amounts, they are mixed. For manual mixing of small quantities, an agate mortar and pestle are usually employed. A sufficient amount of a volatile organic liquid – preferably acetone or alcohol – is added to the mixture to aid homogenization. This forms a paste which is mixed thoroughly. During the process of grinding and mixing, the organic liquid gradually volatilizes and has usually evaporated completely after 10 to 15 minutes. For quantities much larger than ~20 g, mechanical mixing using a ball mill is usually adopted. For the subsequent reaction at high temperatures, it is necessary to choose a suitable container material which is chemically inert to the reactants under the heating conditions used. The noble metals , platinum and gold , are usually suitable. Containers may be crucibles or boats made from foil. For low-temperature reactions, other metals such as nickel (below 600–700 °C) can be used. The heating programme to be used depends very much on the form and reactivity of the reactants. In choosing the temperature and atmosphere, the nature of the reactant chemicals is considered in detail, and a suitable furnace is used for the heat treatment. Pelleting of samples is preferred prior to heating, since it increases the area of contact between the grains. The product materials are analyzed using various characterization techniques such as X-ray diffraction (XRD), scanning electron microscopy (SEM), transmission electron microscopy (TEM), etc.
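The weighing step described above amounts to simple stoichiometric arithmetic. The sketch below works out reactant masses for a hypothetical target, BaTiO3 prepared from BaCO3 and TiO2 — a reaction chosen here purely as an illustration; it is not mentioned in the text, and the atomic masses are standard values quoted from general knowledge.

```python
# Stoichiometric masses for: BaCO3 + TiO2 -> BaTiO3 + CO2  (illustrative example)
ATOMIC_MASS = {"Ba": 137.327, "Ti": 47.867, "O": 15.999, "C": 12.011}  # g/mol

def molar_mass(formula: dict) -> float:
    """Molar mass (g/mol) from an {element: count} composition."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

m_BaCO3 = molar_mass({"Ba": 1, "C": 1, "O": 3})
m_TiO2 = molar_mass({"Ti": 1, "O": 2})
m_BaTiO3 = molar_mass({"Ba": 1, "Ti": 1, "O": 3})

target_g = 10.0                 # desired amount of product, g (assumed)
moles = target_g / m_BaTiO3     # 1:1:1 stoichiometry in this reaction
print(f"BaCO3 needed: {moles * m_BaCO3:.3f} g")
print(f"TiO2  needed: {moles * m_TiO2:.3f} g")
```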
https://en.wikipedia.org/wiki/Solid-state_reaction_route
Solid State Ionics is a monthly peer-reviewed scientific journal published by Elsevier . Established in 1980, it covers all aspects of diffusion and mass transport in solids, with a particular focus on defects in solids, intercalation , corrosion , oxidation , sintering , and ion transport . The journal has an irregular publication frequency, having one or two releases per month, with additional conference proceedings; each release may be one issue, multiple issues or one volume. The editor-in-chief is Joachim Maier ( Max Planck Institute for Solid State Research ). The journal is abstracted and indexed in: According to the Journal Citation Reports , the journal has a 2020 impact factor of 3.785. [ 1 ]
https://en.wikipedia.org/wiki/Solid_State_Ionics
A solid bowl centrifuge is a type of centrifuge that uses the principle of sedimentation . A centrifuge is used to separate a mixture of two substances with different densities by using the centrifugal force resulting from continuous rotation. It is normally used to separate solid-liquid, liquid-liquid, and solid-solid mixtures. Solid bowl centrifuges are widely used in various industrial applications , such as wastewater treatment , coal manufacturing, and polymer manufacturing. One advantage of solid bowl centrifuges for industrial uses is the simplicity of installation compared to other types of centrifuge. There are three design types of solid bowl centrifuge: conical, cylindrical, and conical-cylindrical. During the industrial process of wastewater treatment, a huge quantity of sludge is produced, which needs to be disposed of or treated further. One of the available treatment methods is thickening the sludge using solid bowl centrifuges. While the incoming sludge has a concentration of around 0.5–1% dry solids, after the thickening process it contains up to 5–6% dry solids. This process reduces the volume of waste activated sludge by more than 80% as well as reducing the amount of sludge sent for digestion by 30–40%. Furthermore, having less sludge to dispose of also lowers the cost of polymer and improves the dewatering characteristics. Coal slurry, which contains around 6% solids by weight and nearly 60% of 10 mm material, is thickened using solid bowl centrifuges. By using this centrifuge technique, the concentration of the end product can reach 55–60% solids without extra chemicals added. Additionally, solid bowl centrifuges are also used to remove water from the waste slurry obtained from coal-cleaning facilities. Solid bowl centrifuges are used in polymer manufacture to recover acetates from polymer slurry. The conical beach is used for internal washing to improve acetate recovery. Previously, the centrifuge consisted of only a single-lead conveyor; this was later improved to a double-lead conveyor in order to increase the capacity, and an extra double-lead conveyor with reduced pitch is used to improve the acetate yield. Solid bowl centrifuge designs are divided into three different types based on the bowl shape: conical, cylindrical, and cylindrical-conical. The choice of centrifuge design in a particular industry is determined by the characteristics of the slurry and solids. [ 5 ] Among the three designs, the conical bowl was initially the most preferred design due to its maximum water removal and its excellent classifying ability. However, this design is less effective in achieving a high centrate quality, which makes it a poor clarifier. Unlike the conical bowl design, the cylindrical bowl design does not allow maximum water removal, and thus mainly produces wet cakes. In addition, it is also a less effective classifier. However, the cylindrical bowl design is more effective in achieving a high centrate quality than the conical bowl design, which makes it the better clarifying device of the two. The conical-cylindrical bowl design was developed from the conical and cylindrical bowl designs. This design is essentially a combination of the best individual characteristics of the previous two designs, and is therefore a more advanced design. It allows efficient dewatering, effective clarification, and fairly good classification within one unit.
It has the ability to change and control the balance between water removal and centrate quality by adjusting its pool length, depending on the required product. Thus, the conical-cylindrical bowl design is the most widely used in industry today. A typical conical-cylindrical solid bowl centrifuge contains a rotating bowl unit connected to a conveyor by a gear system. The gear system allows the rotating bowl and the conveyor to rotate at different speeds but in the same direction. Commonly, the conveyor operates at a speed between 1900 and 2400 rotations per minute, while the bowl unit operates about 100 rotations per minute faster. [ 6 ] The performance of the different bowl shapes is usually compared in tabular form. Based on the exit streams of the solid cake and liquid centrate, there are two types of solid bowl centrifuge design: [ 6 ] in the first, the solid cake and the liquid centrate leave the centrifuge bowl at the same end; in the second, the solid cake and the liquid centrate leave the bowl at opposite ends, with the conveyor pushing the sludge towards the end streams while the supernatant liquid is allowed to exit over the weirs. With the help of a helical screw conveyor, solid bowl centrifuges separate two substances with different densities by the centrifugal force formed under fast rotation. Feed slurry enters the conveyor and is delivered into the rotating bowl through discharge ports. There is a slight speed difference between the rotation of the conveyor and the bowl, causing the solids to be conveyed from the stationary zone, where the wastewater is introduced, to the bowl wall. Under centrifugal force, the collected solids move along the bowl wall, out of the pool and up the dewatering beach located at the tapered end of the bowl. Finally, the separated solids go to the solids discharge while the liquid goes to the liquid discharge. The clarified liquid flows through the conveyor in the opposite direction through adjustable overflow ports. The separation performance is commonly quantified as {\displaystyle {\text{percentage of solid recovery}}={\frac {{\text{solids in feed}}-{\text{solids in effluent}}}{\text{solids in feed}}}\times 100\%} Reported data compare the percentage of solid separation achieved for different solids, and the cake solids formed, with and without added polymer. [ 12 ] The post-treatment of the waste stream produced by the solid bowl centrifuge varies depending on the industrial application. Since different industries feed different materials to the centrifuge system, the waste streams differ as well, and thus require different post-treatments. Below are some examples of waste stream production and the necessary post-treatment in various industrial applications. In water treatment, the wanted product is the clean water, while the waste is the sludge containing dissolved organic and inorganic materials, fibrous matter, and extracellular polymer (ECP). The sludge is commonly discarded down the sewer or sent to landfill. Occasionally, the sludge is used in the production of bricks and concrete, in agriculture as a soil additive, or for land reclamation. In this application, the solid bowl centrifuge is used as the final step of water treatment sludge handling before disposal, in order to reduce landfill charges and transport costs.
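A minimal numeric check of the solid-recovery expression above, using made-up feed and effluent solids loads; the values are illustrative only and are not taken from the cited table.

```python
def percent_solid_recovery(solids_in_feed: float, solids_in_effluent: float) -> float:
    """Percentage of solids recovered = (feed - effluent) / feed * 100."""
    return (solids_in_feed - solids_in_effluent) / solids_in_feed * 100.0

# Illustrative values, e.g. kg of dry solids per hour in the feed and centrate.
feed = 120.0
effluent = 6.0
print(f"solid recovery = {percent_solid_recovery(feed, effluent):.1f} %")   # 95.0 %
```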
In coal manufacturing, solid bowl centrifuges are used alongside plate-and-frame filter presses to dewater the coal waste slurry before it is disposed of. The slurry feed is obtained from the underflow of a working bituminous coal-cleaning thickener. The waste stream is usually disposed of into slurry cells or, if available, a disused underground mine site, or more commonly in slurry impoundments. In the polymer industry, acetate is recovered during the manufacture of polymers. In this case, the wanted product is actually the polymer; however, the acetate is not a waste either, since it is recovered. While the polymer solids are discharged through the solid exit port and further processed, the acetate is recovered through the liquid exit port and then separated from the washing liquid to recover pure acetate. Various aspects of current solid bowl centrifuges can be improved in order to increase their performance and reliability. To allow more control and simpler operation, support systems such as feed equipment, chemical dosing facilities, and better transfer pumps have been designed and added. Moreover, the operating parameters can be adjusted to optimize the sludge dewatering process. The machine relies on rotational force to throw the solids outwards so that the sludge collects against the outer wall surfaces; a metal screen or other suitable filtering material can also be added to achieve better solids dryness. [ 15 ]
https://en.wikipedia.org/wiki/Solid_bowl_centrifuge
Solid cell nests , often abbreviated as SCN , also known as solid cell rests , are specific groups of cells found in the thyroid gland of babies. [ 1 ] Typically they are a fraction of a millimeter in size but can rarely become larger. [ 1 ] They are considered to be the remains of the ultimobranchial body that exists in early development. [ 1 ] Solid cell nests were discovered in 1907 by pathologist Sophia Getzowa , as documented in her paper titled " Über die Glandula parathyreoidea, intrathyreoideale Zellhaufen derselben und Reste des postbranchialen Körpers ". [ 2 ]
https://en.wikipedia.org/wiki/Solid_cell_nests
In computing , solid compression is a method for data compression of multiple files, wherein all the uncompressed files are concatenated and treated as a single data block. Such an archive is called a solid archive. It is used natively in the 7z [ 1 ] and RAR [ 2 ] formats, as well as indirectly in tar -based formats such as .tar. gz and .tar. bz2 . By contrast, the ZIP format is not solid because it stores separately compressed files (though solid compression can be emulated for small archives by combining the files into an uncompressed archive file and then compressing that archive file inside a second compressed ZIP file). [ 3 ] [ 4 ] Compressed file formats often feature both compression (storing the data in a small space) and archiving (storing multiple files and metadata in a single file). One can combine these in two natural ways: compress each file separately and then archive the compressed files, or archive the files first and then compress the resulting archive. The order matters (these operations do not commute ), and the latter is solid compression. In Unix, compression and archiving are traditionally separate operations, which makes this distinction easy to see. As a rough illustration, consider three files that each have a common part with the same information, a unique part with information not in the other files, and an "air" part with low-entropy and accordingly well-compressible information: a non-solid archive compresses each file on its own, whereas a solid archive compresses their concatenation, so the repeated common part effectively needs to be stored only once. Solid compression allows for much better compression rates when all the files are similar, which is often the case if they are of the same file format . It can also be efficient when archiving a large number of small files. On the other hand, getting a single file out of a solid archive requires processing all the files before it, so modifying solid archives can be slow and inconvenient. In newer formats such as 7-zip, there is a solid block size option that allows the concatenated data block to be split into individually compressed smaller blocks, so that only a limited amount of data in the block must be processed in order to extract one file. Parameters control the maximum solid block window size, the number of files in a block, and whether blocks are separated by file extension. [ 5 ] Additionally, if the archive becomes even slightly damaged, some of the data (sometimes even all data) after the damaged part in the block can be unusable (depending on the compression and archiving format), whereas in a non-solid archive format usually only one file is unusable and the subsequent files can usually still be extracted.
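To make the solid vs. non-solid distinction concrete, the sketch below compresses three similar files separately (ZIP-style, per-file deflate) and as one concatenated stream (tar followed by gzip, i.e. solid). The file contents and sizes are arbitrary test data; the exact byte counts will vary, but the solid variant can exploit the redundancy shared across files while the per-file variant cannot.

```python
import gzip, io, os, tarfile, zipfile

# Three similar "files": a shared, poorly-compressible block plus a small
# unique tail each. The shared block is what solid compression exploits.
common = os.urandom(20_000)    # ~20 kB of incompressible shared content (assumed test data)
files = {f"file_{i}.bin": common + f"unique part {i}".encode() for i in range(3)}

# Non-solid: each file compressed independently (as in a ZIP archive).
zip_buf = io.BytesIO()
with zipfile.ZipFile(zip_buf, "w", zipfile.ZIP_DEFLATED) as zf:
    for name, data in files.items():
        zf.writestr(name, data)

# Solid: archive first (tar), then compress the whole concatenation (gzip).
tar_buf = io.BytesIO()
with tarfile.open(fileobj=tar_buf, mode="w") as tf:
    for name, data in files.items():
        info = tarfile.TarInfo(name)
        info.size = len(data)
        tf.addfile(info, io.BytesIO(data))
solid_size = len(gzip.compress(tar_buf.getvalue()))

print("non-solid (zip)   :", len(zip_buf.getvalue()), "bytes")  # about three copies of the block
print("solid (tar + gzip):", solid_size, "bytes")               # noticeably smaller, near one copy
```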
https://en.wikipedia.org/wiki/Solid_compression
Solid fat index (SFI) is a measure of the percentage of fat in crystalline (solid) phase to total fat (the remainder being in liquid phase) across a temperature gradient. The SFI of a fat is measured using a dilatometer that measures the expansion of a fat as it is heated; density measurements are taken at a series of standardized temperature check points. [ 1 ] [ 2 ] The resulting SFI/temperature curve is related to melting qualities and flavor. For example, butter has a sharp SFI curve, indicating that it melts quickly and that it releases flavor quickly. [ 3 ] [ 4 ]
https://en.wikipedia.org/wiki/Solid_fat_index
A solid ground floor consists of a layer of concrete , which in the case of a domestic building will be the surface layer brought up to ground floor level with hardcore filling under it. The advantage of a solid ground floor is the elimination of dry rot and other problems normally associated with hollow joisted floors. The disadvantage is that the floor is less resilient to walk upon and may be more tiring for the user. Solid ground floors are usually found in kitchens but will be necessary for other rooms where wood blocks and other similar finishes are required. The concrete floor may be topped with a 25 mm thick cement and sand screed trowelled to a smooth finish. The usual mix is 1:3, and a colouring agent may be added to the mix to obtain a more attractive finish. The mix should be as dry as possible, and the sand should be coarsely graded and clean, to avoid the shrinkage and cracking which might occur with a wet mix. The floor finish is carefully cured after laying. Granolithic is composed of cement and fine aggregate mortar , the aggregate being granite chippings, which give the finish its hard-wearing quality. It is laid as a screed, trowelled or floated to an even and fine finish. Granolithic paving is suitable in areas which are to receive hard wear, although its appearance would not normally be suitable for internal domestic work. Polyvinyl chloride (PVC) tiles are another commonly used floor finish. After the floor has been laid with screed, these tiles are fixed with adhesive. They are attractive, smooth and cool, and damage can be repaired very easily as they are made in small squares, usually 150 mm to 225 mm across. However, poor workmanship or trapped dust can cause this type of floor finish to fail by lifting. [ 1 ] Terrazzo consists of a coloured binder or matrix and marble chips mixed to specified proportions. The finish is hard-wearing, attractive and resistant to chemical attack. Metal (or ebonite ) strips divide the terrazzo into bays to accommodate shrinkage and expansion.
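As a small worked example of the 25 mm, 1:3 cement-to-sand screed described above, the sketch below estimates material volumes for an assumed room. The room area and the dry-volume (bulking) allowance are assumptions chosen only for illustration, not figures from the text.

```python
# Estimate cement and sand for a 25 mm thick, 1:3 (cement:sand) screed.
room_area_m2 = 12.0       # assumed floor area, m^2
thickness_m = 0.025       # 25 mm screed, as described in the text
bulking_factor = 1.3      # assumed allowance for compaction/voids in the dry materials

wet_volume = room_area_m2 * thickness_m
dry_volume = wet_volume * bulking_factor
cement_volume = dry_volume * 1 / 4   # 1 part cement in a 1:3 mix
sand_volume = dry_volume * 3 / 4     # 3 parts sand

print(f"screed volume : {wet_volume:.3f} m^3")
print(f"cement needed : {cement_volume:.3f} m^3")
print(f"sand needed   : {sand_volume:.3f} m^3")
```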
https://en.wikipedia.org/wiki/Solid_ground_floor
Solid hydrogen is the solid state of the element hydrogen . At standard pressure , this is achieved by decreasing the temperature below hydrogen's melting point of 14.01 K (−259.14 °C; −434.45 °F). It was collected for the first time by James Dewar in 1899 and published with the title "Sur la solidification de l'hydrogène" (English: On the freezing of hydrogen) in the Annales de Chimie et de Physique , 7th series, vol. 18, Oct. 1899. [ 1 ] [ 2 ] Solid hydrogen has a density of 0.086 g/cm 3 making it one of the lowest-density solids. At low temperatures and at pressures up to around 400 GPa (3,900,000 atm), hydrogen forms a series of solid phases formed from discrete H 2 molecules. Phase I occurs at low temperatures and pressures, and consists of a hexagonal close-packed array of freely rotating H 2 molecules. Upon increasing the pressure at low temperature, a transition to Phase II occurs at up to 110 GPa. [ 3 ] Phase II is a broken-symmetry structure in which the H 2 molecules are no longer able to rotate freely. [ 4 ] If the pressure is further increased at low temperature, a Phase III is encountered at about 160 GPa. Upon increasing the temperature, a transition to a Phase IV occurs at a temperature of a few hundred kelvin at a range of pressures above 220 GPa. [ 5 ] [ 6 ] Identifying the atomic structures of the different phases of molecular solid hydrogen is extremely challenging, because hydrogen atoms interact with X-rays very weakly and only small samples of solid hydrogen can be achieved in diamond anvil cells , so that X-ray diffraction provides very limited information about the structures. Nevertheless, phase transitions can be detected by looking for abrupt changes in the Raman spectra of samples. Furthermore, atomic structures can be inferred from a combination of experimental Raman spectra and first-principles modelling. [ 7 ] Density functional theory calculations have been used to search for candidate atomic structures for each phase. These candidate structures have low free energies and Raman spectra in agreement with the experimental spectra. [ 8 ] [ 9 ] [ 10 ] Quantum Monte Carlo methods together with a first-principles treatment of anharmonic vibrational effects have then been used to obtain the relative Gibbs free energies of these structures and hence to obtain a theoretical pressure-temperature phase diagram that is in reasonable quantitative agreement with experiment. [ 11 ] On this basis, Phase II is believed to be a molecular structure of P 2 1 / c symmetry; Phase III is (or is similar to) a structure of C 2/ c symmetry consisting of flat layers of molecules in a distorted hexagonal arrangement; and Phase IV is (or is similar to) a structure of Pc symmetry, consisting of alternate layers of strongly bonded molecules and weakly bonded graphene-like sheets.
https://en.wikipedia.org/wiki/Solid_hydrogen
Solid light , or hard light , is a hypothetical material consisting of light in a solidified state . It primarily appears in science fiction . It has been theorized that solid light could exist. [ 1 ] [ 2 ] Some experiments claim to have created solid photonic matter or molecules by inducing strong interaction between photons. [ 3 ] [ 4 ] [ 5 ] Potential applications of solid light could include logic gates for quantum computers [ 4 ] and room-temperature superconductor development. [ 3 ] A team of Italian scientists published in Nature Journal in March 2025 that they have found a way to make light act like a "supersolid". [ 6 ] Solid light appears in several video game franchises , including Halo , Portal , and Overwatch . In Portal 2 , sunlight is used to create "hard light bridges", which act as solid semi-transparent walkways or barriers. [ 7 ] In Overwatch , the fictional Vishkar Corporation uses solid light as a construction material. [ 7 ] In Halo , solid light is the foundation of Forerunner weapons and many of their utilitarian devices like retractable bridges. [ citation needed ] Solid holograms appear many times in the TV show Star Trek . [ 7 ] [ 8 ] In Red Dwarf , the character Rimmer is a hologram who obtains a "hard light drive", allowing him to become tangible. [ 7 ] In the animated show Steven Universe , several main characters are aliens who have physical forms made out of light, with a gemstone as the only material part of their body. [ 9 ] [ 10 ] In DC Comics' Green Lantern , the various Lantern Corps use solid light constructs. [ 7 ] In Marvel Comics properties, "hard light" manifests in many forms. For example, Ms. Marvel utilizes photon blasts to generate concussive force. [ 11 ] The X-Men 's Danger Room also utilizes hard light constructs in its simulations. Photons , the particles that make up forms of electromagnetic radiation like light, do not normally interact with one another, but may be made to interact in a nonlinear medium. [ 12 ] The MIT-Harvard Center for Ultracold Atoms conducted experiments on photon interaction in the 2010s. Single photons were fired from weak lasers into a dense cloud of rubidium cooled to near absolute zero . The speed of light in the cloud was about 100,000 times slower than in a vacuum. Within the cloud, photons lost energy and gained mass. The conditions allowed photons to attract and bind to other photons, and exit the cloud as molecules. Reportedly, photon pairs were observed in 2013, and triplets in 2018. [ 4 ] [ 5 ]
https://en.wikipedia.org/wiki/Solid_light
Solid mechanics (also known as mechanics of solids ) is the branch of continuum mechanics that studies the behavior of solid materials, especially their motion and deformation under the action of forces , temperature changes, phase changes, and other external or internal agents. Solid mechanics is fundamental for civil , aerospace , nuclear , biomedical and mechanical engineering , for geology , and for many branches of physics and chemistry such as materials science . [ 1 ] It has specific applications in many other areas, such as understanding the anatomy of living beings and the design of dental prostheses and surgical implants . One of the most common practical applications of solid mechanics is the Euler–Bernoulli beam equation . Solid mechanics extensively uses tensors to describe stresses, strains, and the relationship between them. Solid mechanics is a vast subject because of the wide range of solid materials available, such as steel, wood, concrete, biological materials, textiles, geological materials, and plastics. A solid is a material that can support a substantial amount of shearing force over a given time scale during a natural or industrial process or action. This is what distinguishes solids from fluids : both support normal forces , which are forces directed perpendicular to the material plane across which they act (the normal force per unit area of that plane is the normal stress ), but only solids can sustain shearing forces , which act parallel rather than perpendicular to the material plane; the shearing force per unit area is called shear stress . Therefore, solid mechanics examines the shear stress, deformation and the failure of solid materials and structures. Within continuum mechanics, solid mechanics occupies a central place, and the field of rheology presents an overlap between solid and fluid mechanics . A material has a rest shape, and its shape departs from the rest shape due to stress. The amount of departure from the rest shape is called deformation , and the proportion of deformation to original size is called strain. If the applied stress is sufficiently low (or the imposed strain is small enough), almost all solid materials behave in such a way that the strain is directly proportional to the stress; the coefficient of proportionality is called the modulus of elasticity . This region of deformation is known as the linearly elastic region. It is most common for analysts in solid mechanics to use linear material models, due to ease of computation. However, real materials often exhibit non-linear behavior. As new materials are used and old ones are pushed to their limits, non-linear material models are becoming more common. Basic models that describe how a solid responds to an applied stress include elasticity, viscoelasticity, plasticity and related descriptions; the simplest, linearly elastic case is illustrated in the sketch below.
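The linear elastic region described above is Hooke's law, stress = E × strain. The sketch below applies it to an assumed steel bar under tension; the modulus, load and geometry are illustrative values, not data from the text.

```python
# Linear elasticity: strain = stress / E, elongation = strain * length.
E_steel = 200e9          # assumed Young's modulus for steel, Pa
force = 10_000.0         # assumed axial load, N
area = 1e-4              # assumed cross-section, m^2 (1 cm^2)
length = 2.0             # assumed bar length, m

stress = force / area                 # Pa
strain = stress / E_steel             # dimensionless (valid only in the linear region)
elongation = strain * length          # m

print(f"stress     = {stress/1e6:.1f} MPa")
print(f"strain     = {strain:.2e}")
print(f"elongation = {elongation*1e3:.3f} mm")
```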
https://en.wikipedia.org/wiki/Solid_mechanics
In geometry , a solid of revolution is a solid figure obtained by rotating a plane figure around some straight line (the axis of revolution ), which may not intersect the generatrix (except at its boundary). The surface created by this revolution and which bounds the solid is the surface of revolution . Assuming that the curve does not cross the axis, the solid's volume is equal to the length of the circle described by the figure's centroid multiplied by the figure's area ( Pappus's second centroid theorem ). A representative disc is a three- dimensional volume element of a solid of revolution. The element is created by rotating a line segment (of length w ) around some axis (located r units away), so that a cylindrical volume of π r 2 w units is enclosed. Two common methods for finding the volume of a solid of revolution are the disc method and the shell method of integration . To apply these methods, it is easiest to draw the graph in question; identify the area that is to be revolved about the axis of revolution; determine the volume of either a disc-shaped slice of the solid, with thickness δx , or a cylindrical shell of width δx ; and then find the limiting sum of these volumes as δx approaches 0, a value which may be found by evaluating a suitable integral. A more rigorous justification can be given by attempting to evaluate a triple integral in cylindrical coordinates with two different orders of integration. The disc method is used when the slice that was drawn is perpendicular to the axis of revolution; i.e. when integrating parallel to the axis of revolution. The volume of the solid formed by rotating the area between the curves of f ( y ) and g ( y ) and the lines y = a and y = b about the y -axis is given by V = π ∫ a b | f ( y ) 2 − g ( y ) 2 | d y . {\displaystyle V=\pi \int _{a}^{b}\left|f(y)^{2}-g(y)^{2}\right|\,dy\,.} If g ( y ) = 0 (e.g. revolving an area between the curve and the y -axis), this reduces to: V = π ∫ a b f ( y ) 2 d y . {\displaystyle V=\pi \int _{a}^{b}f(y)^{2}\,dy\,.} The method can be visualized by considering a thin horizontal rectangle at y between f ( y ) on top and g ( y ) on the bottom, and revolving it about the y -axis; it forms a ring (or disc in the case that g ( y ) = 0 ), with outer radius f ( y ) and inner radius g ( y ) . The area of a ring is π( R 2 − r 2 ) , where R is the outer radius (in this case f ( y ) ), and r is the inner radius (in this case g ( y ) ). The volume of each infinitesimal disc is therefore π f ( y ) 2 dy . The limit of the Riemann sum of the volumes of the discs between a and b becomes integral (1). Assuming the applicability of Fubini's theorem and the multivariate change of variables formula, the disk method may be derived in a straightforward manner by (denoting the solid as D): V = ∭ D d V = ∫ a b ∫ g ( z ) f ( z ) ∫ 0 2 π r d θ d r d z = 2 π ∫ a b ∫ g ( z ) f ( z ) r d r d z = 2 π ∫ a b 1 2 r 2 ‖ g ( z ) f ( z ) d z = π ∫ a b ( f ( z ) 2 − g ( z ) 2 ) d z {\displaystyle V=\iiint _{D}dV=\int _{a}^{b}\int _{g(z)}^{f(z)}\int _{0}^{2\pi }r\,d\theta \,dr\,dz=2\pi \int _{a}^{b}\int _{g(z)}^{f(z)}r\,dr\,dz=2\pi \int _{a}^{b}{\frac {1}{2}}r^{2}\Vert _{g(z)}^{f(z)}\,dz=\pi \int _{a}^{b}(f(z)^{2}-g(z)^{2})\,dz} The shell method (sometimes referred to as the "cylinder method") is used when the slice that was drawn is parallel to the axis of revolution; i.e. when integrating perpendicular to the axis of revolution. 
The volume of the solid formed by rotating the area between the curves of f ( x ) and g ( x ) and the lines x = a and x = b about the y -axis is given by V = 2 π ∫ a b x | f ( x ) − g ( x ) | d x . {\displaystyle V=2\pi \int _{a}^{b}x|f(x)-g(x)|\,dx\,.} If g ( x ) = 0 (e.g. revolving an area between curve and x -axis), this reduces to: V = 2 π ∫ a b x | f ( x ) | d x . {\displaystyle V=2\pi \int _{a}^{b}x|f(x)|\,dx\,.} The method can be visualized by considering a thin vertical rectangle at x with height f ( x ) − g ( x ) , and revolving it about the y -axis; it forms a cylindrical shell. The lateral surface area of a cylinder is 2π rh , where r is the radius (in this case x ), and h is the height (in this case f ( x ) − g ( x ) ). Summing up all of the surface areas along the interval gives the total volume. This method may be derived with the same triple integral, this time with a different order of integration: V = ∭ D d V = ∫ a b ∫ g ( r ) f ( r ) ∫ 0 2 π r d θ d z d r = 2 π ∫ a b ∫ g ( r ) f ( r ) r d z d r = 2 π ∫ a b r ( f ( r ) − g ( r ) ) d r . {\displaystyle V=\iiint _{D}dV=\int _{a}^{b}\int _{g(r)}^{f(r)}\int _{0}^{2\pi }r\,d\theta \,dz\,dr=2\pi \int _{a}^{b}\int _{g(r)}^{f(r)}r\,dz\,dr=2\pi \int _{a}^{b}r(f(r)-g(r))\,dr.} When a curve is defined by its parametric form ( x ( t ), y ( t )) in some interval [ a , b ] , the volumes of the solids generated by revolving the curve around the x -axis or the y -axis are given by [ 1 ] V x = ∫ a b π y 2 d x d t d t , V y = ∫ a b π x 2 d y d t d t . {\displaystyle {\begin{aligned}V_{x}&=\int _{a}^{b}\pi y^{2}\,{\frac {dx}{dt}}\,dt\,,\\V_{y}&=\int _{a}^{b}\pi x^{2}\,{\frac {dy}{dt}}\,dt\,.\end{aligned}}} Under the same circumstances the areas of the surfaces of the solids generated by revolving the curve around the x -axis or the y -axis are given by [ 2 ] A x = ∫ a b 2 π y ( d x d t ) 2 + ( d y d t ) 2 d t , A y = ∫ a b 2 π x ( d x d t ) 2 + ( d y d t ) 2 d t . {\displaystyle {\begin{aligned}A_{x}&=\int _{a}^{b}2\pi y\,{\sqrt {\left({\frac {dx}{dt}}\right)^{2}+\left({\frac {dy}{dt}}\right)^{2}}}\,dt\,,\\A_{y}&=\int _{a}^{b}2\pi x\,{\sqrt {\left({\frac {dx}{dt}}\right)^{2}+\left({\frac {dy}{dt}}\right)^{2}}}\,dt\,.\end{aligned}}} This can also be derived from multivariable integration. If a plane curve is given by ⟨ x ( t ) , y ( t ) ⟩ {\displaystyle \langle x(t),y(t)\rangle } then its corresponding surface of revolution when revolved around the x-axis has Cartesian coordinates given by r ( t , θ ) = ⟨ y ( t ) cos ⁡ ( θ ) , y ( t ) sin ⁡ ( θ ) , x ( t ) ⟩ {\displaystyle \mathbf {r} (t,\theta )=\langle y(t)\cos(\theta ),y(t)\sin(\theta ),x(t)\rangle } with 0 ≤ θ ≤ 2 π {\displaystyle 0\leq \theta \leq 2\pi } . Then the surface area is given by the surface integral A x = ∬ S d S = ∬ [ a , b ] × [ 0 , 2 π ] ‖ ∂ r ∂ t × ∂ r ∂ θ ‖ d θ d t = ∫ a b ∫ 0 2 π ‖ ∂ r ∂ t × ∂ r ∂ θ ‖ d θ d t . 
{\displaystyle A_{x}=\iint _{S}dS=\iint _{[a,b]\times [0,2\pi ]}\left\|{\frac {\partial \mathbf {r} }{\partial t}}\times {\frac {\partial \mathbf {r} }{\partial \theta }}\right\|\ d\theta \ dt=\int _{a}^{b}\int _{0}^{2\pi }\left\|{\frac {\partial \mathbf {r} }{\partial t}}\times {\frac {\partial \mathbf {r} }{\partial \theta }}\right\|\ d\theta \ dt.} Computing the partial derivatives yields ∂ r ∂ t = ⟨ d y d t cos ⁡ ( θ ) , d y d t sin ⁡ ( θ ) , d x d t ⟩ , {\displaystyle {\frac {\partial \mathbf {r} }{\partial t}}=\left\langle {\frac {dy}{dt}}\cos(\theta ),{\frac {dy}{dt}}\sin(\theta ),{\frac {dx}{dt}}\right\rangle ,} ∂ r ∂ θ = ⟨ − y sin ⁡ ( θ ) , y cos ⁡ ( θ ) , 0 ⟩ {\displaystyle {\frac {\partial \mathbf {r} }{\partial \theta }}=\left\langle -y\sin(\theta ),y\cos(\theta ),0\right\rangle } and computing the cross product yields ∂ r ∂ t × ∂ r ∂ θ = ⟨ y cos ⁡ ( θ ) d x d t , y sin ⁡ ( θ ) d x d t , y d y d t ⟩ = y ⟨ cos ⁡ ( θ ) d x d t , sin ⁡ ( θ ) d x d t , d y d t ⟩ {\displaystyle {\frac {\partial \mathbf {r} }{\partial t}}\times {\frac {\partial \mathbf {r} }{\partial \theta }}=\left\langle y\cos(\theta ){\frac {dx}{dt}},y\sin(\theta ){\frac {dx}{dt}},y{\frac {dy}{dt}}\right\rangle =y\left\langle \cos(\theta ){\frac {dx}{dt}},\sin(\theta ){\frac {dx}{dt}},{\frac {dy}{dt}}\right\rangle } where the trigonometric identity sin 2 ⁡ ( θ ) + cos 2 ⁡ ( θ ) = 1 {\displaystyle \sin ^{2}(\theta )+\cos ^{2}(\theta )=1} was used. With this cross product, we get A x = ∫ a b ∫ 0 2 π ‖ ∂ r ∂ t × ∂ r ∂ θ ‖ d θ d t = ∫ a b ∫ 0 2 π ‖ y ⟨ y cos ⁡ ( θ ) d x d t , y sin ⁡ ( θ ) d x d t , y d y d t ⟩ ‖ d θ d t = ∫ a b ∫ 0 2 π y cos 2 ⁡ ( θ ) ( d x d t ) 2 + sin 2 ⁡ ( θ ) ( d x d t ) 2 + ( d y d t ) 2 d θ d t = ∫ a b ∫ 0 2 π y ( d x d t ) 2 + ( d y d t ) 2 d θ d t = ∫ a b 2 π y ( d x d t ) 2 + ( d y d t ) 2 d t {\displaystyle {\begin{aligned}A_{x}&=\int _{a}^{b}\int _{0}^{2\pi }\left\|{\frac {\partial \mathbf {r} }{\partial t}}\times {\frac {\partial \mathbf {r} }{\partial \theta }}\right\|\ d\theta \ dt\\[1ex]&=\int _{a}^{b}\int _{0}^{2\pi }\left\|y\left\langle y\cos(\theta ){\frac {dx}{dt}},y\sin(\theta ){\frac {dx}{dt}},y{\frac {dy}{dt}}\right\rangle \right\|\ d\theta \ dt\\[1ex]&=\int _{a}^{b}\int _{0}^{2\pi }y{\sqrt {\cos ^{2}(\theta )\left({\frac {dx}{dt}}\right)^{2}+\sin ^{2}(\theta )\left({\frac {dx}{dt}}\right)^{2}+\left({\frac {dy}{dt}}\right)^{2}}}\ d\theta \ dt\\[1ex]&=\int _{a}^{b}\int _{0}^{2\pi }y{\sqrt {\left({\frac {dx}{dt}}\right)^{2}+\left({\frac {dy}{dt}}\right)^{2}}}\ d\theta \ dt\\[1ex]&=\int _{a}^{b}2\pi y{\sqrt {\left({\frac {dx}{dt}}\right)^{2}+\left({\frac {dy}{dt}}\right)^{2}}}\ dt\end{aligned}}} where the same trigonometric identity was used again. The derivation for a surface obtained by revolving around the y-axis is similar. For a polar curve r = f ( θ ) {\displaystyle r=f(\theta )} where α ≤ θ ≤ β {\displaystyle \alpha \leq \theta \leq \beta } and f ( θ ) ≥ 0 {\displaystyle f(\theta )\geq 0} , the volumes of the solids generated by revolving the curve around the x-axis or y-axis are V x = ∫ α β ( π r 2 sin 2 ⁡ θ cos ⁡ θ d r d θ − π r 3 sin 3 ⁡ θ ) d θ , V y = ∫ α β ( π r 2 sin ⁡ θ cos 2 ⁡ θ d r d θ + π r 3 cos 3 ⁡ θ ) d θ . 
{\displaystyle {\begin{aligned}V_{x}&=\int _{\alpha }^{\beta }\left(\pi r^{2}\sin ^{2}{\theta }\cos {\theta }\,{\frac {dr}{d\theta }}-\pi r^{3}\sin ^{3}{\theta }\right)d\theta \,,\\V_{y}&=\int _{\alpha }^{\beta }\left(\pi r^{2}\sin {\theta }\cos ^{2}{\theta }\,{\frac {dr}{d\theta }}+\pi r^{3}\cos ^{3}{\theta }\right)d\theta \,.\end{aligned}}} The areas of the surfaces of the solids generated by revolving the curve around the x -axis or the y -axis are given A x = ∫ α β 2 π r sin ⁡ θ r 2 + ( d r d θ ) 2 d θ , A y = ∫ α β 2 π r cos ⁡ θ r 2 + ( d r d θ ) 2 d θ , {\displaystyle {\begin{aligned}A_{x}&=\int _{\alpha }^{\beta }2\pi r\sin {\theta }\,{\sqrt {r^{2}+\left({\frac {dr}{d\theta }}\right)^{2}}}\,d\theta \,,\\A_{y}&=\int _{\alpha }^{\beta }2\pi r\cos {\theta }\,{\sqrt {r^{2}+\left({\frac {dr}{d\theta }}\right)^{2}}}\,d\theta \,,\end{aligned}}}
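A quick numerical cross-check of the disc and shell formulas above: both are applied to the solid obtained by revolving the region under f(x) = x² for 0 ≤ x ≤ 1 about the y-axis, whose exact volume is π/2. Plain midpoint sums are used so the sketch needs nothing beyond the standard math module; the example function and interval are chosen here for illustration only.

```python
import math

N = 100_000                      # number of midpoint sub-intervals

def midpoint_sum(g, a, b, n=N):
    """Midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

# Region under f(x) = x^2, 0 <= x <= 1, revolved about the y-axis.
# Shell method: V = 2*pi * integral of x * f(x) dx
v_shell = 2 * math.pi * midpoint_sum(lambda x: x * x**2, 0.0, 1.0)

# Disc (washer) method: slices perpendicular to the y-axis have outer radius 1
# and inner radius sqrt(y):  V = pi * integral of (1^2 - y) dy
v_disc = math.pi * midpoint_sum(lambda y: 1.0 - y, 0.0, 1.0)

print(f"shell method : {v_shell:.6f}")
print(f"disc method  : {v_disc:.6f}")
print(f"exact (pi/2) : {math.pi/2:.6f}")
```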
https://en.wikipedia.org/wiki/Solid_of_revolution
The principle of solid phase DNA sequencing was described in 1989 based on binding of biotinylated DNA to streptavidin -coated magnetic beads and elution of single DNA strands selectively using alkali . [ 1 ] The method allowed robotic applications suitable for clinical sequencing, but the magnetic handling has also found frequent use in many molecular applications, including sample handling for DNA diagnostics. [ 2 ] The use of solid phase methods for DNA handling is now frequently used as an integrated part of many of the next generation DNA sequencing methods, as well as numerous molecular diagnostics applications. [ citation needed ]
https://en.wikipedia.org/wiki/Solid_phase_sequencing
In mathematics, specifically in order theory and functional analysis , a subset S {\displaystyle S} of a vector lattice X {\displaystyle X} is said to be solid and is called an ideal if for all s ∈ S {\displaystyle s\in S} and x ∈ X , {\displaystyle x\in X,} if | x | ≤ | s | {\displaystyle |x|\leq |s|} then x ∈ S . {\displaystyle x\in S.} An ordered vector space whose order is Archimedean is said to be Archimedean ordered . [ 1 ] If S ⊆ X {\displaystyle S\subseteq X} then the ideal generated by S {\displaystyle S} is the smallest ideal in X {\displaystyle X} containing S . {\displaystyle S.} An ideal generated by a singleton set is called a principal ideal in X . {\displaystyle X.} The intersection of an arbitrary collection of ideals in X {\displaystyle X} is again an ideal and furthermore, X {\displaystyle X} is clearly an ideal of itself; thus every subset of X {\displaystyle X} is contained in a unique smallest ideal. In a locally convex vector lattice X , {\displaystyle X,} the polar of every solid neighborhood of the origin is a solid subset of the continuous dual space X ′ {\displaystyle X^{\prime }} ; moreover, the family of all solid equicontinuous subsets of X ′ {\displaystyle X^{\prime }} is a fundamental family of equicontinuous sets, the polars (in bidual X ′ ′ {\displaystyle X^{\prime \prime }} ) form a neighborhood base of the origin for the natural topology on X ′ ′ {\displaystyle X^{\prime \prime }} (that is, the topology of uniform convergence on equicontinuous subset of X ′ {\displaystyle X^{\prime }} ). [ 2 ]
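A small worked example may make the definition above concrete. The choice of space and ordering below is an illustrative assumption, not taken from the text.

```latex
% Example: take X = \mathbb{R}^2 with the componentwise order, and
S = \{ (a, 0) : a \in \mathbb{R} \}.
% If s = (a, 0) \in S and |x| \le |s|, then |x_1| \le |a| and |x_2| \le 0,
% hence x_2 = 0 and x = (x_1, 0) \in S.
% Thus S is solid; being also a linear subspace of X, it is an ideal of X.
```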
https://en.wikipedia.org/wiki/Solid_set
A solid solution, a term popularly used for metals, is a homogeneous mixture of two compounds in solid state and having a single crystal structure . [ 1 ] Many examples can be found in metallurgy , geology , and solid-state chemistry . The word "solution" is used to describe the intimate mixing of components at the atomic level and distinguishes these homogeneous materials from physical mixtures of components. Two terms are mainly associated with solid solutions – solvents and solutes, depending on the relative abundance of the atomic species. In general if two compounds are isostructural then a solid solution will exist between the end members (also known as parents). For example sodium chloride and potassium chloride have the same cubic crystal structure so it is possible to make a pure compound with any ratio of sodium to potassium (Na 1-x K x )Cl by dissolving that ratio of NaCl and KCl in water and then evaporating the solution. A member of this family is sold under the brand name Lo Salt which is (Na 0.33 K 0.66 )Cl, hence it contains 66% less sodium than normal table salt (NaCl). The pure minerals are called halite and sylvite ; a physical mixture of the two is referred to as sylvinite . Because minerals are natural materials they are prone to large variations in composition. In many cases specimens are members for a solid solution family and geologists find it more helpful to discuss the composition of the family than an individual specimen. Olivine is described by the formula (Mg, Fe) 2 SiO 4 , which is equivalent to (Mg 1−x Fe x ) 2 SiO 4 . The ratio of magnesium to iron varies between the two endmembers of the solid solution series: forsterite (Mg-endmember: Mg 2 SiO 4 ) and fayalite (Fe-endmember: Fe 2 SiO 4 ) [ 2 ] but the ratio in olivine is not normally defined. With increasingly complex compositions the geological notation becomes significantly easier to manage than the chemical notation. The IUPAC definition of a solid solution is a "solid in which components are compatible and form a unique phase". [ 3 ] The definition "crystal containing a second constituent which fits into and is distributed in the lattice of the host crystal" given in refs., [ 4 ] [ 5 ] is not general and, thus, is not recommended. The expression is to be used to describe a solid phase containing more than one substance when, for convenience, one (or more) of the substances, called the solvent, is treated differently from the other substances, called solutes. One or several of the components can be macromolecules . Some of the other components can then act as plasticizers, i.e., as molecularly dispersed substances that decrease the glass-transition temperature at which the amorphous phase of a polymer is converted between glassy and rubbery states. In pharmaceutical preparations, the concept of solid solution is often applied to the case of mixtures of drug and polymer . The number of drug molecules that do behave as solvent (plasticizer) of polymers is small. [ 6 ] On a phase diagram a solid solution is represented by an area, often labeled with the structure type, which covers the compositional and temperature/pressure ranges. Where the end members are not isostructural there are likely to be two solid solution ranges with different structures dictated by the parents. 
In this case the ranges may overlap, and the materials in this region can have either structure; or there may be a miscibility gap in the solid state, indicating that attempts to generate materials with this composition will result in mixtures. In areas of a phase diagram which are not covered by a solid solution there may be line phases; these are compounds with a known crystal structure and set stoichiometry. Where the crystalline phase consists of two (non-charged) organic molecules, the line phase is commonly known as a cocrystal . In metallurgy, alloys with a set composition are referred to as intermetallic compounds. A solid solution is likely to exist when the two elements (generally metals ) involved are close together on the periodic table; an intermetallic compound generally results when the two metals involved are not near each other on the periodic table. [ 7 ] The solute may incorporate into the solvent crystal lattice substitutionally , by replacing a solvent particle in the lattice, or interstitially , by fitting into the space between solvent particles. Both of these types of solid solution affect the properties of the material by distorting the crystal lattice and disrupting the physical and electrical homogeneity of the solvent material. [ 8 ] Where the atomic radius of the solute atom is larger than that of the solvent atom it replaces, the crystal structure ( unit cell ) often expands to accommodate it; this means that the composition of a material in a solid solution can be calculated from the unit cell volume, a relationship known as Vegard's law . [ 9 ] Some mixtures will readily form solid solutions over a range of concentrations, while other mixtures will not form solid solutions at all. The propensity for any two substances to form a solid solution is a complicated matter involving the chemical , crystallographic , and quantum properties of the substances in question. Substitutional solid solutions, in accordance with the Hume-Rothery rules , may form if the solute and solvent have similar atomic radii, the same crystal structure, similar electronegativities, and similar valency. One type of binary phase diagram displays an alloy of two metals which forms a solid solution at all relative concentrations of the two species. In this case, the pure phase of each element has the same crystal structure, and the similar properties of the two elements allow for unbiased substitution through the full range of relative concentrations. Solid solutions of pseudo-binary systems in complex systems with three or more components may require a more involved representation of the phase diagram, with more than one solvus curve drawn corresponding to different equilibrium chemical conditions. [ 10 ] Solid solutions have important commercial and industrial applications, as such mixtures often have superior properties to pure materials. Many metal alloys are solid solutions. Even small amounts of solute can affect the electrical and physical properties of the solvent. A typical binary phase diagram shows the phases of a mixture of two substances, A and B , in varying concentrations. The region labeled " α " is a solid solution, with B acting as the solute in a matrix of A . On the other end of the concentration scale, the region labeled " β " is also a solid solution, with A acting as the solute in a matrix of B .
The large solid region in between the α {\displaystyle \alpha } and β {\displaystyle \beta } solid solutions, labeled " α {\displaystyle \alpha } + β {\displaystyle \beta } ", is not a solid solution. Instead, an examination of the microstructure of a mixture in this range would reveal two phases—solid solution A {\displaystyle A} -in- B {\displaystyle B} and solid solution B {\displaystyle B} -in- A {\displaystyle A} would form separate phases, perhaps lamella or grains . In the phase diagram, at three different concentrations, the material will be solid until heated to its melting point , and then (after adding the heat of fusion ) become liquid at that same temperature: At other proportions, the material will enter a mushy or pasty phase until it warms up to being completely melted. The mixture at the dip point of the diagram is called a eutectic alloy. Lead-tin mixtures formulated at that point (37/63 mixture) are useful when soldering electronic components, particularly if done manually, since the solid phase is quickly entered as the solder cools. In contrast, when lead-tin mixtures were used to solder seams in automobile bodies a pasty state enabled a shape to be formed with a wooden paddle or tool, so a 70–30 lead to tin ratio was used. (Lead is being removed from such applications owing to its toxicity and consequent difficulty in recycling devices and components that include lead.) When a solid solution becomes unstable—due to a lower temperature, for example—exsolution occurs and the two phases separate into distinct microscopic to megascopic lamellae . This is mainly caused by difference in cation size. Cations which have a large difference in radii are not likely to readily substitute. [ 11 ] Alkali feldspar minerals , for example, have end members of albite , NaAlSi 3 O 8 and microcline , KAlSi 3 O 8 . At high temperatures Na + and K + readily substitute for each other and so the minerals will form a solid solution, yet at low temperatures albite can only substitute a small amount of K + and the same applies for Na + in the microcline. This leads to exsolution where they will separate into two separate phases. In the case of the alkali feldspar minerals, thin white albite layers will alternate between typically pink microcline, [ 11 ] resulting in a perthite texture.
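Vegard's law, mentioned above, is a linear interpolation of the lattice parameter with composition, and can be inverted to estimate composition from a measured lattice parameter. The sketch below does this for the NaCl–KCl system mentioned earlier; the end-member lattice parameters (≈5.64 Å for NaCl and ≈6.29 Å for KCl) and the measured value are approximate figures quoted here from general knowledge, not from the text.

```python
def vegard_composition(a_measured: float, a_end_A: float, a_end_B: float) -> float:
    """Mole fraction x of B in A(1-x)B(x), assuming Vegard's law:
    a(x) = (1 - x) * a_A + x * a_B  (linear interpolation of the lattice parameter)."""
    return (a_measured - a_end_A) / (a_end_B - a_end_A)

# Approximate end-member lattice parameters (angstroms); illustrative measured value.
a_NaCl, a_KCl = 5.64, 6.29
a_obs = 5.95
x = vegard_composition(a_obs, a_NaCl, a_KCl)
print(f"estimated composition: Na_{1 - x:.2f} K_{x:.2f} Cl")
```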
https://en.wikipedia.org/wiki/Solid_solution
In metallurgy , solid solution strengthening is a type of alloying that can be used to improve the strength of a pure metal . [ 1 ] The technique works by adding atoms of one element (the alloying element) to the crystalline lattice of another element (the base metal), forming a solid solution . The local nonuniformity in the lattice due to the alloying element makes plastic deformation more difficult by impeding dislocation motion through stress fields . In contrast, alloying beyond the solubility limit can form a second phase , leading to strengthening via other mechanisms (e.g. the precipitation of intermetallic compounds). Depending on the size of the alloying element, a substitutional solid solution or an interstitial solid solution can form. [ 2 ] In both cases, atoms are visualised as rigid spheres where the overall crystal structure is essentially unchanged. The rationale of crystal geometry to atom solubility prediction is summarized in the Hume-Rothery rules and Pauling's rules . Substitutional solid solution strengthening occurs when the solute atom is large enough that it can replace solvent atoms in their lattice positions. Some alloying elements are only soluble in small amounts, whereas some solvent and solute pairs form a solution over the whole range of binary compositions. Generally, higher solubility is seen when solvent and solute atoms are similar in atomic size (15% according to the Hume-Rothery rules ) and adopt the same crystal structure in their pure form. Examples of completely miscible binary systems are Cu-Ni and the Ag-Au face-centered cubic (FCC) binary systems, and the Mo-W body-centered cubic (BCC) binary system. Interstitial solid solutions form when the solute atom is small enough (radii up to 57% the radii of the parent atoms) [ 2 ] to fit at interstitial sites between the solvent atoms. The atoms crowd into the interstitial sites, causing the bonds of the solvent atoms to compress and thus deform (this rationale can be explained with Pauling's rules ). Elements commonly used to form interstitial solid solutions include H, Li, Na, N, C, and O. Carbon in iron (steel) is one example of interstitial solid solution. The strength of a material is dependent on how easily dislocations in its crystal lattice can be propagated. These dislocations create stress fields within the material depending on their character. When solute atoms are introduced, local stress fields are formed that interact with those of the dislocations, impeding their motion and causing an increase in the yield stress of the material, which means an increase in strength of the material. This gain is a result of both lattice distortion and the modulus effect . When solute and solvent atoms differ in size, local stress fields are created that can attract or repel dislocations in their vicinity. This is known as the size effect. By relieving tensile or compressive strain in the lattice, the solute size mismatch can put the dislocation in a lower energy state. In substitutional solid solutions, these stress fields are spherically symmetric, meaning they have no shear stress component. As such, substitutional solute atoms do not interact with the shear stress fields characteristic of screw dislocations. Conversely, in interstitial solid solutions, solute atoms cause a tetragonal distortion, generating a shear field that can interact with edge, screw, and mixed dislocations. 
The attraction or repulsion of the dislocation to the solute atom depends on whether the atom sits above or below the slip plane. For example, consider an edge dislocation encountering a smaller solute atom above its slip plane. In this case, the interaction energy is negative, resulting in attraction of the dislocation to the solute. This is due to the reduction in dislocation energy by the compressed volume lying above the dislocation core. If the solute atom were positioned below the slip plane, the dislocation would be repelled by the solute. However, the overall interaction energy between an edge dislocation and a smaller solute is negative, because the dislocation spends more time at sites with attractive energy. The same is true for a solute atom larger than the solvent atom. Thus, the interaction energy dictated by the size effect is generally negative. [ 3 ] The elastic modulus of the solute atom can also determine the extent of strengthening. For a "soft" solute with an elastic modulus lower than that of the solvent, the interaction energy due to modulus mismatch ( U modulus ) is negative, which reinforces the size interaction energy ( U size ). In contrast, U modulus is positive for a "hard" solute, which results in a lower total interaction energy than for a soft atom, even though the interaction force is negative (attractive) in both cases as the dislocation approaches the solute. The maximum force ( F max ) necessary to tear the dislocation away from the lowest-energy state (i.e. the solute atom) is greater for the soft solute than for the hard one. As a result, a soft solute will strengthen a crystal more than a hard solute, owing to the synergistic strengthening from combining both size and modulus effects. [ 3 ] The elastic interaction effects (i.e. size and modulus effects) dominate solid-solution strengthening for most crystalline materials. However, other effects, including charge and stacking-fault effects, may also play a role. For ionic solids, where electrostatic interaction dictates bond strength, the charge effect is also important. For example, the addition of a divalent ion to a monovalent material may strengthen the electrostatic interaction between the solute and the charged matrix atoms that comprise a dislocation. However, this strengthening is to a lesser extent than the elastic strengthening effects. For materials containing a higher density of stacking faults , solute atoms may interact with the stacking faults either attractively or repulsively. This lowers the stacking fault energy, leading to repulsion of the partial dislocations , which thus makes the material stronger. [ 3 ] Surface carburizing, or case hardening , is one example of solid solution strengthening in which the density of solute carbon atoms is increased close to the surface of the steel, resulting in a gradient of carbon atoms throughout the material. This provides superior mechanical properties to the surface of the steel without having to use a higher-cost material for the component. [ 4 ] Solid solution strengthening increases the yield strength of the material by increasing the shear stress, τ, needed to move dislocations: [ 1 ] [ 2 ] Δτ = G b ε^(3/2) √c, where c is the concentration of the solute atoms, G is the shear modulus , b is the magnitude of the Burgers vector , and ε is the lattice strain due to the solute.
This lattice strain is composed of two terms, one describing lattice distortion and the other the local modulus change: ε = |ε_G − β ε_a|. Here ε_G is the term that captures the local modulus change, β is a constant dependent on the solute atoms, and ε_a is the lattice distortion term. The lattice distortion term can be described as ε_a = (Δa/a)/Δc, where a is the lattice parameter of the material. Meanwhile, the local modulus change is captured in the expression ε_G = (ΔG/G)/Δc, where G is the shear modulus of the solute material. In order to achieve noticeable material strengthening via solution strengthening, one should alloy with solutes of higher shear modulus, hence increasing the local shear modulus in the material. In addition, one should alloy with elements of different equilibrium lattice constants. The greater the difference in lattice parameter, the higher the local stress fields introduced by alloying. Alloying with elements of higher shear modulus or of very different lattice parameters will increase the stiffness and introduce local stress fields, respectively. In either case, dislocation propagation will be hindered at these sites, impeding plasticity and increasing yield strength proportionally with solute concentration. Solid solution strengthening thus depends on the concentration of the solute atoms, their shear modulus, their size mismatch with the solvent and, in ionic materials, their valency. For many common alloys, rough experimental fits can be found for the addition in strengthening provided, in the form: [ 2 ] Δσ_s = k_s √c, where k_s is a solid solution strengthening coefficient and c is the concentration of solute in atomic fractions. Nevertheless, one should not add so much solute as to precipitate a new phase. This occurs if the concentration of the solute reaches a certain critical point given by the binary system phase diagram. This critical concentration therefore puts a limit on the amount of solid solution strengthening that can be achieved with a given material. An example is aluminum alloys, where solid solution strengthening occurs by adding magnesium and manganese to the aluminum matrix. Commercially, Mn is added in the AA3xxx series and Mg in the AA5xxx series. [ 5 ] Mn addition to aluminum alloys assists in the recrystallization and recovery of the alloy, which influences the grain size as well. [ 5 ] Both of these systems are used in low- to medium-strength applications, with appreciable formability and corrosion resistance. [ 6 ] Many nickel-based superalloys depend on solid solution as a strengthening mechanism. The most popular example is the Inconel family, where many of these alloys contain chromium and iron with further additions of cobalt, molybdenum, niobium, and titanium. [ 7 ] The nickel-based superalloys are well known for their intensive industrial use, especially in the aeronautical and aerospace industries, due to their superior mechanical and corrosion properties at high temperatures. [ 8 ] An example of the use of nickel-based superalloys in industry is turbine blades.
In practice, this alloy is known as MAR-M200; it is solid-solution strengthened by chromium, tungsten and cobalt in the matrix and is also precipitation hardened by carbide and boride precipitates at the grain boundaries. [ 9 ] [ 10 ] A key factor for these turbine blades is the grain size: an increase in grain size can lead to a significant reduction in the strain rate. An example of this reduced strain rate in MAR-M200 can be seen in the figures to the right, where the sample in the bottom figure has a grain size of 100 μm and the sample in the top figure has a grain size of 10 mm. [ 11 ] This reduced strain rate is extremely important for turbine blade operation, because the blades undergo significant mechanical stress at high temperatures, which can lead to the onset of creep deformation. Precise control of grain size in nickel-based superalloys is therefore key to creep resistance, mechanical reliability and longevity. Grain size can be controlled through manufacturing techniques such as directional solidification and single-crystal casting. [ 12 ] Stainless steel is one of the most commonly used metals in many industries, and solid solution strengthening is one of the mechanisms used to enhance the properties of the alloy. Austenitic steels mainly contain chromium, nickel, molybdenum, and manganese. [ 13 ] They are used mostly for cookware, kitchen equipment, and marine applications, because of their good corrosion resistance in saline environments. Titanium and titanium alloys are widely used in aerospace, medical, and maritime applications. The best-known titanium alloy that employs solid solution strengthening is Ti-6Al-4V. The addition of oxygen to pure titanium also produces solid solution strengthening, whereas adding it to the Ti-6Al-4V alloy does not have the same influence. [ 14 ] Bronze and brass are both copper alloys that are solid-solution strengthened. Bronze is the result of adding about 12% tin to copper, while brass is the result of adding about 34% zinc to copper. Both of these alloys are used in coin production, ship hardware, and art.
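The strengthening relations quoted above, Δτ = G b ε^(3/2) √c and the empirical fit Δσ_s = k_s √c, can be illustrated with a short numerical sketch. In the Python example below, the mismatch parameters, the coefficient k_s and the solute concentrations are hypothetical placeholder values, not data from this article; the empirical fit is evaluated rather than the dimensional Δτ expression.

import math

def mismatch_parameter(delta_G_over_G, delta_a_over_a, delta_c, beta):
    """Combined mismatch epsilon = |eps_G - beta * eps_a| with
    eps_G = (dG/G)/dc and eps_a = (da/a)/dc, as defined above."""
    eps_G = delta_G_over_G / delta_c
    eps_a = delta_a_over_a / delta_c
    return abs(eps_G - beta * eps_a)

def strengthening_increment(k_s, c):
    """Empirical solid-solution strengthening fit: delta_sigma = k_s * sqrt(c)."""
    return k_s * math.sqrt(c)

if __name__ == "__main__":
    # Hypothetical illustrative numbers (assumptions, not article data):
    eps = mismatch_parameter(delta_G_over_G=0.05, delta_a_over_a=0.01,
                             delta_c=0.1, beta=3.0)
    print(f"combined mismatch parameter epsilon ~ {eps:.2f}")
    k_s = 300.0  # assumed coefficient, MPa per sqrt(atomic fraction)
    for c in (0.01, 0.02, 0.05):
        print(f"c = {c:.2f} -> delta_sigma ~ {strengthening_increment(k_s, c):.0f} MPa")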
https://en.wikipedia.org/wiki/Solid_solution_strengthening
Solid-state ionics is the study of ionic-electronic mixed conductor and fully ionic conductors ( solid electrolytes ) and their uses. Some materials that fall into this category include inorganic crystalline and polycrystalline solids, ceramics, glasses, polymers, and composites. Solid-state ionic devices, such as solid oxide fuel cells , can be much more reliable and long-lasting, especially under harsh conditions, than comparable devices with fluid electrolytes. [ 1 ] The field of solid-state ionics was first developed in Europe, starting with the work of Michael Faraday on solid electrolytes Ag 2 S and PbF 2 in 1834. Fundamental contributions were later made by Walther Nernst , who derived the Nernst equation and detected ionic conduction in heterovalently doped zirconia , which he applied in his Nernst lamp . Another major step forward was the characterization of silver iodide in 1914. Around 1930, the concept of point defects was established by Yakov Frenkel , Walter Schottky and Carl Wagner , including the development of point-defect thermodynamics by Schottky and Wagner; this helped explain ionic and electronic transport in ionic crystals, ion-conducting glasses, polymer electrolytes and nanocomposites. In the late 20th and early 21st centuries, solid-state ionics focused on the synthesis and characterization of novel solid electrolytes and their applications in solid state battery systems, fuel cells and sensors. [ 2 ] The term solid state ionics was coined in 1967 by Takehiko Takahashi, [ 3 ] but did not become widely used until the 1980s, with the emergence of the journal Solid State Ionics . The first international conference on this topic was held in 1972 in Belgirate , Italy, under the name "Fast Ion Transport in Solids, Solid State Batteries and Devices". [ 2 ] In the early 1830s, Michael Faraday laid the foundations of electrochemistry and solid-state ionics by discovering the motion of ions in liquid and solid electrolytes. Earlier, around 1800, Alessandro Volta used a liquid electrolyte in his voltaic pile , the first electrochemical battery, but failed to realize that ions are involved in the process. Meanwhile, in his work on decomposition of solutions by electric current, Faraday used not only the ideas of ion , cation , anion , electrode , anode , cathode , electrolyte and electrolysis , but even the present-day terms for them. [ 4 ] [ 5 ] Faraday associated electric current in an electrolyte with the motion of ions, and discovered that ions can exchange their charges with an electrode while they were transformed into elements by electrolysis. He quantified those processes by two laws of electrolysis . The first law (1832) stated that the mass of a product at the electrode, Δm, increases linearly with the amount of charge passed through the electrolyte, Δq. The second law (1833) established the proportionality between Δm and the “electrochemical equivalent” and defined the Faraday constant F as F = (Δq/Δm)(M/z), where M is the molar mass and z is the charge of the ion. In 1834, Faraday discovered ionic conductivity in heated solid electrolytes Ag 2 S and PbF 2 . [ 4 ] In PbF 2 , the conductivity increase upon heating was not sudden, but spread over a hundred degrees Celsius. Such behavior, called Faraday transition, [ 6 ] is observed in the cation conductors Na 2 S and Li 4 SiO 4 and anion conductors PbF 2 , CaF 2 , SrF 2 , SrCl 2 and LaF 3 . 
[ 2 ] Later in 1891, Johann Wilhelm Hittorf reported on the ion transport numbers in electrochemical cells, [ 7 ] and in the early 20th century those numbers were determined for solid electrolytes. [ 8 ] The voltaic pile stimulated a series of improved batteries, such as the Daniell cell , fuel cell and lead acid battery . Their operation was largely understood in the late 1800s from the theories by Wilhelm Ostwald and Walther Nernst . In 1894 Ostwald explained the energy conversion in a fuel cell and stressed that its efficiency was not limited by thermodynamics . [ 9 ] Ostwald, together with Jacobus Henricus van 't Hoff , and Svante Arrhenius , was a founding father of electrochemistry and chemical ionic theory, and received a Nobel prize in chemistry in 1909. His work was continued by Walther Nernst, who derived the Nernst equation and described ionic conduction in heterovalently doped zirconia , which he used in his Nernst lamp . Nernst was inspired by the dissociation theory of Arrhenius published in 1887, which relied on ions in solution. [ 10 ] In 1889 he realized the similarity between electrochemical and chemical equilibria, and formulated his equation that correctly predicted the output voltage of various electrochemical cells based on liquid electrolytes from the thermodynamic properties of their components. [ 11 ] Besides his theoretical work, in 1897 Nernst patented the first lamp that used a solid electrolyte. [ 12 ] Contrary to the existing carbon-filament lamps, Nernst lamp could operate in air and was twice more efficient as its emission spectrum was closer to that of daylight. AEG, a lighting company in Berlin, bought the Nernst’s patent for one million German gold marks , which was a fortune at the time, and used 800 of Nernst lamps to illuminate their booth at the world’s fair Exposition Universelle (1900) . [ 2 ] Among several solid electrolytes described in the 19th and early 20th century, α-AgI, the high-temperature crystalline form of silver iodide, is widely regarded as the most important one. Its electrical conduction was characterized by Carl Tubandt and E. Lorenz in 1914. [ 13 ] Their comparative study of AgI, AgCl and AgBr demonstrated that α-AgI, is thermally stable and highly conductive between 147 and 555 °C; the conductivity weakly increased with temperature in this range and then dropped upon melting. This behavior was fully reversible and excluded non-equilibrium effects. Tubandt and Lorenz described other materials with a similar behavior, such as α-CuI, α-CuBr, β-CuBr, and high-temperature phases of Ag 2 S, Ag 2 Se and Ag 2 Te. [ 14 ] They associated the conductivity with cations in silver and cuprous halides and with ions and electrons in silver chalcogenides. In 1926, Yakov Frenkel suggested that in an ionic crystal like AgI, in thermodynamic equilibrium, a small fraction of the cations, α, are displaced from their regular lattice sites into interstitial positions. [ 15 ] He related α with the Gibbs energy for the formation of one mol of Frenkel pairs, ΔG, as α = exp(-ΔG/2RT), where T is temperature and R is the gas constant ; for a typical value of ΔG = 100 kJ/mol, α ~ 1 × 10 −6 at 100 °C and ~6 × 10 −4 at 400 °C. This idea naturally explained the presence of an appreciable fraction of mobile ions in otherwise defect-free ionic crystals, and thus the ionic conductivity in them. [ 2 ] Frenkel’s idea was expanded by Carl Wagner and Walter Schottky in their 1929 theory, which described the equilibrium thermodynamics of point defects in ionic crystals. 
In particular, Wagner and Schottky related the deviations from stoichiometry in those crystals to the chemical potentials of the crystal components, and explained the phenomenon of mixed electronic and ionic conduction. [ 16 ] [ 17 ] Wagner and Schottky considered four extreme cases of point-defect disorder in a stoichiometric binary ionic crystal of type AB: [ 17 ] Type-3 disorder does not occur in practice, and type 2 is observed only in rare cases when anions are smaller than cations, while both types 1 and 4 are common and show the same exp(-ΔG/2RT) temperature dependence. [ 2 ] Later, in 1933, Wagner suggested that in metal oxides an excess of metal would result in extra electrons, while a deficit of metal would produce electron holes, i.e., that atomic non-stoichiometry would result in mixed ionic-electronic conduction. [ 18 ] Studies of crystalline ionic conductors in which excess ions were provided by point defects continued through the 1950s, and the specific mechanism of conduction was established for each compound depending on its ionic structure. The emergence of glassy and polymeric electrolytes in the late 1970s provided new ionic conduction mechanisms. A relatively wide range of conductivities was attained in glasses, wherein mobile ions were dynamically decoupled from the matrix. [ 19 ] It was found that the conductivity could be increased by doping a glass with certain salts, or by using a glass mixture. The conductivity values could be as high as 0.03 S/cm at room temperature, with activation energies as low as 20 kJ/mol. [ 20 ] Compared to crystals, glasses have isotropic properties, continuously tunable composition and good workability; they lack detrimental grain boundaries and can be molded into any shape, but understanding their ionic transport was complicated by the lack of long-range order. [ 2 ] Historically, evidence of ionic conductivity was provided back in the 1880s, when German scientists noticed that a well-calibrated thermometer made of Thuringian glass would show −0.5 °C instead of 0 °C when placed in ice shortly after immersion in boiling water, and recover only after several months. In 1883, they reduced this effect tenfold by replacing a mixture of sodium and potassium in the glass with either sodium or potassium alone. [ 21 ] This finding helped Otto Schott develop the first accurate lithium-based thermometer. More systematic studies of ionic conductivity in glass appeared in 1884, [ 22 ] but received broad attention only a century later. Several universal laws have been empirically formulated for ionic glasses and extended to other ionic conductors, such as the frequency dependence of the electrical conductivity, σ(ν) − σ(0) ~ ν^p, where the exponent p depends on the material, but not on temperature, at least below ~100 K. This behavior is a fingerprint of activated hopping conduction among nearby sites. [ 2 ] In 1975, Peter V. Wright, a polymer chemist from Sheffield (UK), produced the first polymer electrolyte, which contained sodium and potassium salts in a polyethylene oxide (PEO) matrix. [ 23 ] Later another type of polymer electrolyte, the polyelectrolyte , was put forward, in which ions move through an electrically charged, rather than neutral, polymer matrix. Polymer electrolytes showed lower conductivities than glasses, but they were cheaper, much more flexible, and could be more easily machined and shaped into various forms.
[ 24 ] While ionic glasses are typically operated below their glass transition temperatures, polymer conductors are typically heated above theirs. Consequently, both the electric field and mechanical deformation decay on a similar time scale in polymers, but not in glasses. [ 19 ] [ 24 ] Between 1983 and 2001 it was believed that the amorphous fraction is responsible for ionic conductivity, i.e., that (nearly) complete structural disorder is essential for fast ionic transport in polymers. [ 19 ] However, a number of crystalline polymers have been described in 2001 and later with ionic conductivities as high as 0.01 S/cm at 30 °C and activation energies of only 0.24 eV. [ 2 ] In the 1970s–80s, it was realized that nanosized systems may affect ionic conductivity, opening the new field of nanoionics . In 1973, it was reported that the ionic conductivity of lithium iodide (LiI) crystals could be increased 50 times by adding a fine powder of "insulating" material (alumina). [ 25 ] This effect was reproduced in the 1980s in Ag- and Tl-halides doped with alumina nanoparticles. [ 26 ] [ 27 ] [ 28 ] Similarly, the addition of insulating nanoparticles helped increase the conductivity of ionic polymers. [ 29 ] [ 30 ] These unexpected results were explained by charge separation at the matrix-nanoparticle interface, which provides additional conductive channels to the matrix; the small size of the filler particles is required to increase the area of this interface. [ 26 ] Similar charge-separation effects were observed for grain boundaries in crystalline ionic conductors. [ 2 ] By 1971, solid-state cells and batteries based on rubidium silver iodide (RbAg 4 I 5 ) had been designed and tested over a wide range of temperatures and discharge currents. [ 31 ] Despite the relatively high conductivity of RbAg 4 I 5 , they have never been commercialized owing to a low overall energy content per unit weight (ca. 5 W·h/kg). [ 32 ] On the contrary, LiI, which had a conductivity of only ca. 1 × 10 −7 S/cm at room temperature, found wide-scale application in batteries for artificial pacemakers . The first such device, based on undoped LiI, was implanted into a human in March 1972 in Ferrara , Italy. [ 33 ] Later models used as electrolyte a film of LiI doped with alumina nanoparticles to increase its conductivity. [ 25 ] The LiI was formed in an in situ chemical reaction between the Li anode and the iodine-poly( 2-vinylpyridine ) cathode, and was therefore self-healing against erosion and cracks during operation. [ 34 ] Sodium-sulfur cells, based on a ceramic β-Al 2 O 3 electrolyte sandwiched between a molten-sodium anode and a molten-sulfur cathode, showed high energy densities and were considered for car batteries in the 1990s, but were abandoned because of the brittleness of the alumina, which resulted in cracks and critical failure due to the reaction between molten sodium and sulfur. Replacement of β-Al 2 O 3 with NASICON did not save this application, because it did not solve the cracking problem and because NASICON reacted with the molten sodium. [ 2 ] Yttria-stabilized zirconia is used as a solid electrolyte in oxygen sensors in cars, generating a voltage that depends on the ratio of oxygen and exhaust gas and providing electronic feedback to the fuel injector. [ 35 ] Such sensors are also installed at many metallurgical and glass-making factories. [ 36 ] Similar sensors of CO 2 , chlorine and other gases based on solid silver halide electrolytes were proposed in the 1980s–1990s.
[ 2 ] Since the mid-1980s, a Li-based solid electrolyte has been used to separate the electrochromic film (typically WO 3 ) and the ion-storing film (typically LiCoO 2 ) in smart glass , [ 37 ] a window whose transparency is controlled by an external voltage. [ 38 ] Solid-state ionic conductors are essential components of lithium-ion batteries , proton exchange membrane fuel cells (PEMFCs), supercapacitors (a novel class of electrochemical energy storage devices), and solid oxide fuel cells , devices that produce electricity by oxidizing a fuel. Nafion , a flexible fluoropolymer - copolymer discovered in the late 1960s, is widely used as a polymer electrolyte in PEMFCs. [ 2 ]
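The Frenkel relation quoted above, α = exp(−ΔG/2RT), is straightforward to evaluate numerically. The Python sketch below uses the representative ΔG of 100 kJ/mol mentioned in the text together with a few illustrative temperatures of its own; the results are order-of-magnitude estimates only.

import math

R = 8.314  # gas constant, J/(mol*K)

def frenkel_fraction(delta_G_J_per_mol, T_kelvin):
    """Equilibrium fraction of cations displaced to interstitial sites,
    alpha = exp(-dG / (2*R*T)), following Frenkel's relation quoted above."""
    return math.exp(-delta_G_J_per_mol / (2.0 * R * T_kelvin))

if __name__ == "__main__":
    dG = 100e3  # J/mol, representative Frenkel-pair formation energy (assumption)
    for T_C in (25, 200, 500):
        T = T_C + 273.15
        print(f"T = {T_C:3d} C -> alpha ~ {frenkel_fraction(dG, T):.1e}")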
https://en.wikipedia.org/wiki/Solid_state_ionics
The sweep S w of a solid S is defined as the solid created when a motion M is applied to the given solid. The solid S should be considered as a set of points in the Euclidean space R 3 . The solid S w generated by sweeping S over M then contains all the points over which the points of S have moved during the motion M. Solid sweeping based on this process is employed in different fields, including the modelling of fillets and rounds, interference detection and the simulation of numerically controlled machining processes. [ 1 ]
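As a rough computational illustration of this definition, the Python sketch below approximates a swept solid by sampling the motion M at discrete instants and taking the union of the transformed point sets. The particular point cloud, the rigid motion and the sampling resolution are all illustrative assumptions, not part of the article; a production solid modeller would operate on boundary representations rather than point samples.

import math

def apply_motion(point, t):
    """Hypothetical rigid motion M(t): rotate about the z-axis while translating along x."""
    x, y, z = point
    angle = t * math.pi / 2.0
    xr = x * math.cos(angle) - y * math.sin(angle)
    yr = x * math.sin(angle) + y * math.cos(angle)
    return (xr + t, yr, z)

def sweep(points, steps=50):
    """Approximate the sweep S_w as the union of S moved through sampled instants of M."""
    swept = set()
    for i in range(steps + 1):
        t = i / steps
        for p in points:
            swept.add(tuple(round(coord, 3) for coord in apply_motion(p, t)))
    return swept

if __name__ == "__main__":
    # A coarse point sampling of a unit cube plays the role of the solid S (illustrative).
    solid = [(x / 4, y / 4, z / 4) for x in range(5) for y in range(5) for z in range(5)]
    print(f"Points in S: {len(solid)}, points in the sampled sweep S_w: {len(sweep(solid))}")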
https://en.wikipedia.org/wiki/Solid_sweep
In mathematics and physics , a soliton is a nonlinear, self-reinforcing, localized wave packet that is strongly stable , in that it preserves its shape while propagating freely, at constant velocity, and recovers it even after collisions with other such localized wave packets. Its remarkable stability can be traced to a balanced cancellation of nonlinear and dispersive effects in the medium. [ nb 1 ] Solitons were subsequently found to provide stable solutions of a wide class of weakly nonlinear dispersive partial differential equations describing physical systems. The soliton phenomenon was first described in 1834 by John Scott Russell who observed a solitary wave in the Union Canal in Scotland. He reproduced the phenomenon in a wave tank and named it the " Wave of Translation ". The Korteweg–de Vries equation was later formulated to model such waves, and the term " soliton " was coined by Zabusky and Kruskal to describe localized, strongly stable propagating solutions to this equation. The name was meant to characterize the solitary nature of the waves, with the "on" suffix recalling the usage for particles such as electrons , baryons or hadrons , reflecting their observed particle -like behaviour. [ 1 ] A single, consensus definition of a soliton is difficult to find. Drazin & Johnson (1989 , p. 15) ascribe three properties to solitons: More formal definitions exist, but they require substantial mathematics. Moreover, some scientists use the term soliton for phenomena that do not quite have these three properties (for instance, the ' light bullets ' of nonlinear optics are often called solitons despite losing energy during interaction). [ 2 ] Dispersion and nonlinearity can interact to produce permanent and localized wave forms. Consider a pulse of light traveling in glass. This pulse can be thought of as consisting of light of several different frequencies. Since glass shows dispersion, these different frequencies travel at different speeds and the shape of the pulse therefore changes over time. However, also the nonlinear Kerr effect occurs; the refractive index of a material at a given frequency depends on the light's amplitude or strength. If the pulse has just the right shape, the Kerr effect exactly cancels the dispersion effect and the pulse's shape does not change over time. Thus, the pulse is a soliton. See soliton (optics) for a more detailed description. Many exactly solvable models have soliton solutions, including the Korteweg–de Vries equation , the nonlinear Schrödinger equation , the coupled nonlinear Schrödinger equation, and the sine-Gordon equation . The soliton solutions are typically obtained by means of the inverse scattering transform , and owe their stability to the integrability of the field equations. The mathematical theory of these equations is a broad and very active field of mathematical research. Some types of tidal bore , a wave phenomenon of a few rivers including the River Severn , are 'undular': a wavefront followed by a train of solitons. Other solitons occur as the undersea internal waves , initiated by seabed topography , that propagate on the oceanic pycnocline . Atmospheric solitons also exist, such as the morning glory cloud of the Gulf of Carpentaria , where pressure solitons traveling in a temperature inversion layer produce vast linear roll clouds . The recent and not widely accepted soliton model in neuroscience proposes to explain the signal conduction within neurons as pressure solitons. 
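As a concrete illustration of the dispersion-nonlinearity balance described above, the single-soliton solution of the Korteweg–de Vries equation u_t + 6 u u_x + u_xxx = 0 is u(x, t) = (c/2) sech²((√c/2)(x − c t − a)): a sech²-shaped pulse that translates at speed c without changing form. The short Python sketch below simply evaluates this standard closed-form solution at two times; the chosen speed, offset and grid are illustrative assumptions.

import math

def kdv_soliton(x, t, c=1.0, a=0.0):
    """Single-soliton solution of the KdV equation u_t + 6*u*u_x + u_xxx = 0."""
    arg = 0.5 * math.sqrt(c) * (x - c * t - a)
    return 0.5 * c / math.cosh(arg) ** 2

if __name__ == "__main__":
    xs = [i * 0.5 for i in range(-20, 21)]
    peak_t0 = max(kdv_soliton(x, 0.0) for x in xs)
    peak_t5 = max(kdv_soliton(x, 5.0) for x in xs)
    # The amplitude is unchanged; only the position of the peak shifts by c*t.
    print(f"peak at t=0: {peak_t0:.3f}, peak at t=5: {peak_t5:.3f}")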
A topological soliton , also called a topological defect, is any solution of a set of partial differential equations that is stable against decay to the "trivial solution". Soliton stability is due to topological constraints, rather than integrability of the field equations. The constraints arise almost always because the differential equations must obey a set of boundary conditions , and the boundary has a nontrivial homotopy group , preserved by the differential equations. Thus, the differential equation solutions can be classified into homotopy classes . No continuous transformation maps a soliton in one homotopy class to another. The solitons are truly distinct, and maintain their integrity, even in the face of extremely powerful forces. Examples of topological solitons include the screw dislocation in a crystalline lattice , the Dirac string and the magnetic monopole in electromagnetism , the Skyrmion and the Wess–Zumino–Witten model in quantum field theory , the magnetic skyrmion in condensed matter physics, and cosmic strings and domain walls in cosmology . In 1834, John Scott Russell described his wave of translation : [ nb 2 ] [ nb 3 ] I was observing the motion of a boat which was rapidly drawn along a narrow channel by a pair of horses, when the boat suddenly stopped – not so the mass of water in the channel which it had put in motion; it accumulated round the prow of the vessel in a state of violent agitation, then suddenly leaving it behind, rolled forward with great velocity, assuming the form of a large solitary elevation, a rounded, smooth and well-defined heap of water, which continued its course along the channel apparently without change of form or diminution of speed. I followed it on horseback, and overtook it still rolling on at a rate of some eight or nine miles an hour, preserving its original figure some thirty feet long and a foot to a foot and a half in height. Its height gradually diminished, and after a chase of one or two miles I lost it in the windings of the channel. Such, in the month of August 1834, was my first chance interview with that singular and beautiful phenomenon which I have called the Wave of Translation. [ 3 ] Scott Russell spent some time making practical and theoretical investigations of these waves. He built wave tanks at his home and noticed some key properties: Scott Russell's experimental work seemed at odds with Isaac Newton 's and Daniel Bernoulli 's theories of hydrodynamics . George Biddell Airy and George Gabriel Stokes had difficulty accepting Scott Russell's experimental observations because they could not be explained by the existing water wave theories. Additional observations were reported by Henry Bazin in 1862 after experiments carried out in the canal de Bourgogne in France. [ 4 ] Their contemporaries spent some time attempting to extend the theory but it would take until the 1870s before Joseph Boussinesq [ 5 ] and Lord Rayleigh published a theoretical treatment and solutions. [ nb 4 ] In 1895 Diederik Korteweg and Gustav de Vries provided what is now known as the Korteweg–de Vries equation , including solitary wave and periodic cnoidal wave solutions. [ 6 ] [ nb 5 ] In 1965 Norman Zabusky of Bell Labs and Martin Kruskal of Princeton University first demonstrated soliton behavior in media subject to the Korteweg–de Vries equation (KdV equation) in a computational investigation using a finite difference approach. They also showed how this behavior explained the puzzling earlier work of Fermi, Pasta, Ulam, and Tsingou . 
[ 1 ] In 1967, Gardner, Greene, Kruskal and Miura discovered an inverse scattering transform enabling analytical solution of the KdV equation. [ 8 ] The work of Peter Lax on Lax pairs and the Lax equation has since extended this to the solution of many related soliton-generating systems. Solitons are, by definition, unaltered in shape and speed by a collision with other solitons. [ 9 ] Solitary waves on a water surface are therefore near-solitons, but not exactly so: after the interaction of two (colliding or overtaking) solitary waves, their amplitudes have changed slightly and an oscillatory residual is left behind. [ 10 ] Solitons are also studied in quantum mechanics, thanks to the fact that they could provide a new foundation for it through de Broglie 's unfinished program, known as "Double solution theory" or "Nonlinear wave mechanics". This theory, developed by de Broglie in 1927 and revived in the 1950s, is the natural continuation of his ideas developed between 1923 and 1926, which extended the wave–particle duality introduced by Albert Einstein for the light quanta to all the particles of matter. The observation of an accelerating surface gravity water wave soliton using an external hydrodynamic linear potential was demonstrated in 2019. This experiment also demonstrated the ability to excite and measure the phases of ballistic solitons. [ 11 ] Much experimentation has been done using solitons in fiber optics applications. Solitons in a fiber optic system are described by the Manakov equations . Solitons' inherent stability makes long-distance transmission possible without the use of repeaters , and could potentially double transmission capacity as well. [ 12 ] These impressive experiments have not, however, translated into actual commercial soliton system deployments, in either terrestrial or submarine systems, chiefly due to Gordon–Haus (GH) jitter . GH jitter requires sophisticated, expensive compensatory solutions that ultimately make dense wavelength-division multiplexing (DWDM) soliton transmission in the field unattractive compared to the conventional non-return-to-zero/return-to-zero paradigm. Further, the likely future adoption of the more spectrally efficient phase-shift-keyed/QAM formats makes soliton transmission even less viable, due to the Gordon–Mollenauer effect. Consequently, the long-haul fiberoptic transmission soliton has remained a laboratory curiosity. Solitons may occur in proteins [ 16 ] and DNA, [ 17 ] where they are related to the low-frequency collective motion of these molecules. [ 18 ] A recently developed model in neuroscience proposes that signals, in the form of density waves, are conducted within neurons in the form of solitons. [ 19 ] [ 20 ] [ 21 ] Solitons can be described as almost lossless energy transfer in biomolecular chains or lattices as wave-like propagations of coupled conformational and electronic disturbances. [ 22 ] Solitons can occur in materials, such as ferroelectrics , in the form of domain walls. Ferroelectric materials exhibit spontaneous polarization, or electric dipoles, which are coupled to configurations of the material structure. Domains of oppositely poled polarizations can be present within a single material, as the structural configurations corresponding to opposing polarizations are equally favorable in the absence of external forces. The domain boundaries, or "walls", that separate these local structural configurations are regions of lattice dislocations .
[ 23 ] The domain walls can propagate, and thus the polarization and the local structural configuration within a domain can be switched, under applied forces such as an electric bias or mechanical stress. Consequently, the domain walls can be described as solitons: discrete regions of dislocations that are able to slip or propagate while maintaining their shape in width and length. [ 24 ] [ 25 ] [ 26 ] In recent literature, ferroelectricity has been observed in twisted bilayers of van der Waals materials such as molybdenum disulfide and graphene . [ 23 ] [ 27 ] [ 28 ] The moiré superlattice that arises from the relative twist angle between the van der Waals monolayers generates regions of different stacking orders of the atoms within the layers. These regions exhibit structural configurations with broken inversion symmetry that enable ferroelectricity at the interface of these monolayers. The domain walls that separate these regions are composed of partial dislocations where different types of stresses, and thus strains, are experienced by the lattice. It has been observed that soliton or domain wall propagation across a moderate length of the sample (on the order of nanometers to micrometers) can be initiated with applied stress from an AFM tip on a fixed region. The soliton propagation carries the mechanical perturbation with little loss in energy across the material, which enables domain switching in a domino-like fashion. [ 25 ] It has also been observed that the type of dislocations found at the walls can affect propagation parameters such as direction. For instance, STM measurements showed four types of strains of varying degrees of shear, compression, and tension at domain walls, depending on the type of localized stacking order in twisted bilayer graphene. Different slip directions of the walls are achieved with different types of strains found at the domains, influencing the direction of the soliton network propagation. [ 25 ] Non-idealities such as disruptions to the soliton network and surface impurities can influence soliton propagation as well. Domain walls can meet at nodes and become effectively pinned, forming triangular domains, which have been readily observed in various ferroelectric twisted bilayer systems. [ 23 ] In addition, closed loops of domain walls enclosing multiple polarization domains can inhibit soliton propagation and thus the switching of polarizations across them. [ 25 ] Domain walls can also propagate and meet at wrinkles and surface inhomogeneities within the van der Waals layers, which can act as obstacles obstructing the propagation. [ 25 ] In magnets, there also exist different types of solitons and other nonlinear waves. [ 29 ] These magnetic solitons are exact solutions of classical nonlinear differential equations of magnetism, e.g. the Landau–Lifshitz equation , the continuum Heisenberg model , the Ishimori equation , the nonlinear Schrödinger equation and others. Atomic nuclei may exhibit solitonic behavior. [ 30 ] Here the whole nuclear wave function is predicted to exist as a soliton under certain conditions of temperature and energy. Such conditions are suggested to exist in the cores of some stars, in which the nuclei would not react but pass through each other unchanged, retaining their soliton waves through a collision between nuclei. The Skyrme model is a model of nuclei in which each nucleus is considered to be a topologically stable soliton solution of a field theory with a conserved baryon number.
The bound state of two solitons is known as a bion , [ 31 ] [ 32 ] [ 33 ] [ 34 ] or in systems where the bound state periodically oscillates, a breather . The interference-type forces between solitons could be used in making bions. [ 35 ] However, these forces are very sensitive to their relative phases. Alternatively, the bound state of solitons could be formed by dressing atoms with highly excited Rydberg levels. [ 34 ] The resulting self-generated potential profile [ 34 ] features an inner attractive soft-core supporting the 3D self-trapped soliton, an intermediate repulsive shell (barrier) preventing solitons’ fusion, and an outer attractive layer (well) used for completing the bound state resulting in giant stable soliton molecules. In this scheme, the distance and size of the individual solitons in the molecule can be controlled dynamically with the laser adjustment. In field theory bion usually refers to the solution of the Born–Infeld model . The name appears to have been coined by G. W. Gibbons in order to distinguish this solution from the conventional soliton, understood as a regular , finite-energy (and usually stable) solution of a differential equation describing some physical system. [ 36 ] The word regular means a smooth solution carrying no sources at all. However, the solution of the Born–Infeld model still carries a source in the form of a Dirac-delta function at the origin. As a consequence it displays a singularity in this point (although the electric field is everywhere regular). In some physical contexts (for instance string theory) this feature can be important, which motivated the introduction of a special name for this class of solitons. On the other hand, when gravity is added (i.e. when considering the coupling of the Born–Infeld model to general relativity) the corresponding solution is called EBIon , where "E" stands for Einstein. Erik Lentz, a physicist at the University of Göttingen, has theorized that solitons could allow for the generation of Alcubierre warp bubbles in spacetime without the need for exotic matter, i.e., matter with negative mass. [ 37 ]
https://en.wikipedia.org/wiki/Soliton
A soliton distribution is a type of discrete probability distribution that arises in the theory of erasure correcting codes , which use information redundancy to compensate for transmission errors manifesting as missing (erased) data. A paper by Luby [ 1 ] introduced two forms of such distributions, the ideal soliton distribution and the robust soliton distribution . The ideal soliton distribution is a probability distribution on the integers from 1 to K , where K is the single parameter of the distribution. The probability mass function is given by [ 2 ] p(1) = 1/K and p(i) = 1/(i(i − 1)) for i = 2, 3, ..., K. The robust form of the distribution is defined by adding an extra set of values t(i) to the elements of the mass function of the ideal soliton distribution and then normalizing so that the values add up to 1. The extra set of values, t(i) , is defined in terms of an additional real-valued parameter δ (which is interpreted as a failure probability) and a constant parameter c . Define R as R = c ln(K/δ) √K . Then the values added to p ( i ), before the final normalization, are [ 2 ] t(i) = R/(iK) for i = 1, ..., K/R − 1, t(K/R) = R ln(R/δ)/K , and t(i) = 0 for larger i . While the ideal soliton distribution has a mode (or spike) at 2, the effect of the extra component in the robust distribution is to add an additional spike at the value K/R .
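The short Python sketch below builds the ideal and robust soliton probability mass functions as just described. It is a minimal illustration of the definitions above; rounding K/R to the nearest integer for the spike position is an implementation choice rather than part of the definition, and the parameter values in the demo are arbitrary.

import math

def ideal_soliton(K):
    """Ideal soliton distribution: p(1) = 1/K, p(i) = 1/(i*(i-1)) for i = 2..K."""
    p = [0.0] * (K + 1)                    # index 0 unused
    p[1] = 1.0 / K
    for i in range(2, K + 1):
        p[i] = 1.0 / (i * (i - 1))
    return p

def robust_soliton(K, c, delta):
    """Robust soliton distribution: ideal distribution plus t(i), then normalized."""
    R = c * math.log(K / delta) * math.sqrt(K)
    spike = max(1, min(K, round(K / R)))   # position of the extra spike, K/R rounded
    p = ideal_soliton(K)
    t = [0.0] * (K + 1)
    for i in range(1, spike):
        t[i] = R / (i * K)
    t[spike] = R * math.log(R / delta) / K
    total = sum(p[i] + t[i] for i in range(1, K + 1))
    return [(p[i] + t[i]) / total for i in range(K + 1)]

if __name__ == "__main__":
    mu = robust_soliton(K=1000, c=0.1, delta=0.05)
    best = max(range(1, len(mu)), key=lambda i: mu[i])
    print(f"Mode of the robust soliton distribution: degree {best}, probability {mu[best]:.4f}")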
https://en.wikipedia.org/wiki/Soliton_distribution
The soliton hypothesis in neuroscience is a model that claims to explain how action potentials are initiated and conducted along axons , based on a thermodynamic theory of nerve pulse propagation. [ 1 ] It proposes that the signals travel along the cell's membrane in the form of certain kinds of solitary sound (or density ) pulses that can be modeled as solitons . The model is proposed as an alternative to the Hodgkin–Huxley model , [ 2 ] in which action potentials arise as follows: voltage-gated ion channels in the membrane open and allow sodium ions to enter the cell (inward current); the resulting decrease in membrane potential opens nearby voltage-gated sodium channels, thus propagating the action potential; the transmembrane potential is then restored by the delayed opening of potassium channels. Soliton hypothesis proponents assert that energy is mainly conserved during propagation, apart from dissipation losses, and that the measured temperature changes are completely inconsistent with the Hodgkin–Huxley model. [ 3 ] [ 4 ] The soliton model (and sound waves in general) depends on adiabatic propagation, in which the energy provided at the source of excitation is carried adiabatically through the medium, i.e. the plasma membrane. The measurement of a temperature pulse and the claimed absence of heat release during an action potential [ 5 ] [ 6 ] were the basis of the proposal that nerve impulses are an adiabatic phenomenon much like sound waves. Synaptically evoked action potentials in the electric organ of the electric eel are associated with substantial positive (only) heat production followed by active cooling to ambient temperature. [ 7 ] In the garfish olfactory nerve, the action potential is associated with a biphasic temperature change; however, there is a net production of heat. [ 8 ] These published results are inconsistent with the Hodgkin–Huxley model, and the authors interpret their work in terms of that model: the initial sodium current releases heat as the membrane capacitance is discharged; heat is absorbed during the recharge of the membrane capacitance as potassium ions move with their concentration gradient but against the membrane potential. This mechanism is called the "condenser theory". Additional heat may be generated by membrane configuration changes driven by the changes in membrane potential. An increase in entropy during depolarization would release heat; an entropy increase during repolarization would absorb heat. However, any such entropic contributions are incompatible with the Hodgkin–Huxley model. [ 9 ] Ichiji Tasaki pioneered a thermodynamic approach to the phenomenon of nerve pulse propagation that identified several phenomena not included in the Hodgkin–Huxley model . [ 10 ] Along with measuring various non-electrical components of a nerve impulse, Tasaki investigated the physical chemistry of phase transitions in nerve fibers and its importance for nerve pulse propagation. Based on Tasaki's work, Konrad Kaufman proposed sound waves as a physical basis for nerve pulse propagation in an unpublished manuscript. [ 11 ] The basic idea at the core of the soliton model is the balancing of the intrinsic dispersion of two-dimensional sound waves in the membrane by nonlinear elastic properties near a phase transition. The initial impulse can acquire a stable shape under such circumstances, in general known as a solitary wave.
[ 12 ] Solitons are the simplest solutions of the set of nonlinear wave equations governing such phenomena, and were applied to model the nerve impulse in 2005 by Thomas Heimburg and Andrew D. Jackson, [ 13 ] [ 14 ] [ 15 ] both at the Niels Bohr Institute of the University of Copenhagen . Heimburg heads the institute's Membrane Biophysics Group. The biological physics group of Matthias Schneider has studied the propagation of two-dimensional sound waves in lipid interfaces and their possible role in biological signalling. [ 16 ] [ 17 ] [ 18 ] [ 19 ] The model starts with the observation that cell membranes always have a freezing point (the temperature below which the consistency changes from fluid to gel-like) only slightly below the organism's body temperature, and this allows for the propagation of solitons. An action potential traveling along a mixed nerve results in a slight increase in temperature followed by a decrease in temperature. [ 20 ] Soliton model proponents claim that no net heat is released during the overall pulse and that the observed temperature changes are inconsistent with the Hodgkin–Huxley model. However, this is untrue: the Hodgkin–Huxley model predicts a biphasic release and absorption of heat. [ 9 ] In addition, the action potential causes a slight local thickening of the membrane and a force acting outwards; [ 21 ] this effect is not predicted by the Hodgkin–Huxley model, but does not contradict it either. The soliton model attempts to explain the electrical currents associated with the action potential as follows: the traveling soliton locally changes the density and thickness of the membrane, and since the membrane contains many charged and polar substances, this results in an electrical effect, akin to piezoelectricity . Indeed, such nonlinear sound waves have now been shown to exist at lipid interfaces that show superficial similarity to action potentials (electro-opto-mechanical coupling, velocities, biphasic pulse shape, threshold for excitation, etc.). [ 17 ] Furthermore, the waves remain localized in the membrane and do not spread out into the surroundings, owing to an impedance mismatch. [ 22 ] The soliton representing the action potential of nerves is the solution of the partial differential equation ∂²Δρ/∂t² = ∂/∂x [ (c₀² + p Δρ + q Δρ²) ∂Δρ/∂x ] − h ∂⁴Δρ/∂x⁴, where t is time and x is the position along the nerve axon. Δρ is the change in membrane density under the influence of the action potential, c₀ is the sound velocity of the nerve membrane, and p and q describe the nature of the phase transition and thereby the nonlinearity of the elastic constants of the nerve membrane. The parameters c₀, p and q are dictated by the thermodynamic properties of the nerve membrane and cannot be adjusted freely; they have to be determined experimentally. The parameter h describes the frequency dependence of the sound velocity of the membrane ( dispersion relation ). The above equation does not contain any fit parameters. It is formally related to the Boussinesq approximation for solitons in water canals. The solutions of the above equation possess a limiting maximum amplitude and a minimum propagation velocity that is similar to the pulse velocity in myelinated nerves. Under restrictive assumptions, there exist periodic solutions that display hyperpolarization and refractory periods. [ 23 ] Advocates of the soliton model claim that it explains several aspects of the action potential which are not explained by the Hodgkin–Huxley model.
Since it is of a thermodynamic nature, the model does not address the properties of single macromolecules such as ion channel proteins on a molecular scale. It is rather assumed that their properties are implicitly contained in the macroscopic thermodynamic properties of the nerve membranes. The soliton model predicts membrane current fluctuations during the action potential. These currents are similar in appearance to those reported for ion channel proteins. [ 24 ] They are thought to be caused by lipid membrane pores spontaneously generated by thermal fluctuations. Such thermal fluctuations explain the specific ionic selectivity or the specific time course of the response to voltage changes on the basis of their effect on the macroscopic susceptibilities of the system. The authors claim that their model explains the previously obscure mode of action of numerous anesthetics . The Meyer–Overton observation holds that the strength of a wide variety of chemically diverse anesthetics is proportional to their lipid solubility, suggesting that they do not act by binding to specific proteins such as ion channels but instead by dissolving in and changing the properties of the lipid membrane. Dissolving substances in the membrane lowers the membrane's freezing point, and the resulting larger difference between body temperature and freezing point inhibits the propagation of solitons. [ 25 ] By increasing pressure, lowering pH or lowering temperature, this difference can be restored back to normal, which should cancel the action of anesthetics: this is indeed observed. The amount of pressure needed to cancel the action of an anesthetic of a given lipid solubility can be computed from the soliton model and agrees reasonably well with experimental observations. The following is a list of some of the disagreements between experimental observations and the "soliton model": A more recent theoretical model, proposed by Ahmed El Hady and Benjamin Machta, holds that there is a mechanical surface wave which co-propagates with the electrical action potential. These surface waves are called "action waves". [ 35 ] In the El Hady–Machta model, these co-propagating waves are driven by voltage changes across the membrane caused by the action potential.
https://en.wikipedia.org/wiki/Soliton_model_in_neuroscience
Solitude , also known as social withdrawal , is a state of seclusion or isolation, meaning lack of socialisation . Effects can be either positive or negative, depending on the situation. Short-term solitude is often valued as a time when one may work, think, or rest without disturbance. It may be desired for the sake of privacy . Long-term solitude may stem from soured relationships, loss of loved ones, deliberate choice, infectious disease , mental disorders , neurological disorders such as circadian rhythm sleep disorder , or circumstances of employment or situation. A distinction has been made between solitude and loneliness . In this sense, these two words refer, respectively, to the joy and the pain of being alone. [ 1 ] [ 2 ] [ 3 ] [ 4 ] Symptoms from complete isolation, called sensory deprivation , may include anxiety , sensory illusions , or distortions of time and perception. However, this is the case when there is no stimulation of the sensory systems at all and not just lack of contact with people. Thus, this can be avoided by having other things to keep one's mind busy. [ 5 ] Long-term solitude is often seen as undesirable, causing loneliness or reclusion resulting from inability to establish relationships . Furthermore, it might lead to clinical depression , although some people do not react to it negatively. Buddhist monks regard long-term solitude as a means of enlightenment . Marooned people have been left in solitude for years without any report of psychological symptoms afterwards. [ citation needed ] Some psychological conditions (such as schizophrenia [ 6 ] and schizoid personality disorder ) are strongly linked to a tendency to seek solitude. Enforced loneliness ( solitary confinement ) has been a punishment method throughout history. It is often considered a form of torture. Emotional isolation is a state of isolation where one feels emotionally separated from others despite having a well-functioning social network . [ 7 ] [ 8 ] Researchers, including Robert J. Coplan and Julie C. Bowker, have rejected the notion that solitary practices and solitude are inherently dysfunctional and undesirable. In their 2013 book A Handbook of Solitude , the authors note how solitude can allow for enhancements in self-esteem, generates clarity, and can be highly therapeutic. [ 9 ] In the edited work, Coplan and Bowker invite not only fellow psychology colleagues to chime in on this issue but also a variety of other faculty from different disciplines to address the issue. Fong's chapter offers an alternative view on how solitude is more than just a personal trajectory for one to take inventory on life; it also yields a variety of important sociological cues that allow the protagonist to navigate through society, even highly politicized societies. [ 10 ] In the process, political prisoners in solitary confinement were examined to see how they concluded their views on society. Thus Fong, Coplan, and Bowker conclude that a person's experienced solitude generates immanent and personal content as well as collective and sociological content, depending on context. There are both positive and negative psychological effects of solitude. Much of the time, these effects and the longevity is determined by the amount of time a person spends in isolation . [ 11 ] The positive effects can range anywhere from more freedom to increased spirituality , [ 12 ] while the negative effects are socially depriving and may trigger the onset of mental illness . 
[ 13 ] While positive solitude is often desired, negative solitude is often involuntary or undesired at the time it occurs. [ 14 ] Freedom is considered to be one of the benefits of solitude; the constraints of others will not have any effect on a person who is spending time in solitude, therefore giving the person more latitude in their actions. With increased freedom, a person’s choices are less likely to be affected by exchanges with others. [ 12 ] A person's creativity can be sparked when given freedom. Solitude can increase freedom and moreover, freedom from distractions has the potential to spark creativity. In 1994, psychologist Mihaly Csikszentmihalyi found that adolescents who cannot bear to be alone often stop enhancing creative talents. [ 12 ] Another proven benefit to time given in solitude is the development of the self. When a person spends time in solitude from others, they may experience changes to their self-concept. This can also help a person to form or discover their identity without any outside distractions. Solitude also provides time for contemplation, growth in personal spirituality, and self-examination. In these situations, loneliness can be avoided as long as the person in solitude knows that they have meaningful relations with others. [ 12 ] Negative effects have been observed in prisoners. The behavior of prisoners who spend extensive time in solitude may worsen. [ 13 ] Solitude can trigger physiological responses that increase health risks. [ 15 ] Negative effects of solitude may also depend on age. Elementary age school children who experience frequent solitude may react negatively. [ 16 ] This is largely because often, solitude at this age is not the child's choice. Solitude in elementary-age children may occur when they are unsure of how to interact socially, so they prefer to be alone, causing shyness or social rejection . While teenagers are more likely to feel lonely or unhappy when not around others, they are also more likely to have a more enjoyable experience with others if they have had time alone first. However, teenagers who frequently spend time alone do not have as good a global adjustment as those who balance their time of solitude with their time of socialization. [ 16 ] Solitude does not necessarily entail feelings of loneliness, and it may in fact be one's sole source of genuine pleasure for those who choose it with deliberate intent. Some individuals seek solitude for discovering a more meaningful and vital existence. For example, in religious contexts, some saints preferred silence, finding immense pleasure in their uniformity with God. Solitude is a state that can be positively modified utilizing it for prayer allowing to "be alone with ourselves and with God, to put ourselves in listening to His will, but also of what moves in our hearts, let purify our relationships; solitude and silence thus become spaces inhabited by God, and ability to recover ourselves and grow in humanity." [ 17 ] In psychology, introverted persons may require spending time alone to recharge, whereas those who are simply socially apathetic might find it a pleasurable setting in which to occupy oneself with solitary tasks. The Buddha attained enlightenment through uses of meditation, deprived of sensory input, bodily necessities, and external desires, including social interaction. The context of solitude is attainment of pleasure from within, but this does not necessitate complete detachment from the external world. 
This is well demonstrated in the writings of Edward Abbey with particular regard to Desert Solitaire where solitude focused only on isolation from other people allows for a more complete connection to the external world, as in the absence of human interaction the natural world itself takes on the role of the companion. In this context, the individual seeking solitude does so not strictly for personal gain or introspection, though this is often an unavoidable outcome, but instead in an attempt to gain an understanding of the natural world as entirely removed from the human perspective as possible, a state of mind much more readily attained in the complete absence of outside human presence. Isolation in the form of solitary confinement is a punishment or precaution used in many countries throughout the world for prisoners accused of serious crimes, those who may be at risk in the prison population, those who may commit suicide, or those unable to participate in the prison population due to sickness or injury. Research has found that solitary confinement does not deter inmates from committing further violence in prison. [ 18 ] Psychiatric institutions may institute full or partial isolation for certain patients, particularly the violent or subversive, in order to address their particular needs and to protect the rest of the recovering population from their influence.
https://en.wikipedia.org/wiki/Solitude
In the Gemara , the shamir ( Biblical Hebrew : שָׁמִיר , romanized: šāmir ) is a worm or a substance that had the power to cut through or disintegrate stone, iron and diamond. Solomon is said to have used it in the building of the first Temple in Jerusalem in place of cutting tools. For the construction of Solomon's Temple , which promoted peace, it was inappropriate to use tools that could also cause war and bloodshed. [ 2 ] Referenced throughout the Talmud and midrashim , the Shamir was reputed to have existed in the time of Moses as one of the ten wonders created on the eve of the first Shabbat just before God finished creation . [ a ] Moses reputedly used the Shamir to engrave the stones of the priestly breastplate of the High Priest of Israel . [ 4 ] King Solomon, aware of the existence of the Shamir but unaware of its location, commissioned a search that turned up a "grain of Shamir the size of a barleycorn." Solomon's artisans reputedly used the Shamir in the construction of the Temple. The material to be worked, whether stone, wood or metal, was affected by being "shown to the Shamir." Following this line of logic (anything that can be 'shown' something must have eyes to see), early Rabbinical scholars described the Shamir almost as a living being. Other early sources, however, describe it as a green stone. This is supported by contemporary scholars who believe that the Shamir was emery , a blue-green stone mined as an abrasive powder for thousands of years. The word emery comes from Koinē Greek : σμύρις , romanized: smúris , which likely shares the same root as the Semitic shamir . [ 5 ] For storage, the Shamir was meant to have been always wrapped in wool and stored in a container made of lead; any other vessel would burst and disintegrate under the Shamir's gaze. The Shamir was said to have been either lost or had lost its potency (along with the "dripping of the honeycomb") by the time of the destruction of the First Temple during the Siege of Jerusalem (587 BC) . [ 6 ] According to the Asmodeus legend from the Talmud , Tractate Gittin 68a-b, the location of the Shamir was told to King Solomon by Asmodeus, whom Solomon captured. Asmodeus was captured by Benaiah ben Jehoiada, [ 4 ] who captured the demon king by pouring wine into Asmodeus' well, making him drunk, and wrapping him in chains that were engraved with a sacred name of God. Once captured, Asmodeus is brought to Solomon in Jerusalem, where Asmodeus informs Solomon that the Shamir was not given to him, but to Rahab , the angel of the sea. [ 7 ] [ 8 ] The angel of the sea had then given the Shamir to a bird, identified by the Talmud as the Hoopoe ( Hebrew : דּוּכִיפַת , romanized : Duk̲ip̄at̲ ), who had been using the Shamir to split rocks to build its nests. The Shamir is then retrieved by placing glass over the Hoopoe's nest, forcing the bird to use the Shamir to break through the glass. [ 9 ] King Solomon also used the Shamir to engrave gemstones . He also used the blood of the Shamir worm to make carved jewels with a mystical seal or design. According to an interview with George Frederick Kunz , an expert in gemstone and jewelry lore, this led to the belief that gemstones so engraved would have magical virtues, and they often also ended up with their own powers or guardian angel associated with either the gem or the precisely engraved gemstones. 
[ 10 ] The Quran mentions a creature thought to be the Shamir [ 11 ] when pointing out the ignorance of the jinn who worked for Solomon concerning the occult, and emphasizing that all knowledge rests only with God: "And when We decreed death for him, nothing showed his death to them save a creeping creature of the earth which gnawed away his staff. And when he fell, the jinn saw clearly how, if they had known the Unseen, they would not have continued in despised toil." ( surat Saba' 34:14 ) [ 12 ] According to commentators such as ibn Abbas , when Solomon died his body remained leaning on his staff for a long time, nearly a year, until "a creature of the earth, which was a kind of worm" gnawed through the staff, weakening it until the body fell to the ground. [ 13 ] Only then did the jinn realize that he had died long before; until that moment they had kept working hard, thinking he was supervising them. It also became clear to humans who divined, engaged in occult activities or spirit-consulting, or worshipped the jinn that these beings do not possess knowledge of the occult. [ 13 ] The vajrakita is similarly described as a kind of worm able to gnaw through stone, especially shaligrams .
https://en.wikipedia.org/wiki/Solomon's_shamir
Solomonoff's theory of inductive inference proves that, under its common sense assumptions (axioms), the best possible scientific model is the shortest algorithm that generates the empirical data under consideration. In addition to the choice of data, other assumptions are that, to avoid the post-hoc fallacy, the programming language must be chosen prior to the data [ 1 ] and that the environment being observed is generated by an unknown algorithm. This is also called a theory of induction . Due to its basis in the dynamical ( state-space model ) character of Algorithmic Information Theory , it encompasses statistical as well as dynamical information criteria for model selection . It was introduced by Ray Solomonoff , based on probability theory and theoretical computer science . [ 2 ] [ 3 ] [ 4 ] In essence, Solomonoff's induction derives the posterior probability of any computable theory, given a sequence of observed data. This posterior probability is derived from Bayes' rule and some universal prior, that is, a prior that assigns a positive probability to any computable theory. Solomonoff proved that this induction is incomputable (or more precisely, lower semi-computable), but noted that "this incomputability is of a very benign kind", and that it "in no way inhibits its use for practical prediction" (as it can be approximated from below more accurately with more computational resources). [ 3 ] It is only "incomputable" in the benign sense that no scientific consensus is able to prove that the best current scientific theory is the best of all possible theories. However, Solomonoff's theory does provide an objective criterion for deciding among the current scientific theories explaining a given set of observations. Solomonoff's induction naturally formalizes Occam's razor [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] by assigning larger prior credences to theories that require a shorter algorithmic description. The theory is based in philosophical foundations, and was founded by Ray Solomonoff around 1960. [ 10 ] It is a mathematically formalized combination of Occam's razor [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] and the Principle of Multiple Explanations . [ 11 ] All computable theories which perfectly describe previous observations are used to calculate the probability of the next observation, with more weight put on the shorter computable theories. Marcus Hutter 's universal artificial intelligence builds upon this to calculate the expected value of an action. Solomonoff's induction has been argued to be the computational formalization of pure Bayesianism. [ 4 ] To understand this, recall that Bayesianism derives the posterior probability $\mathbb{P}[T|D]$ of a theory $T$ given data $D$ by applying Bayes' rule, which yields

$$\mathbb{P}[T|D] = \frac{\mathbb{P}[D|T]\,\mathbb{P}[T]}{\mathbb{P}[D|T]\,\mathbb{P}[T] + \sum_{A}\mathbb{P}[D|A]\,\mathbb{P}[A]},$$

where the theories $A$ are alternatives to theory $T$. For this equation to make sense, the quantities $\mathbb{P}[D|T]$ and $\mathbb{P}[D|A]$ must be well-defined for all theories $T$ and $A$. In other words, any theory must define a probability distribution over observable data $D$. Solomonoff's induction essentially boils down to demanding that all such probability distributions be computable . Interestingly, the set of computable probability distributions is a subset of the set of all programs, which is countable . Similarly, the sets of observable data considered by Solomonoff were finite. Without loss of generality , we can thus consider that any observable data is a finite bit string . As a result, Solomonoff's induction can be defined by only invoking discrete probability distributions. Solomonoff's induction then allows one to make probabilistic predictions of future data $F$ by simply obeying the laws of probability. Namely, we have

$$\mathbb{P}[F|D] = \mathbb{E}_{T}\big[\mathbb{P}[F|T,D]\big] = \sum_{T}\mathbb{P}[F|T,D]\,\mathbb{P}[T|D].$$

This quantity can be interpreted as the average of the predictions $\mathbb{P}[F|T,D]$ of all theories $T$ given past data $D$, weighted by their posterior credences $\mathbb{P}[T|D]$. The proof of the "razor" is based on the known mathematical properties of a probability distribution over a countable set . These properties are relevant because the infinite set of all programs is a denumerable set. The sum $S$ of the probabilities of all programs must be exactly equal to one (as per the definition of probability ), so the probabilities must roughly decrease as we enumerate the infinite set of all programs; otherwise $S$ would be strictly greater than one. To be more precise, for every $\epsilon > 0$ there is some length $l$ such that the probability of all programs longer than $l$ is at most $\epsilon$. This does not, however, preclude very long programs from having very high probability. Fundamental ingredients of the theory are the concepts of algorithmic probability and Kolmogorov complexity . The universal prior probability of any prefix p of a computable sequence x is the sum of the probabilities of all programs (for a universal computer ) that compute something starting with p . Given some p and any computable but unknown probability distribution from which x is sampled, the universal prior and Bayes' theorem can be used to predict the yet unseen parts of x in optimal fashion. The remarkable property of Solomonoff's induction is its completeness. In essence, the completeness theorem guarantees that the expected cumulative errors made by the predictions based on Solomonoff's induction are upper-bounded by the Kolmogorov complexity of the (stochastic) data generating process. The errors can be measured using the Kullback–Leibler divergence or the square of the difference between the induction's prediction and the probability assigned by the (stochastic) data generating process. Unfortunately, Solomonoff also proved that Solomonoff's induction is uncomputable. In fact, he showed that computability and completeness are mutually exclusive: any complete theory must be uncomputable. The proof of this is derived from a game between the induction and the environment. Essentially, any computable induction can be tricked by a computable environment, by choosing the computable environment that negates the computable induction's prediction. This fact can be regarded as an instance of the no free lunch theorem . Though Solomonoff's inductive inference is not computable , several AIXI -derived algorithms approximate it in order to make it run on a modern computer. The more computing power they are given, the closer their predictions are to the predictions of inductive inference (their mathematical limit is Solomonoff's inductive inference). [ 12 ] [ 13 ] [ 14 ] Another direction of inductive inference is based on E. Mark Gold 's model of learning in the limit from 1967 and has since developed more and more models of learning. [ 15 ] The general scenario is the following: given a class S of computable functions, is there a learner (that is, a recursive functional) which, for any input of the form ( f (0), f (1), ..., f ( n )), outputs a hypothesis (an index e with respect to a previously agreed-on acceptable numbering of all computable functions; the indexed function may be required to be consistent with the given values of f )? A learner M learns a function f if almost all its hypotheses are the same index e , which generates the function f ; M learns S if M learns every f in S . Basic results are that all recursively enumerable classes of functions are learnable while the class REC of all computable functions is not learnable. [ citation needed ] Many related models have been considered, and the learning of classes of recursively enumerable sets from positive data has been a topic studied from Gold's pioneering paper in 1967 onwards. A far-reaching extension of Gold's approach is Schmidhuber's theory of generalized Kolmogorov complexities, [ 16 ] which are kinds of super-recursive algorithms .
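A minimal sketch may make the length-weighted mixture above concrete. The toy below is a finite, computable caricature of Solomonoff's rule, not the actual (incomputable) universal mixture: the candidate "theories" are a handful of hand-picked bit-string generators with illustrative description lengths, each given a 2^(-length) prior, and only generators that reproduce the observed prefix contribute to the prediction of the next bit.

```python
# Toy illustration of Solomonoff-style prediction (hypothetical generators and
# lengths, not a real universal machine): weight each candidate theory by
# 2^(-description length) and average the predictions of the survivors.

from fractions import Fraction

def all_zeros(n):    return [0] * n
def all_ones(n):     return [1] * n
def alternating(n):  return [i % 2 for i in range(n)]
def period_three(n): return [[0, 0, 1][i % 3] for i in range(n)]

# (illustrative description length in bits, generator)
THEORIES = [(3, all_zeros), (3, all_ones), (4, alternating), (6, period_three)]

def predict_next(observed):
    """Posterior-weighted probability that the next bit is 1."""
    weights = {0: Fraction(0), 1: Fraction(0)}
    for length, gen in THEORIES:
        prefix = gen(len(observed) + 1)
        if prefix[:-1] == list(observed):                 # theory explains the data
            weights[prefix[-1]] += Fraction(1, 2 ** length)  # Occam-style prior
    total = weights[0] + weights[1]
    return weights[1] / total if total else Fraction(1, 2)

print(predict_next([0, 1, 0, 1, 0]))  # only the alternating theory survives -> 1
print(predict_next([0, 0, 0, 0]))     # only the all-zeros theory survives -> 0
```

Shorter surviving generators dominate the weighted vote, which is exactly the Occam-style weighting described above.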
https://en.wikipedia.org/wiki/Solomonoff's_theory_of_inductive_inference
In quantum information and computation, the Solovay–Kitaev theorem says that if a set of single- qubit quantum gates generates a dense subgroup of SU(2) , then that set can be used to approximate any desired quantum gate with a short sequence of gates that can also be found efficiently. This theorem is considered one of the most significant results in the field of quantum computation and was first announced by Robert M. Solovay in 1995 and independently proven by Alexei Kitaev in 1997. [ 1 ] [ 2 ] Michael Nielsen and Christopher M. Dawson have noted its importance in the field. [ 3 ] A consequence of this theorem is that a quantum circuit of $m$ constant-qubit gates can be approximated to $\varepsilon$ error (in operator norm ) by a quantum circuit of $O(m\log^{c}(m/\varepsilon))$ gates from a desired finite universal gate set (where $c$ is a constant). [ 4 ] By comparison, just knowing that a gate set is universal only implies that constant-qubit gates can be approximated by a finite circuit from the gate set, with no bound on its length. So, the Solovay–Kitaev theorem shows that this approximation can be made surprisingly efficient , thereby justifying that quantum computers need only implement a finite number of gates to gain the full power of quantum computation. Let $\mathcal{G}$ be a finite set of elements in SU(2) containing its own inverses (so $g\in\mathcal{G}$ implies $g^{-1}\in\mathcal{G}$) and such that the group $\langle\mathcal{G}\rangle$ they generate is dense in SU(2). Consider some $\varepsilon > 0$. Then there is a constant $c$ such that for any $U\in\mathrm{SU}(2)$, there is a sequence $S$ of gates from $\mathcal{G}$ of length $O(\log^{c}(1/\varepsilon))$ such that $\|S-U\|\leq\varepsilon$. That is, $S$ approximates $U$ to operator norm error $\varepsilon$. [ 3 ] Furthermore, there is an efficient algorithm to find such a sequence. More generally, the theorem also holds in SU(d) for any fixed $d$. This theorem also holds without the assumption that $\mathcal{G}$ contains its own inverses, although presently with a larger value of $c$ that also increases with the dimension $d$. [ 5 ] The constant $c$ can be made to be $\log_{(1+\sqrt{5})/2}2+\delta = 1.44042\ldots+\delta$ for any fixed $\delta>0$. [ 6 ] However, there exist particular gate sets for which we can take $c=1$, which makes the length of the gate sequence optimal up to a constant factor. [ 7 ] Every known proof of the fully general Solovay–Kitaev theorem proceeds by recursively constructing a gate sequence giving increasingly good approximations to $U\in\operatorname{SU}(2)$. [ 3 ] Suppose we have an approximation $U_{n-1}\in\operatorname{SU}(2)$ such that $\|U-U_{n-1}\|\leq\varepsilon_{n-1}$.
Our goal is to find a sequence of gates approximating U U n − 1 − 1 {\displaystyle UU_{n-1}^{-1}} to ε n {\displaystyle \varepsilon _{n}} error, for ε n < ε n − 1 {\displaystyle \varepsilon _{n}<\varepsilon _{n-1}} . By concatenating this sequence of gates with U n − 1 {\displaystyle U_{n-1}} , we get a sequence of gates U n {\displaystyle U_{n}} such that ‖ U − U n ‖ ≤ ε n {\displaystyle \|U-U_{n}\|\leq \varepsilon _{n}} . The main idea in the original argument of Solovay and Kitaev is that commutators of elements close to the identity can be approximated "better-than-expected". Specifically, for V , W ∈ SU ⁡ ( 2 ) {\displaystyle V,W\in \operatorname {SU} (2)} satisfying ‖ V − I ‖ ≤ δ 1 {\displaystyle \|V-I\|\leq \delta _{1}} and ‖ W − I ‖ ≤ δ 1 {\displaystyle \|W-I\|\leq \delta _{1}} and approximations V ~ , W ~ ∈ SU ⁡ ( 2 ) {\displaystyle {\tilde {V}},{\tilde {W}}\in \operatorname {SU} (2)} satisfying ‖ V − V ~ ‖ ≤ δ 2 {\displaystyle \|V-{\tilde {V}}\|\leq \delta _{2}} and ‖ W − W ~ ‖ ≤ δ 2 {\displaystyle \|W-{\tilde {W}}\|\leq \delta _{2}} , then where the big O notation hides higher-order terms. One can naively bound the above expression to be O ( δ 2 ) {\displaystyle O(\delta _{2})} , but the group commutator structure creates substantial error cancellation. We can use this observation to approximate U U n − 1 − 1 {\displaystyle UU_{n-1}^{-1}} as a group commutator V n − 1 W n − 1 V n − 1 − 1 W n − 1 − 1 {\displaystyle V_{n-1}W_{n-1}V_{n-1}^{-1}W_{n-1}^{-1}} . This can be done such that both V n − 1 {\displaystyle V_{n-1}} and W n − 1 {\displaystyle W_{n-1}} are close to the identity (since ‖ U U n − 1 − 1 − I ‖ ≤ ε n − 1 {\displaystyle \|UU_{n-1}^{-1}-I\|\leq \varepsilon _{n-1}} ). So, if we recursively compute gate sequences approximating V n − 1 {\displaystyle V_{n-1}} and W n − 1 {\displaystyle W_{n-1}} to ε n − 1 {\displaystyle \varepsilon _{n-1}} error, we get a gate sequence approximating U U n − 1 − 1 {\displaystyle UU_{n-1}^{-1}} to the desired better precision ε n {\displaystyle \varepsilon _{n}} with ε n {\displaystyle \varepsilon _{n}} . We can get a base case approximation with constant ε 0 {\displaystyle \varepsilon _{0}} with an exhaustive search of bounded-length gate sequences. Let us choose the initial value ε 0 {\displaystyle \varepsilon _{0}} so that ε 0 < ε ′ {\displaystyle \varepsilon _{0}<\varepsilon '} to be able to apply the iterated “shrinking” lemma. In addition we want s ε 0 < 1 {\displaystyle s\varepsilon _{0}<1} to make sure that ε k {\displaystyle \varepsilon _{k}} decreases as we increase k {\displaystyle k} . Moreover, we also make sure that ε 0 {\displaystyle \varepsilon _{0}} is small enough so that ε k 2 < ε k + 1 {\displaystyle \varepsilon _{k}^{2}<\varepsilon _{k+1}} . Since ⟨ G ⟩ {\displaystyle \langle G\rangle } is dense in SU ⁡ ( 2 ) {\displaystyle \operatorname {SU} (2)} , we can choose l 0 {\displaystyle l_{0}} large enough [ 8 ] so that G l 0 {\displaystyle G^{l_{0}}} is an ε 0 2 {\displaystyle \varepsilon _{0}^{2}} -net for SU ⁡ ( 2 ) {\displaystyle \operatorname {SU} (2)} (and hence for S ε 0 {\displaystyle \operatorname {S} _{\varepsilon _{0}}} , as well), no matter how small ε 0 {\displaystyle \varepsilon _{0}} is. Thus, given any U ∈ SU ⁡ ( 2 ) {\displaystyle U\in \operatorname {SU} (2)} , we can choose U 0 ∈ G l 0 {\displaystyle U_{0}\in G^{l_{0}}} such that | | U − U 0 | | < ε 0 2 {\displaystyle ||U-U_{0}||<\varepsilon _{0}^{2}} . 
Let Δ := U U 0 + {\displaystyle \Delta :=UU_{0}^{+}} be the “difference” of U {\displaystyle U} and U 0 {\displaystyle U_{0}} . Then Hence, Δ 1 ∈ S ε 1 {\displaystyle \Delta _{1}\in \operatorname {S_{\varepsilon _{1}}} } . By invoking the iterated "shrinking" lemma with k = 1 {\displaystyle k=1} , there exists U 1 ∈ G l 1 {\displaystyle U_{1}\in G^{l_{1}}} such that Similarly let Δ 2 := Δ 1 U 1 + = U U 0 + U 1 + {\displaystyle \Delta _{2}:=\Delta _{1}U_{1}^{+}=UU_{0}^{+}U_{1}^{+}} . Then Thus, Δ 2 ∈ S ε 2 {\displaystyle \Delta _{2}\in \operatorname {S_{\varepsilon _{2}}} } and we can invoke the iterated "shrinking" lemma (with k = 2 {\displaystyle k=2} this time) to get U 2 ∈ G l 2 {\displaystyle U_{2}\in G^{l_{2}}} such that ‖ Δ 2 − U 2 ‖ = ‖ U U 0 + U 1 + − U 2 ‖ = ‖ U − U 2 U 1 U 0 ‖ < ε 2 2 . {\displaystyle \|\Delta _{2}-U_{2}\|=\|UU_{0}^{+}U_{1}^{+}-U_{2}\|=\|U-U_{2}U_{1}U_{0}\|<\varepsilon _{2}^{2}.} If we continue in this way, after k steps we get U k ∈ G l k {\displaystyle U_{k}\in \operatorname {G} ^{l_{k}}} such that Thus, we have obtained a sequence of gates that approximates U {\displaystyle U} to accuracy ε k 2 {\displaystyle \varepsilon _{k}^{2}} . To determine the value of k {\displaystyle k} , we set ε k 2 = ( ( s ε 0 ) ( 3 / 2 ) k / s ) 2 = ε {\displaystyle \varepsilon _{k}^{2}=\left((s\varepsilon _{0})^{(3/2)^{k}}/s\right)^{2}=\varepsilon } and solve for k: Now we can always choose ε 0 {\displaystyle \varepsilon _{0}} slightly smaller so that the obtained value of k {\displaystyle k} is an integer. [ 9 ] Let c = log 5 / log ( 3 / 2 ) ≈ 3.97 {\displaystyle c={\text{log}}5/{\text{log}}(3/2)\approx 3.97} so that 5 k = ( 3 2 ) k c {\displaystyle 5^{k}=\left({\frac {3}{2}}\right)^{kc}} . Then Hence for any U ∈ SU ⁡ ( 2 ) {\displaystyle U\in \operatorname {SU} (2)} there is a sequence of L = O ( log c ( 1 / ε ) ) {\displaystyle L=O({\text{log}}^{c}(1/\varepsilon ))} gates that approximates U {\displaystyle U} to accuracy ε {\displaystyle \varepsilon } . Here the main ideas that are used in the SK algorithm have been presented. The SK algorithm may be expressed in nine lines of pseudocode. Each of these lines are explained in detail below, but present it here in its entirety both for the reader's reference, and to stress the conceptual simplicity of the algorithm: Let us examine each of these lines in detail. The first line: indicates that the algorithm is a function with two inputs: an arbitrary single-qubit quantum gate, U {\displaystyle U} , which we desire to approximate, and a non-negative integer, n {\displaystyle n} , which controls the accuracy of the approximation. The function returns a sequence of instructions which approximates U {\displaystyle U} to an accuracy ε n {\displaystyle \varepsilon _{n}} , where ε n {\displaystyle \varepsilon _{n}} is a decreasing function of n {\displaystyle n} , so that as n {\displaystyle n} gets larger, the accuracy gets better, with ε n → 0 {\displaystyle \varepsilon _{n}\to 0} as n → ∞ {\displaystyle n\to \infty } . ε n {\displaystyle \varepsilon _{n}} is described in detail below. The Solovay-Kitaev function is recursive, so that to obtain an ε n {\displaystyle \varepsilon _{n}} -approximation to U {\displaystyle U} , it will call itself to obtain ε n − 1 {\displaystyle \varepsilon _{n-1}} -approximations to certain unitaries. 
The recursion terminates at n = 0 {\displaystyle n=0} , beyond which no further recursive calls are made: In order to implement this step it is assumed that a preprocessing stage has been completed which allows one to find a basic ε 0 {\displaystyle \varepsilon _{0}} -approximation to arbitrary U ∈ SU ⁡ ( 2 ) {\displaystyle U\in \operatorname {SU} (2)} . Since ε 0 {\displaystyle \varepsilon _{0}} is a constant, in principle this preprocessing stage may be accomplished simply by enumerating and storing a large number of instruction sequences from G {\displaystyle G} , say up to some sufficiently large (but fixed) length l 0 {\displaystyle l_{0}} , and then providing a lookup routine which, given U {\displaystyle U} , returns the closest sequence. At higher levels of recursion, to find an ε n {\displaystyle \varepsilon _{n}} -approximation to U {\displaystyle U} , one begins by finding an ε n − 1 {\displaystyle \varepsilon _{n-1}} -approximation to U {\displaystyle U} : U n − 1 {\displaystyle U_{n-1}} is used as a step towards finding an improved approximation to U {\displaystyle U} . Defining Δ {\displaystyle \Delta } ≡ U U n − 1 + {\displaystyle UU_{n-1}^{+}} , the next three steps of the algorithm aim to find an ε n {\displaystyle \varepsilon _{n}} -approximation to Δ {\displaystyle \Delta } , where ε n {\displaystyle \varepsilon _{n}} is some improved level of accuracy, i.e., ε n < ε n − 1 {\displaystyle \varepsilon _{n}<\varepsilon _{n-1}} . Finding such an approximation also enables us to obtain an ε n {\displaystyle \varepsilon _{n}} -approximation to U {\displaystyle U} , simply by concatenating exact sequence of instructions for U n − 1 {\displaystyle U_{n-1}} with ε n {\displaystyle \varepsilon _{n}} -approximating sequence for Δ {\displaystyle \Delta } . How do we find such an approximation to? First, observe that Δ {\displaystyle \Delta } is within a distance ε n − 1 {\displaystyle \varepsilon _{n-1}} of the identity. This follows from the definition of Δ {\displaystyle \Delta } and the fact that U n − 1 {\displaystyle U_{n-1}} is within a distance ε n − 1 {\displaystyle \varepsilon _{n-1}} of U {\displaystyle U} . Second, decompose Δ {\displaystyle \Delta } as a group commutator Δ = V W V + W + {\displaystyle \Delta =VWV^{+}W^{+}} of unitary gates V {\displaystyle V} and W {\displaystyle W} . For any Δ {\displaystyle \Delta } it turns out that this is not obvious and that there is always an infinite set of choices for V {\displaystyle V} and W {\displaystyle W} such that Δ = V W V + W + {\displaystyle \Delta =VWV^{+}W^{+}} . For our purposes it is important that we find V {\displaystyle V} and W {\displaystyle W} such that d ( I , V ) , d ( I , W ) < c g c ε n − 1 {\displaystyle d(I,V),d(I,W)<c_{gc}{\sqrt {\varepsilon _{n-1}}}} for some constant c g c {\displaystyle c_{gc}} . We call such a decomposition a balanced group commutator . For practical implementations we will see below that it is useful to have c g c {\displaystyle c_{gc}} as small as possible. The next step is to find instruction sequences which are ε n − 1 {\displaystyle \varepsilon _{n-1}} -approximations to V {\displaystyle V} and W {\displaystyle W} : The group commutator of V n − 1 {\displaystyle V_{n-1}} and W n {\displaystyle W_{n}} turns out to be an ε n − 1 {\displaystyle \varepsilon _{n-1}} ≡ c approx ε n − 1 3 / 2 {\displaystyle c_{\text{approx}}\varepsilon _{n-1}^{3/2}} -approximation to Δ {\displaystyle \Delta } , for some small constant c approx {\displaystyle c_{\text{approx}}} . 
Provided ε n − 1 < 1 / c approx 2 {\displaystyle \varepsilon _{n-1}<1/c_{\text{approx}}^{2}} , we see that ε n < ε n − 1 {\displaystyle \varepsilon _{n}<\varepsilon _{n-1}} , and this procedure therefore provides an improved approximation to Δ {\displaystyle \Delta } , and thus to U {\displaystyle U} . The constant c approx {\displaystyle c_{\text{approx}}} is important as it determines the precision ε 0 {\displaystyle \varepsilon _{0}} required of the initial approximations. In particular, we see that for this construction to guarantee that ε 0 > ε 1 > . . . {\displaystyle \varepsilon _{0}>\varepsilon _{1}>...} we must have ε 0 < 1 / c approx 2 {\displaystyle \varepsilon _{0}<1/c_{\text{approx}}^{2}} . The algorithm concludes by returning the sequences approximating the group commutator, as well as U n − 1 {\displaystyle U_{n-1}} : Summing up, the function Solovay-Kitaev(U, n) returns a sequence which provides an ε n = c approx ε n − 1 3 / 2 {\displaystyle \varepsilon _{n}=c_{\text{approx}}\varepsilon _{n-1}^{3/2}} -approximation to the desired unitary U {\displaystyle U} . The five constituents in this sequence are all obtained by calling the function at the n − 1 {\displaystyle n-1} th level of recursion. [ 10 ]
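The nine-line pseudocode referred to above did not survive in this copy of the article. The following Python sketch reconstructs the recursion from the line-by-line description (in the style of Dawson and Nielsen); basic_approximation and gc_decompose are deliberately left abstract, standing in for the preprocessing lookup over short gate sequences and the balanced group-commutator decomposition discussed in the text, so this is an outline of the control flow rather than a complete implementation of those subroutines.

```python
import numpy as np

def dagger(u):
    """Hermitian conjugate of a unitary matrix."""
    return u.conj().T

def basic_approximation(U):
    """Level-0 step: look up the closest stored sequence of length up to l_0
    (the preprocessing stage described in the text). Left abstract here."""
    raise NotImplementedError  # depends on the chosen gate set G

def gc_decompose(Delta):
    """Return V, W close to the identity with Delta = V W V^dagger W^dagger
    (a balanced group-commutator decomposition). Left abstract here."""
    raise NotImplementedError

def solovay_kitaev(U, n):
    """Return a gate sequence (represented by its matrix) approximating U to eps_n."""
    if n == 0:
        return basic_approximation(U)
    U_prev = solovay_kitaev(U, n - 1)      # eps_{n-1}-approximation to U
    Delta = U @ dagger(U_prev)             # residual error, within eps_{n-1} of I
    V, W = gc_decompose(Delta)
    V_prev = solovay_kitaev(V, n - 1)
    W_prev = solovay_kitaev(W, n - 1)
    # The group commutator of the two approximations cancels error to
    # order eps_{n-1}^{3/2}, as discussed above.
    return V_prev @ W_prev @ dagger(V_prev) @ dagger(W_prev) @ U_prev
```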
https://en.wikipedia.org/wiki/Solovay–Kitaev_theorem
The Solovay–Strassen primality test , developed by Robert M. Solovay and Volker Strassen in 1977, is a probabilistic primality test to determine if a number is composite or probably prime . The idea behind the test was discovered by M. M. Artjuhov in 1967 [ 1 ] (see Theorem E in the paper). This test has been largely superseded by the Baillie–PSW primality test and the Miller–Rabin primality test , but has great historical importance in showing the practical feasibility of the RSA cryptosystem . Euler proved [ 2 ] that for any odd prime number p and any integer a , where ( a p ) {\displaystyle \left({\tfrac {a}{p}}\right)} is the Legendre symbol . The Jacobi symbol is a generalisation of the Legendre symbol to ( a n ) {\displaystyle \left({\tfrac {a}{n}}\right)} , where n can be any odd integer. The Jacobi symbol can be computed in time O ((log n )²) using Jacobi's generalization of the law of quadratic reciprocity . Given an odd number n one can contemplate whether or not the congruence holds for various values of the "base" a , given that a is relatively prime to n . If n is prime then this congruence is true for all a . So if we pick values of a at random and test the congruence, then as soon as we find an a which doesn't fit the congruence we know that n is not prime (but this does not tell us a nontrivial factorization of n ). This base a is called an Euler witness for n ; it is a witness for the compositeness of n . The base a is called an Euler liar for n if the congruence is true while n is composite. For every composite odd n , at least half of all bases are (Euler) witnesses as the set of Euler liars is a proper subgroup of ( Z / n Z ) ∗ {\displaystyle (\mathbb {Z} /n\mathbb {Z} )^{*}} . For example, for n = 65 {\displaystyle n=65} , the set of Euler liars has order 8 and = { 1 , 8 , 14 , 18 , 47 , 51 , 57 , 64 } {\displaystyle =\{1,8,14,18,47,51,57,64\}} , and ( Z / n Z ) ∗ {\displaystyle (\mathbb {Z} /n\mathbb {Z} )^{*}} has order 48. This contrasts with the Fermat primality test , for which the proportion of witnesses may be much smaller. Therefore, there are no (odd) composite n without many witnesses, unlike the case of Carmichael numbers for Fermat's test. Suppose we wish to determine if n = 221 is prime. We write ( n −1)/2=110. We randomly select an a (greater than 1 and smaller than n ): 47. Using an efficient method for raising a number to a power (mod n ) such as binary exponentiation , we compute: This gives that, either 221 is prime, or 47 is an Euler liar for 221. We try another random a , this time choosing a = 2 : Hence 2 is an Euler witness for the compositeness of 221, and 47 was in fact an Euler liar. Note that this tells us nothing about the prime factors of 221, which are actually 13 and 17. The algorithm can be written in pseudocode as follows: Using fast algorithms for modular exponentiation , the running time of this algorithm is O( k ·log 3 n ), where k is the number of different values of a we test. It is possible for the algorithm to return an incorrect answer. If the input n is indeed prime, then the output will always correctly be probably prime . However, if the input n is composite then it is possible for the output to be incorrectly probably prime . The number n is then called an Euler–Jacobi pseudoprime . When n is odd and composite, at least half of all a with gcd( a , n ) = 1 are Euler witnesses. We can prove this as follows: let { a 1 , a 2 , ..., a m } be the Euler liars and a an Euler witness. 
Then, for i = 1,2,..., m : Because the following holds: now we know that This gives that each a i gives a number a · a i , which is also an Euler witness. So each Euler liar gives an Euler witness and so the number of Euler witnesses is larger or equal to the number of Euler liars. Therefore, when n is composite, at least half of all a with gcd( a , n ) = 1 is an Euler witness. Hence, the probability of failure is at most 2 − k (compare this with the probability of failure for the Miller–Rabin primality test , which is at most 4 − k ). For purposes of cryptography the more bases a we test, i.e. if we pick a sufficiently large value of k , the better the accuracy of test. Hence the chance of the algorithm failing in this way is so small that the (pseudo) prime is used in practice in cryptographic applications, but for applications for which it is important to have a prime, a test like ECPP or the Pocklington primality test [ 3 ] should be used which proves primality. The bound 1/2 on the error probability of a single round of the Solovay–Strassen test holds for any input n , but those numbers n for which the bound is (approximately) attained are extremely rare. On the average, the error probability of the algorithm is significantly smaller: it is less than for k rounds of the test, applied to uniformly random n ≤ x . [ 4 ] [ 5 ] The same bound also applies to the related problem of what is the conditional probability of n being composite for a random number n ≤ x which has been declared prime in k rounds of the test. The Solovay–Strassen algorithm shows that the decision problem COMPOSITE is in the complexity class RP . [ 6 ]
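The pseudocode block referenced above was lost in extraction; a compact Python version of the test as described (the Jacobi symbol computed via quadratic reciprocity, compared against a^((n-1)/2) mod n for k random bases) might look like the following sketch. The values in the usage comments reproduce the n = 221 worked example from the text.

```python
import random

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, via quadratic reciprocity."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:           # pull out factors of 2
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                 # reciprocity step
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def solovay_strassen(n, k=20):
    """Return False if n is composite, True if n is probably prime."""
    if n < 2 or n % 2 == 0:
        return n == 2
    for _ in range(k):
        a = random.randrange(2, n)
        x = jacobi(a, n)
        if x == 0 or pow(a, (n - 1) // 2, n) != x % n:
            return False            # a is an Euler witness: n is composite
    return True                     # probably prime (error probability <= 2**-k)

# The worked example from the text: n = 221 = 13 * 17, (n - 1)/2 = 110.
print(jacobi(47, 221), pow(47, 110, 221))  # -1, 220 -> congruent: 47 is an Euler liar
print(jacobi(2, 221), pow(2, 110, 221))    # -1, 30  -> not congruent: 2 is an Euler witness
print(solovay_strassen(221))               # False (composite), with overwhelming probability
print(solovay_strassen(104729))            # True: 104729 is prime
```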
https://en.wikipedia.org/wiki/Solovay–Strassen_primality_test
Solstar Space Co. , also known as Solstar , is an American company that provides commercial wireless internet services to space travelers and Internet of Things in space. It also provides a two-way internet link connecting people on earth to technology in space. [ 2 ] Based out of Santa Fe , New Mexico , the company was founded in March 2017. Solstar was founded by M. Brian Barnett in March 2017, with Michael Potter and Mark Matossian as co-founders. [ 3 ] [ 4 ] [ 5 ] Prior to this, Barnett had developed an initial design of a communication system which was used to successfully transmit the first-ever commercial text message from earth to space in November 2013, [ 3 ] [ 6 ] with students from Albuquerque sending 16 messages to a device aboard a UP Aerospace rocket [ 7 ] [ 8 ] launched from Spaceport America . [ 3 ] [ 9 ] In 2017, Solstar received a Phase I small business contract with NASA to develop a preliminary design for a commercial router on the International Space Station , under the Small Business Innovation Research (SBIR) program. The device is intended for low Earth orbit service [ 4 ] and was named the Slayton Space Communicator (SC-Slayton) after one of Mercury astronauts Deke Slayton who was NASA's first Chief of the Astronaut Office . [ 10 ] [ 11 ] [ 9 ] The company also signed a Space Act Agreement with NASA to test WiFi technologies in space. [ 12 ] [ 13 ] [ 14 ] In April 2018, Solstar tested the Schmitt Space Communicator SC-1x, a three-pound device, in a Blue Origin capsule on a New Shepard rocket [ 15 ] which was launched from the Blue Origin's launch facility near Van Horn, Texas , [ 7 ] [ 16 ] and reached a height of 66 miles. [ 17 ] The test was successful, with the founder Barnett using the on-flight internet connection to send out a tweet . The project's US$ 2 million cost was partly funded by NASA as part of its Flight Opportunities program. [ 18 ] [ 19 ] [ 20 ] The device is named after Harrison Schmitt , one of the last men to walk on the Moon and Solstar's adviser. [ 18 ] [ 21 ] It conducted a second successful test in July 2018, with the flight reaching a peak height of 73.8 miles above sea level. [ 22 ] [ 23 ] [ 24 ] The device was accepted to the Smithsonian National Air and Space Museum's collection. [ 12 ] [ 14 ] [ 25 ] The April 2018 test footage was featured in a short documentary, The Digital Nomad and the Scientist , detailing the first commercial WiFi service in space. The film was directed by Maclovia Martel and Kristina Korsholm with Michael Potter as the executive producer. [ 26 ] The documentary was selected for the Independent Filmmakers Showcase (May 2019) [ 27 ] in Los Angeles and got shortlisted to the Ekko Shortlist ( Denmark , 2020). [ 28 ] In June 2018, Solstar sought Securities and Exchange Commission 's approval to raise investment capital through the crowdfunding platform Wefunder . [ 18 ] [ 29 ] The astronaut Charles D. Walker , who flew on three Space Shuttle flights, joined Solstar as an adviser. [ 12 ] By November that year, the company had raised over US$ 200,000 through Wefunder and US$ 300,000 from other investors. [ 10 ] The Wefunder round closed in January 2019 with US$ 331,460 raised from a total of 519 investors. [ 12 ]
https://en.wikipedia.org/wiki/Solstar
A solstice is the time when the Sun reaches its most northerly or southerly excursion relative to the celestial equator on the celestial sphere . Two solstices occur annually, around 20–22 June and 20–22 December. In many countries, the seasons of the year are defined by reference to the solstices and the equinoxes . The term solstice can also be used in a broader sense, as the day when this occurs. For locations not too close to the equator or the poles, the dates with the longest and shortest periods of daylight are the summer and winter solstices, respectively. Terms with no ambiguity as to which hemisphere is the context are " June solstice " and " December solstice ", referring to the months in which they take place every year. [ 7 ] The word solstice is derived from the Latin sol ("sun") and sistere ("to stand still"), because at the solstices, the Sun's declination appears to "stand still"; that is, the seasonal movement of the Sun's daily path (as seen from Earth ) pauses at a northern or southern limit before reversing direction. [ citation needed ] Solstice first entered into English in the Middle English period. [ 8 ] An older term in English is its calque sunstead ( Old English : sunstede ), which became rare after the 17th century. Sunstead is cognate with other terms with the same meaning in other Germanic language such as Old Norse : sólstaðr and Middle High German : sunnenstat . [ 9 ] A similar English calque of the Latin term is sunstay which was first used in the 16th century and is now also rare. [ 10 ] For an observer at the North Pole , the Sun reaches the highest position in the sky once a year in June. The day this occurs is called the June solstice day. Similarly, for an observer on the South Pole , the Sun reaches the highest position on the December solstice day. When it is the summer solstice at one Pole, it is the winter solstice on the other. The Sun's westerly motion never ceases as Earth is continually in rotation. However, the Sun's motion in declination (i.e. vertically) comes to a stop, before reversing, at the moment of solstice. In that sense, solstice means "sun-standing". This modern scientific word descends from a Latin scientific word in use in the late Roman Republic of the 1st century BC: solstitium . Pliny uses it a number of times in his Natural History with a similar meaning that it has today. It contains two Latin-language morphemes, sol , "sun", and -stitium , "stoppage". [ 11 ] The Romans used "standing" to refer to a component of the relative velocity of the Sun as it is observed in the sky. Relative velocity is the motion of an object from the point of view of an observer in a frame of reference . From a fixed position on the ground, the Sun appears to orbit around Earth. [ 12 ] To an observer in an inertial frame of reference , planet Earth is seen to rotate about an axis and orbit around the Sun in an elliptical path with the Sun at one focus . Earth's axis is tilted with respect to the plane of Earth's orbit and this axis maintains a position that changes little with respect to the background of stars . An observer on Earth therefore sees a solar path that is the result of both rotation and revolution. 
The component of the Sun's motion seen by an earthbound observer caused by the revolution of the tilted axis—which, keeping the same angle in space, is oriented toward or away from the Sun—is an observed daily increment (and lateral offset) of the elevation of the Sun at noon for approximately six months and observed daily decrement for the remaining six months. At maximum or minimum elevation, the relative yearly motion of the Sun perpendicular to the horizon stops and reverses direction. Outside of the tropics, the maximum elevation occurs at the summer solstice and the minimum at the winter solstice. The path of the Sun, or ecliptic , sweeps north and south between the northern and southern hemispheres. The lengths of time when the sun is up are longer around the summer solstice and shorter around the winter solstice, except near the equator. When the Sun's path crosses the equator , the length of the nights at latitudes +L° and −L° are of equal length. This is known as an equinox . There are two solstices and two equinoxes in a tropical year. [ 14 ] Because of the variation in the rate at which the sun's right ascension changes, the days of longest and shortest daylight do not coincide with the solstices for locations very close to the equator. At the equator, the longest day is around 23 December and the shortest around 16 September (see graph). Inside the Arctic or Antarctic Circles the sun is up all the time for days or even months. The seasons occur because the Earth's axis of rotation is not perpendicular to its orbital plane (the plane of the ecliptic ) but currently makes an angle of about 23.44° (called the obliquity of the ecliptic ), and because the axis keeps its orientation with respect to an inertial frame of reference . As a consequence, for half the year the Northern Hemisphere is inclined toward the Sun while for the other half year the Southern Hemisphere has this distinction. The two moments when the inclination of Earth's rotational axis has maximum effect are the solstices. At the June solstice the subsolar point is further north than any other time: at latitude 23.44° north, known as the Tropic of Cancer . Similarly at the December solstice the subsolar point is further south than any other time: at latitude 23.44° south, known as the Tropic of Capricorn . The subsolar point will cross every latitude between these two extremes exactly twice per year. Also during the June solstice, places on the Arctic Circle (latitude 66.56° north) will see the Sun just on the horizon during midnight, and all places north of it will see the Sun above horizon for 24 hours. That is the midnight sun or midsummer -night sun or polar day. On the other hand, places on the Antarctic Circle (latitude 66.56° south) will see the Sun just on the horizon during midday, and all places south of it will not see the Sun above horizon at any time of the day. That is the polar night . During the December Solstice, the effects on both hemispheres are just the opposite. This sees polar sea ice re-grow annually due to lack of sunlight on the air above and surrounding sea. The warmest and coldest periods of the year in temperate regions are offset by about one month from the solstices, delayed by the earth's thermal inertia. The concept of the solstices was embedded in ancient Greek celestial navigation . 
As soon as they discovered that the Earth was spherical [ 15 ] they devised the concept of the celestial sphere , [ 16 ] an imaginary spherical surface rotating with the heavenly bodies ( ouranioi ) fixed in it (the modern one does not rotate, but the stars in it do). As long as no assumptions are made concerning the distances of those bodies from Earth or from each other, the sphere can be accepted as real and is in fact still in use. The Ancient Greeks use the term "ηλιοστάσιο" (heliostāsio) , meaning stand of the Sun . The stars move across the inner surface of the celestial sphere along the circumferences of circles in parallel planes [ 17 ] perpendicular to the Earth's axis extended indefinitely into the heavens and intersecting the celestial sphere in a celestial pole. [ 18 ] The Sun and the planets do not move in these parallel paths but along another circle, the ecliptic, whose plane is at an angle, the obliquity of the ecliptic , to the axis, bringing the Sun and planets across the paths of and in among the stars.* Cleomedes states: [ 19 ] The band of the Zodiac ( zōdiakos kuklos , "zodiacal circle") is at an oblique angle ( loksos ) because it is positioned between the tropical circles and equinoctial circle touching each of the tropical circles at one point ... This Zodiac has a determinable width (set at 8° today) ... that is why it is described by three circles: the central one is called "heliacal" ( hēliakos , "of the sun"). The term heliacal circle is used for the ecliptic, which is in the center of the zodiacal circle, conceived as a band including the noted constellations named on mythical themes. Other authors use Zodiac to mean ecliptic, which first appears in a gloss of unknown author in a passage of Cleomedes where he is explaining that the Moon is in the zodiacal circle as well and periodically crosses the path of the Sun. As some of these crossings represent eclipses of the Moon, the path of the Sun is given a synonym, the ekleiptikos (kuklos) from ekleipsis , "eclipse". The two solstices can be distinguished by different pairs of names, depending on which feature one wants to stress. ( Gregorian calendar ) ( subsolar point ) ( Northern Hemisphere ) ( Southern Hemisphere ) The traditional East Asian calendars divide a year into 24 solar terms (節氣). Xiàzhì ( pīnyīn ) or Geshi ( rōmaji ) ( Chinese and Japanese : 夏至; Korean : 하지(Haji) ; Vietnamese : Hạ chí ; lit. summer's extreme ) is the 10th solar term, and marks the summer solstice . It begins when the Sun reaches the celestial longitude of 90° (around 21 June) and ends when the Sun reaches the longitude of 105° (around 7 July). Xiàzhì more often refers in particular to the day when the Sun is exactly at the celestial longitude of 90°. Dōngzhì ( pīnyīn ) or Tōji ( rōmaji ) ( Chinese and Japanese : 冬至; Korean : 동지(Dongji) ; Vietnamese : Đông chí ; lit. winter's extreme ) is the 22nd solar term, and marks the winter solstice . It begins when the Sun reaches the celestial longitude of 270° (around 23 December) and ends when the Sun reaches the longitude of 285° (around 5 January). Dōngzhì more often refers in particular to the day when the Sun is exactly at the celestial longitude of 270°. The solstices (as well as the equinoxes ) mark the middle of the seasons in East Asian calendars. Here, the Chinese character 至 means "extreme", so the terms for the solstices directly signify the summits of summer and winter. The term solstice can also be used in a wider sense, as the date (day) that such a passage happens. 
The solstices, together with the equinoxes, are connected with the seasons. In some languages they are considered to start or separate the seasons; in others they are considered to be centre points (in England , in the Northern Hemisphere, for example, the period around the northern solstice is known as midsummer). Midsummer's Day , defined as St. Johns Day by the Christian Church , is 24 June, about three days after the solstice itself). Similarly 25 December is the start of the Christmas celebration, and is the day the Sun begins to return to the Northern Hemisphere. The traditional British and Irish main rent and meeting days of the year, "the usual quarter days ," were often those of the solstices and equinoxes. Many cultures celebrate various combinations of the winter and summer solstices, the equinoxes, and the midpoints between them, leading to various holidays arising around these events. During the southern or winter solstice , Christmas is the most widespread contemporary holiday, while Yalda , Saturnalia , Karachun , Hanukkah , Kwanzaa , and Yule are also celebrated around this time. In East Asian cultures, the Dongzhi Festival is celebrated on the winter solstice. For the northern or summer solstice , Christian cultures celebrate the feast of St. John from June 23 to 24 (see St. John's Eve , Ivan Kupala Day ), while Modern Pagans observe Midsummer, known as Litha among Wiccans . For the vernal (spring) equinox, several springtime festivals are celebrated, such as the Persian Nowruz , the observance in Judaism of Passover , the rites of Easter in most Christian churches, as well as the Wiccan Ostara . The autumnal equinox is associated with the Jewish holiday of Sukkot and the Wiccan Mabon . In the southern tip of South America , the Mapuche people celebrate We Tripantu (the New Year) a few days after the northern solstice, on 24 June. Further north, the Atacama people formerly celebrated this date with a noise festival, to call the Sun back. Further east, the Aymara people celebrate their New Year on 21 June. A celebration occurs at sunrise, when the sun shines directly through the Gate of the Sun in Tiwanaku . Other Aymara New Year feasts occur throughout Bolivia , including at the site of El Fuerte de Samaipata . In the Hindu calendar , two sidereal solstices are named Makara Sankranti which marks the start of Uttarayana and Karka Sankranti which marks the start of Dakshinayana . The former occurs around 14 January each year, while the latter occurs around 14 July each year. These mark the movement of the Sun along a sidereally fixed zodiac ( precession is ignored) into Makara, the zodiacal sign which corresponds with Capricorn , and into Karka, the zodiacal sign which corresponds with Cancer , respectively. The Amundsen–Scott South Pole Station celebrates every year on 21 June a midwinter party, to celebrate that the Sun is at its lowest point and coming back. The Fremont Solstice Parade takes place every summer solstice in Fremont, Seattle, Washington in the United States . The reconstructed Cahokia Woodhenge , a large timber circle located at the Mississippian culture Cahokia archaeological site near Collinsville, Illinois , [ 25 ] is the site of annual equinox and solstice sunrise observances. Out of respect for Native American beliefs these events do not feature ceremonies or rituals of any kind. [ 26 ] [ 27 ] [ 28 ] Unlike the equinox, the solstice time is not easy to determine. 
The changes in solar declination become smaller as the Sun gets closer to its maximum/minimum declination. The days before and after the solstice, the declination speed is less than 30 arcseconds per day which is less than 1 ⁄ 60 of the angular size of the Sun, or the equivalent to just 2 seconds of right ascension . This difference is hardly detectable with indirect viewing based devices like sextant equipped with a vernier , and impossible with more traditional tools like a gnomon [ 29 ] or an astrolabe . It is also hard to detect the changes in sunrise/sunset azimuth due to the atmospheric refraction [ 30 ] changes. Those accuracy issues render it impossible to determine the solstice day based on observations made within the 3 (or even 5) days surrounding the solstice without the use of more complex tools. Accounts do not survive but Greek astronomers must have used an approximation method based on interpolation, which is still used by some amateurs. This method consists of recording the declination angle at noon during some days before and after the solstice, trying to find two separate days with the same declination. When those two days are found, the halfway time between both noons is estimated solstice time. An interval of 45 days has been postulated as the best one to achieve up to a quarter-day precision, in the solstice determination. [ 31 ] In 2012, the journal DIO found that accuracy of one or two hours with balanced errors can be attained by observing the Sun's equal altitudes about S = twenty degrees (or d = about 20 days) before and after the summer solstice because the average of the two times will be early by q arc minutes where q is (πe cosA)/3 times the square of S in degrees (e = earth orbit eccentricity, A = earth's perihelion or Sun's apogee), and the noise in the result will be about 41 hours divided by d if the eye's sharpness is taken as one arc minute. Astronomical almanacs define the solstices as the moments when the Sun passes through the solstitial colure , i.e. the times when the apparent geocentric celestial longitude of the Sun is equal to 90° (June solstice) or 270° (December solstice). [ 32 ] The dates of the solstice varies each year and may occur a day earlier or later depending on the time zone . Because the earth's orbit takes slightly longer than a calendar year of 365 days, the solstices occur slightly later each calendar year, until a leap day re-aligns the calendar with the orbit. Thus the solstices always occur between June 20 and 22 and between December 20 and 23 [ 33 ] [ 34 ] in a four-year-long cycle with the 21st and 22nd being the most common dates, as can be seen in the schedule at the start of the article. Using the current official IAU constellation boundaries—and taking into account the variable precession speed and the rotation of the ecliptic—the solstices shift through the constellations as follows [ 35 ] (expressed in astronomical year numbering in which the year 0 = 1 BC, −1 = 2 BC, etc.):
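As a rough illustration of this equal-declination interpolation, the sketch below uses an idealized, perfectly symmetric declination curve (assumptions: a circular orbit, obliquity 23.44°, and the solstice pinned at day 172 of the year, none of which come from the text): it walks forward from the solstice until the declination matches an observation taken some days earlier, then averages the two dates. In the real, slightly eccentric orbit the average is biased by the small correction q discussed above.

```python
# Toy model of the equal-declination method: not an ephemeris, just an
# illustration of averaging two dates with matching declination.

import math

OBLIQUITY = 23.44       # degrees (assumed)
YEAR = 365.2422         # days
JUNE_SOLSTICE = 172.0   # "true" solstice day-of-year in this toy model

def declination(day):
    """Solar declination (degrees) for a day-of-year, symmetric about the solstice."""
    return OBLIQUITY * math.cos(2 * math.pi * (day - JUNE_SOLSTICE) / YEAR)

def estimate_solstice(day_before, step=0.001):
    """Find the day after the solstice with the same declination as day_before,
    then return the midpoint of the two days as the solstice estimate."""
    target = declination(day_before)
    day = JUNE_SOLSTICE
    while declination(day) > target:   # walk forward until the declination matches
        day += step
    return (day_before + day) / 2

# Observing about 45 days on either side, as the text suggests, recovers the solstice:
print(estimate_solstice(JUNE_SOLSTICE - 45))   # ~172.0
```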
https://en.wikipedia.org/wiki/Solstice
Solubility equilibrium is a type of dynamic equilibrium that exists when a chemical compound in the solid state is in chemical equilibrium with a solution of that compound. The solid may dissolve unchanged, with dissociation, or with chemical reaction with another constituent of the solution, such as acid or alkali. Each solubility equilibrium is characterized by a temperature-dependent solubility product which functions like an equilibrium constant . Solubility equilibria are important in pharmaceutical, environmental and many other scenarios. A solubility equilibrium exists when a chemical compound in the solid state is in chemical equilibrium with a solution containing the compound. This type of equilibrium is an example of dynamic equilibrium in that some individual molecules migrate between the solid and solution phases such that the rates of dissolution and precipitation are equal to one another. When equilibrium is established and the solid has not all dissolved, the solution is said to be saturated. The concentration of the solute in a saturated solution is known as the solubility . Units of solubility may be molar (mol dm −3 ) or expressed as mass per unit volume, such as μg mL −1 . Solubility is temperature dependent. A solution containing a higher concentration of solute than the solubility is said to be supersaturated . A supersaturated solution may be induced to come to equilibrium by the addition of a "seed" which may be a tiny crystal of the solute, or a tiny solid particle, which initiates precipitation. [ citation needed ] There are three main types of solubility equilibria. In each case an equilibrium constant can be specified as a quotient of activities . This equilibrium constant is dimensionless as activity is a dimensionless quantity. However, use of activities is very inconvenient, so the equilibrium constant is usually divided by the quotient of activity coefficients, to become a quotient of concentrations. See Equilibrium chemistry § Equilibrium constant for details. Moreover, the activity of a solid is, by definition, equal to 1 so it is omitted from the defining expression. For a chemical equilibrium A p B q ⇋ p A + q B {\displaystyle \mathrm {A} _{p}\mathrm {B} _{q}\leftrightharpoons p\mathrm {A} +q\mathrm {B} } the solubility product, K sp for the compound A p B q is defined as follows K s p = [ A ] p [ B ] q {\displaystyle K_{\mathrm {sp} }=[\mathrm {A} ]^{p}[\mathrm {B} ]^{q}} where [A] and [B] are the concentrations of A and B in a saturated solution . A solubility product has a similar functionality to an equilibrium constant though formally K sp has the dimension of (concentration) p + q . Solubility is sensitive to changes in temperature . For example, sugar is more soluble in hot water than cool water. It occurs because solubility products, like other types of equilibrium constants, are functions of temperature. In accordance with Le Chatelier's Principle , when the dissolution process is endothermic (heat is absorbed), solubility increases with rising temperature. This effect is the basis for the process of recrystallization , which can be used to purify a chemical compound. When dissolution is exothermic (heat is released) solubility decreases with rising temperature. [ 1 ] Sodium sulfate shows increasing solubility with temperature below about 32.4 °C, but a decreasing solubility at higher temperature. 
[ 2 ] This is because the solid phase is the decahydrate ( Na 2 SO 4 ·10H 2 O ) below the transition temperature, but a different hydrate above that temperature. [ citation needed ] The dependence on temperature of solubility for an ideal solution (achieved for low solubility substances) is given by the following expression containing the enthalpy of melting, Δ m H , and the mole fraction x i {\displaystyle x_{i}} of the solute at saturation: ( ∂ ln ⁡ x i ∂ T ) P = H ¯ i , a q − H i , c r R T 2 {\displaystyle \left({\frac {\partial \ln x_{i}}{\partial T}}\right)_{P}={\frac {{\bar {H}}_{i,\mathrm {aq} }-H_{i,\mathrm {cr} }}{RT^{2}}}} where H ¯ i , a q {\displaystyle {\bar {H}}_{i,\mathrm {aq} }} is the partial molar enthalpy of the solute at infinite dilution and H i , c r {\displaystyle H_{i,\mathrm {cr} }} the enthalpy per mole of the pure crystal. [ 3 ] This differential expression for a non-electrolyte can be integrated on a temperature interval to give: [ 4 ] ln ⁡ x i = Δ m H i R ( 1 T f − 1 T ) {\displaystyle \ln x_{i}={\frac {\Delta _{m}H_{i}}{R}}\left({\frac {1}{T_{f}}}-{\frac {1}{T}}\right)} For nonideal solutions activity of the solute at saturation appears instead of mole fraction solubility in the derivative with respect to temperature: ( ∂ ln ⁡ a i ∂ T ) P = H i , a q − H i , c r R T 2 {\displaystyle \left({\frac {\partial \ln a_{i}}{\partial T}}\right)_{P}={\frac {H_{i,\mathrm {aq} }-H_{i,\mathrm {cr} }}{RT^{2}}}} The common-ion effect is the effect of decreased solubility of one salt when another salt that has an ion in common with it is also present. For example, the solubility of silver chloride , AgCl, is lowered when sodium chloride, a source of the common ion chloride, is added to a suspension of AgCl in water. [ 5 ] A g C l ( s ) ⇋ A g + ( a q ) + C l − ( a q ) {\displaystyle \mathrm {AgCl(s)\leftrightharpoons Ag^{+}(aq)+Cl^{-}(aq)} } The solubility, S , in the absence of a common ion can be calculated as follows. The concentrations [Ag + ] and [Cl − ] are equal because one mole of AgCl would dissociate into one mole of Ag + and one mole of Cl − . Let the concentration of [Ag + (aq)] be denoted by x . Then K s p = [ A g + ] [ C l − ] = x 2 {\displaystyle K_{\mathrm {sp} }=\mathrm {[Ag^{+}][Cl^{-}]} =x^{2}} Solubility = [ A g + ] = [ C l − ] = x = K s p {\displaystyle {\text{Solubility}}=\mathrm {[Ag^{+}]=[Cl^{-}]} =x={\sqrt {K_{\mathrm {sp} }}}} K sp for AgCl is equal to 1.77 × 10 −10 mol 2 dm −6 at 25 °C, so the solubility is 1.33 × 10 −5 mol dm −3 . Now suppose that sodium chloride is also present, at a concentration of 0.01 mol dm −3 = 0.01 M. The solubility, ignoring any possible effect of the sodium ions, is now calculated by K s p = [ A g + ] [ C l − ] = x ( 0.01 M + x ) {\displaystyle K_{\mathrm {sp} }=\mathrm {[Ag^{+}][Cl^{-}]} =x(0.01\,{\text{M}}+x)} This is a quadratic equation in x , which is also equal to the solubility. x 2 + 0.01 M x − K s p = 0 {\displaystyle x^{2}+0.01\,{\text{M}}\,x-K_{sp}=0} In the case of silver chloride, x 2 is very much smaller than 0.01 M x , so the first term can be ignored. Therefore Solubility = [ A g + ] = x = K s p 0.01 M = 1.77 × 10 − 8 m o l d m − 3 {\displaystyle {\text{Solubility}}=\mathrm {[Ag^{+}]} =x={\frac {K_{\mathrm {sp} }}{0.01\,{\text{M}}}}=\mathrm {1.77\times 10^{-8}\,mol\,dm^{-3}} } a considerable reduction from 1.33 × 10 −5 mol dm −3 . In gravimetric analysis for silver, the reduction in solubility due to the common ion effect is used to ensure "complete" precipitation of AgCl. 
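The silver chloride example above can be reproduced numerically. The following sketch uses the K sp value quoted in the text and solves the same quadratic for the common-ion case:

```python
import math

K_sp = 1.77e-10          # AgCl solubility product at 25 degC, mol^2 dm^-6
c_Cl_added = 0.01        # added NaCl, mol dm^-3

# Without a common ion: [Ag+] = [Cl-] = x, so K_sp = x**2
s_pure = math.sqrt(K_sp)

# With 0.01 M chloride: K_sp = x * (0.01 + x), a quadratic in x
a, b, c = 1.0, c_Cl_added, -K_sp
s_common = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

print(f"{s_pure:.3e}")    # ~1.33e-05 mol dm^-3
print(f"{s_common:.3e}")  # ~1.77e-08 mol dm^-3 (the x**2 term is negligible here)
```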
The thermodynamic solubility constant is defined for large monocrystals. Solubility will increase with decreasing size of solute particle (or droplet) because of the additional surface energy. This effect is generally small unless particles become very small, typically smaller than 1 μm. The effect of the particle size on solubility constant can be quantified as follows: log ⁡ ( ∗ K A ) = log ⁡ ( ∗ K A → 0 ) + γ A m 3.454 R T {\displaystyle \log(^{*}K_{A})=\log(^{*}K_{A\to 0})+{\frac {\gamma A_{\mathrm {m} }}{3.454RT}}} where * K A is the solubility constant for the solute particles with the molar surface area A , * K A →0 is the solubility constant for substance with molar surface area tending to zero (i.e., when the particles are large), γ is the surface tension of the solute particle in the solvent, A m is the molar surface area of the solute (in m 2 /mol), R is the universal gas constant , and T is the absolute temperature . [ 6 ] The salt effects [ 7 ] ( salting in and salting-out ) refers to the fact that the presence of a salt which has no ion in common with the solute, has an effect on the ionic strength of the solution and hence on activity coefficients , so that the equilibrium constant, expressed as a concentration quotient, changes. Equilibria are defined for specific crystal phases . Therefore, the solubility product is expected to be different depending on the phase of the solid. For example, aragonite and calcite will have different solubility products even though they have both the same chemical identity ( calcium carbonate ). Under any given conditions one phase will be thermodynamically more stable than the other; therefore, this phase will form when thermodynamic equilibrium is established. However, kinetic factors may favor the formation the unfavorable precipitate (e.g. aragonite), which is then said to be in a metastable state. [ citation needed ] In pharmacology, the metastable state is sometimes referred to as amorphous state. Amorphous drugs have higher solubility than their crystalline counterparts due to the absence of long-distance interactions inherent in crystal lattice. Thus, it takes less energy to solvate the molecules in amorphous phase. The effect of amorphous phase on solubility is widely used to make drugs more soluble. [ 8 ] [ 9 ] For condensed phases (solids and liquids), the pressure dependence of solubility is typically weak and usually neglected in practice. Assuming an ideal solution , the dependence can be quantified as: ( ∂ ln ⁡ x i ∂ P ) T = − V ¯ i , a q − V i , c r R T {\displaystyle \left({\frac {\partial \ln x_{i}}{\partial P}}\right)_{T}=-{\frac {{\bar {V}}_{i,\mathrm {aq} }-V_{i,\mathrm {cr} }}{RT}}} where x i {\displaystyle x_{i}} is the mole fraction of the i {\displaystyle i} -th component in the solution, P {\displaystyle P} is the pressure, T {\displaystyle T} is the absolute temperature, V ¯ i , aq {\displaystyle {\bar {V}}_{i,{\text{aq}}}} is the partial molar volume of the i {\displaystyle i} th component in the solution, V i , cr {\displaystyle V_{i,{\text{cr}}}} is the partial molar volume of the i {\displaystyle i} th component in the dissolving solid, and R {\displaystyle R} is the universal gas constant . [ 10 ] The pressure dependence of solubility does occasionally have practical significance. For example, precipitation fouling of oil fields and wells by calcium sulfate (which decreases its solubility with decreasing pressure) can result in decreased productivity with time. 
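The particle-size relation quoted above can be illustrated numerically. The surface tension, molar volume and radii below are assumed values chosen only to show the trend, and the molar surface area of spherical particles is taken from the geometric relation A m = 3V m /r:

```python
R, T = 8.314, 298.15           # gas constant (J mol^-1 K^-1) and temperature (K)
gamma = 0.1                    # assumed solid-solvent surface tension, J m^-2
V_m = 3.0e-5                   # assumed molar volume of the solid, m^3 mol^-1

for r in (1e-6, 1e-7, 1e-8):   # particle radius in metres
    A_m = 3 * V_m / r          # molar surface area of spherical particles
    ratio = 10 ** (gamma * A_m / (3.454 * R * T))   # K_A / K_(A->0)
    print(f"r = {r:.0e} m: K_A / K_(A->0) = {ratio:.3f}")
# -> roughly 1.002, 1.02 and 1.27: the enhancement only becomes noticeable
#    for particles well below 1 micrometre, as stated above.
```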
Dissolution of an organic solid can be described as an equilibrium between the substance in its solid and dissolved forms. For example, when sucrose (table sugar) forms a saturated solution C 12 H 22 O 11 ( s ) ⇋ C 12 H 22 O 11 ( a q ) {\displaystyle \mathrm {C_{12}H_{22}O_{11}(s)\leftrightharpoons C_{12}H_{22}O_{11}(aq)} } An equilibrium expression for this reaction can be written, as for any chemical reaction (products over reactants): K ⊖ = { C 12 H 22 O 11 ( a q ) } { C 12 H 22 O 11 ( s ) } {\displaystyle K^{\ominus }={\frac {\left\{\mathrm {{C}_{12}{H}_{22}{O}_{11}(aq)} \right\}}{\left\{\mathrm {{C}_{12}{H}_{22}{O}_{11}(s)} \right\}}}} where K o is called the thermodynamic solubility constant. The braces indicate activity . The activity of a pure solid is, by definition, unity. Therefore K ⊖ = { C 12 H 22 O 11 ( a q ) } {\displaystyle K^{\ominus }=\left\{\mathrm {{C}_{12}{H}_{22}{O}_{11}(aq)} \right\}} The activity of a substance, A, in solution can be expressed as the product of the concentration, [A], and an activity coefficient , γ . When K o is divided by γ , the solubility constant, K s , K s = [ C 12 H 22 O 11 ( a q ) ] {\displaystyle K_{\mathrm {s} }=\left[\mathrm {{C}_{12}{H}_{22}{O}_{11}(aq)} \right]} is obtained. This is equivalent to defining the standard state as the saturated solution so that the activity coefficient is equal to one. The solubility constant is a true constant only if the activity coefficient is not affected by the presence of any other solutes that may be present. The unit of the solubility constant is the same as the unit of the concentration of the solute. For sucrose K s = 1.971 mol dm −3 at 25 °C. This shows that the solubility of sucrose at 25 °C is nearly 2 mol dm −3 (540 g/L). Sucrose is unusual in that it does not easily form a supersaturated solution at higher concentrations, as do most other carbohydrates . Ionic compounds normally dissociate into their constituent ions when they dissolve in water. For example, for silver chloride : AgCl ( s ) ↽ − − ⇀ Ag ( aq ) + + Cl − ( aq ) {\displaystyle {\ce {AgCl_{(s)}<=> Ag^+_{(aq)}{}+ Cl^-_{(aq)}}}} The expression for the equilibrium constant for this reaction is: K ⊖ = { Ag + ( aq ) } { Cl − ( aq ) } { AgCl ( s ) } = { Ag + ( aq ) } { Cl − ( aq ) } {\displaystyle K^{\ominus }={\frac {\left\{{\ce {Ag+}}_{{\ce {(aq)}}}\right\}\left\{{\ce {Cl-}}_{{\ce {(aq)}}}\right\}}{\left\{{\ce {AgCl_{(s)}}}\right\}}}=\left\{{\ce {Ag+}}_{{\ce {(aq)}}}\right\}\left\{{\ce {Cl-}}_{{\ce {(aq)}}}\right\}} where K ⊖ {\displaystyle K^{\ominus }} is the thermodynamic equilibrium constant and braces indicate activity. The activity of a pure solid is, by definition, equal to one. When the solubility of the salt is very low the activity coefficients of the ions in solution are nearly equal to one. By setting them to be actually equal to one this expression reduces to the solubility product expression: K sp = [ Ag + ] [ Cl − ] = [ Ag + ] 2 = [ Cl − ] 2 . 
{\displaystyle K_{{\ce {sp}}}=[{\ce {Ag+}}][{\ce {Cl-}}]=[{\ce {Ag+}}]^{2}=[{\ce {Cl-}}]^{2}.} For 2:2 and 3:3 salts, such as CaSO 4 and FePO 4 , the general expression for the solubility product is the same as for a 1:1 electrolyte A B ⇋ A p + + B p − {\displaystyle \mathrm {AB} \leftrightharpoons \mathrm {A} ^{p+}+\mathrm {B} ^{p-}} With an unsymmetrical salt like Ca(OH) 2 the solubility expression is given by Ca ( OH ) 2 ↽ − − ⇀ Ca 2 + + 2 OH − {\displaystyle {\ce {Ca(OH)_2 <=> {Ca}^{2+}+ 2OH^-}}} K s p = [ Ca ] [ OH ] 2 {\displaystyle K_{sp}={\ce {[Ca]}}{\ce {[OH]}}^{2}} Since the concentration of hydroxide ions is twice the concentration of calcium ions this reduces to K s p = 4 [ C a ] 3 {\displaystyle \mathrm {K_{sp}=4[Ca]^{3}} } In general, with the chemical equilibrium A p B q ⇋ p A n + + q B m − {\displaystyle {\ce {A}}_{p}{\ce {B}}_{q}~{\ce {\leftrightharpoons }}~p{\ce {A}}^{n+}+q{\ce {B}}^{m-}} [ B ] = q p [ A ] {\displaystyle {\ce {[B]}}={\frac {q}{p}}{\ce {[A]}}} and the following table, showing the relationship between the solubility of a compound and the value of its solubility product, can be derived. [ 11 ] Solubility products are often expressed in logarithmic form. Thus, for calcium sulfate, with K sp = 4.93 × 10 −5 mol 2 dm −6 , log K sp = −4.32 . The smaller the value of K sp , or the more negative the log value, the lower the solubility. Some salts are not fully dissociated in solution. Examples include MgSO 4 , famously discovered by Manfred Eigen to be present in seawater as both an inner sphere complex and an outer sphere complex . [ 12 ] The solubility of such salts is calculated by the method outlined in dissolution with reaction . The solubility product for the hydroxide of a metal ion, M n + , is usually defined, as follows: M ( O H ) n ⇋ M n + + n O H − {\displaystyle \mathrm {M(OH)_{n}\leftrightharpoons \mathrm {M^{n+}+nOH^{-}} } } K s p = [ M n + ] [ O H − ] n {\displaystyle K_{sp}=\mathrm {[M^{n+}][OH^{-}]^{n}} } However, general-purpose computer programs are designed to use hydrogen ion concentrations with the alternative definitions. M ( O H ) n + n H + ⇋ M n + + n H 2 O {\displaystyle \mathrm {M(OH)_{n}+nH^{+}\leftrightharpoons M^{n+}+nH_{2}O} } K sp ∗ = [ M n + ] [ H + ] − n {\displaystyle K_{\text{sp}}^{*}=\mathrm {[M^{n+}][H^{+}]^{-n}} } For hydroxides, solubility products are often given in a modified form, K * sp , using hydrogen ion concentration in place of hydroxide ion concentration. The two values are related by the self-ionization constant for water, K w . [ 13 ] K w = [ H + ] [ O H − ] {\displaystyle K_{\mathrm {w} }=[\mathrm {H^{+}} ][\mathrm {OH^{-}} ]} K sp ∗ = K sp ( K w ) n {\displaystyle K_{\text{sp}}^{*}={\frac {K_{\text{sp}}}{(K_{\text{w}})^{n}}}} log ⁡ K sp ∗ = log ⁡ K sp − n log ⁡ K w {\displaystyle \log K_{\text{sp}}^{*}=\log K_{\text{sp}}-n\log K_{\text{w}}} For example, at ambient temperature, for calcium hydroxide, Ca(OH) 2 , lg K sp is ca. −5 and lg K * sp ≈ −5 + 2 × 14 ≈ 23. A typical reaction with dissolution involves a weak base , B, dissolving in an acidic aqueous solution . B ( s ) + H + ( a q ) ⇋ B H + ( a q ) {\displaystyle \mathrm {B} \mathrm {(s)} +\mathrm {H} ^{+}\mathrm {(aq)} \leftrightharpoons \mathrm {BH} ^{+}(\mathrm {aq)} } This reaction is very important for pharmaceutical products. [ 14 ] Dissolution of weak acids in alkaline media is similarly important. 
H A ( s ) + O H − ( a q ) ⇋ A − ( a q ) + H 2 O {\displaystyle \mathrm {HA(s)+OH^{-}(aq)\leftrightharpoons A^{-}(aq)+H_{2}O} } The uncharged molecule usually has lower solubility than the ionic form, so solubility depends on pH and the acid dissociation constant of the solute. The term "intrinsic solubility" is used to describe the solubility of the un-ionized form in the absence of acid or alkali. Leaching of aluminium salts from rocks and soil by acid rain is another example of dissolution with reaction: alumino-silicates are bases which react with the acid to form soluble species, such as Al 3+ (aq). Formation of a chemical complex may also change solubility. A well-known example is the addition of a concentrated solution of ammonia to a suspension of silver chloride , in which dissolution is favoured by the formation of an ammine complex. A g C l ( s ) + 2 N H 3 ( a q ) ⇋ [ A g ( N H 3 ) 2 ] + ( a q ) + C l − ( a q ) {\displaystyle \mathrm {AgCl(s)+2NH_{3}(aq)\leftrightharpoons [Ag(NH_{3})_{2}]^{+}(aq)+Cl^{-}(aq)} } When sufficient ammonia is added to a suspension of silver chloride, the solid dissolves. The addition of water softeners to washing powders to inhibit the formation of soap scum provides an example of practical importance. The determination of solubility is fraught with difficulties. [ 6 ] First and foremost is the difficulty in establishing that the system is in equilibrium at the chosen temperature. This is because both precipitation and dissolution reactions may be extremely slow. If the process is very slow solvent evaporation may be an issue. Supersaturation may occur. With very insoluble substances, the concentrations in solution are very low and difficult to determine. The methods used fall broadly into two categories, static and dynamic. In static methods a mixture is brought to equilibrium and the concentration of a species in the solution phase is determined by chemical analysis . This usually requires separation of the solid and solution phases. In order to do this the equilibration and separation should be performed in a thermostatted room. [ 15 ] Very low concentrations can be measured if a radioactive tracer is incorporated in the solid phase. A variation of the static method is to add a solution of the substance in a non-aqueous solvent, such as dimethyl sulfoxide , to an aqueous buffer mixture. [ 16 ] Immediate precipitation may occur giving a cloudy mixture. The solubility measured for such a mixture is known as "kinetic solubility". The cloudiness is due to the fact that the precipitate particles are very small resulting in Tyndall scattering . In fact the particles are so small that the particle size effect comes into play and kinetic solubility is often greater than equilibrium solubility. Over time the cloudiness will disappear as the size of the crystallites increases, and eventually equilibrium will be reached in a process known as precipitate ageing. [ 17 ] Solubility values of organic acids, bases, and ampholytes of pharmaceutical interest may be obtained by a process called "Chasing equilibrium solubility". [ 18 ] In this procedure, a quantity of substance is first dissolved at a pH where it exists predominantly in its ionized form and then a precipitate of the neutral (un-ionized) species is formed by changing the pH. Subsequently, the rate of change of pH due to precipitation or dissolution is monitored and strong acid and base titrant are added to adjust the pH to discover the equilibrium conditions when the two rates are equal. 
The advantage of this method is that it is relatively fast as the quantity of precipitate formed is quite small. However, the performance of the method may be affected by the formation of supersaturated solutions. A number of computer programs are available to do the calculations. They include:
https://en.wikipedia.org/wiki/Solubility_equilibrium
In oceanic biogeochemistry , the solubility pump is a physico-chemical process that transports carbon as dissolved inorganic carbon (DIC) from the ocean's surface to its interior. The solubility pump is driven by the coincidence of two processes in the ocean : Since deep water (that is, seawater in the ocean's interior) is formed under the same surface conditions that promote carbon dioxide solubility, it contains a higher concentration of dissolved inorganic carbon than might be expected from average surface concentrations. Consequently, these two processes act together to pump carbon from the atmosphere into the ocean's interior. One consequence of this is that when deep water upwells in warmer, equatorial latitudes, it strongly outgasses carbon dioxide to the atmosphere because of the reduced solubility of the gas. The solubility pump has a biological counterpart known as the biological pump . For an overview of both pumps, see Raven & Falkowski (1999). [ 1 ] Carbon dioxide , like other gases, is soluble in water. However, unlike many other gases ( oxygen for instance), it reacts with water and forms a balance of several ionic and non-ionic species (collectively known as dissolved inorganic carbon , or DIC). These are dissolved free carbon dioxide (CO 2 (aq) ), carbonic acid (H 2 CO 3 ), bicarbonate (HCO 3 − ) and carbonate (CO 3 2− ), and they interact with water as follows : The balance of these carbonate species (which ultimately affects the solubility of carbon dioxide), is dependent on factors such as pH , as shown in a Bjerrum plot . In seawater this is regulated by the charge balance of a number of positive (e.g. Na + , K + , Mg 2+ , Ca 2+ ) and negative (e.g. CO 3 2− itself, Cl − , SO 4 2− , Br − ) ions. Normally, the balance of these species leaves a net positive charge. With respect to the carbonate system, this excess positive charge shifts the balance of carbonate species towards negative ions to compensate. The result of which is a reduced concentration of the free carbon dioxide and carbonic acid species, which in turn leads to an oceanic uptake of carbon dioxide from the atmosphere to restore balance. Thus, the greater the positive charge imbalance, the greater the solubility of carbon dioxide. In carbonate chemistry terms, this imbalance is referred to as alkalinity . In terms of measurement, four basic parameters are of key importance: Total inorganic carbon (TIC, T CO 2 or C T ), Total alkalinity (T ALK or A T ), pH , and pCO 2 . Measuring any two of these parameters allows for the determination of a wide range of pH-dependent species (including the above-mentioned species). This balance can be changed by a number of processes. For example, the air-sea flux of CO 2 , the dissolution / precipitation of CaCO 3 , or biological activity such as photosynthesis / respiration . Each of these has different effects on each of the four basic parameters, and together they exert strong influences on global cycles. The net and local charge of the oceans remains neutral during any chemical process. The combustion of fossil fuels , land-use changes, and the production of cement have led to a flux of CO 2 to the atmosphere. Presently, about one third (approximately 2 gigatons of carbon per year) [ 2 ] [ 3 ] of anthropogenic emissions of CO 2 are believed to be entering the ocean. 
The solubility pump is the primary mechanism driving this flux, with the consequence that anthropogenic CO 2 is reaching the ocean interior via high latitude sites of deep water formation (particularly the North Atlantic). Ultimately, most of the CO 2 emitted by human activities will dissolve in the ocean; [ 4 ] however, the rate at which the ocean will take it up in the future is less certain. In a study of the carbon cycle up to the end of the 21st century, Cox et al. (2000) [ 5 ] predicted that the rate of CO 2 uptake would begin to saturate at a maximum rate of 5 gigatons of carbon per year by 2100. This was partially due to non-linearities in the seawater carbonate system, but also due to climate change . Ocean warming decreases the solubility of CO 2 in seawater, slowing the ocean's response to emissions. Warming also acts to increase ocean stratification , isolating the surface ocean from deeper waters. Additionally, changes in the ocean's thermohaline circulation (specifically slowing) [ 6 ] may act to decrease transport of dissolved CO 2 into the deep ocean. However, the magnitude of these processes is still uncertain, preventing good long-term estimates of the fate of the solubility pump. While ocean absorption of anthropogenic CO 2 from the atmosphere acts to decrease climate change, it causes ocean acidification , which is expected to have negative consequences for marine ecosystems. [ 7 ]
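As a rough illustration of the carbonate speciation described earlier, the fractions of CO 2 (aq), bicarbonate and carbonate in DIC can be computed from the two dissociation constants of carbonic acid. The constants below are illustrative seawater-like values (they vary with temperature and salinity), so this is a sketch rather than a reference calculation:

```python
def carbonate_fractions(ph, pk1=5.86, pk2=8.92):
    """Fractions of dissolved CO2 (+ H2CO3), HCO3- and CO3^2- in DIC at a
    given pH, using illustrative stoichiometric constants for seawater;
    the real values depend on temperature and salinity."""
    h = 10.0 ** (-ph)
    k1, k2 = 10.0 ** (-pk1), 10.0 ** (-pk2)
    denom = h * h + k1 * h + k1 * k2
    return (h * h / denom, k1 * h / denom, k1 * k2 / denom)

co2, hco3, co3 = carbonate_fractions(8.1)      # typical surface-ocean pH
print(f"CO2(aq) {co2:.1%}, HCO3- {hco3:.1%}, CO3^2- {co3:.1%}")
# -> roughly 0.5% CO2(aq), ~86% bicarbonate, ~13% carbonate, which is why the
#    ocean can hold far more inorganic carbon than gas solubility alone would allow.
```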
https://en.wikipedia.org/wiki/Solubility_pump
The tables below provide information on the variation of solubility of different substances (mostly inorganic compounds ) in water with temperature , at one atmosphere pressure . Units of solubility are given in grams of substance per 100 millilitres of water (g/100 ml), unless shown otherwise. The substances are listed in alphabetical order.
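When comparing such tabulated values with solubility products quoted in mol dm −3 , a quick unit conversion is often needed. A small helper (approximate, since it treats the solution volume as equal to the water volume; the NaCl figures are illustrative):

```python
def g_per_100ml_to_molar(solubility_g_per_100ml, molar_mass_g_per_mol):
    """Convert a solubility quoted in g per 100 mL of water into mol dm^-3
    (approximate: assumes the solution volume ~ the water volume)."""
    return solubility_g_per_100ml * 10.0 / molar_mass_g_per_mol

# NaCl: ~36 g/100 mL near room temperature, M = 58.44 g/mol (illustrative figures)
print(f"{g_per_100ml_to_molar(36.0, 58.44):.2f} mol dm^-3")   # ~6.16
```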
https://en.wikipedia.org/wiki/Solubility_table
Soluble adenylyl cyclase (sAC) is a regulatory cytosolic enzyme present in almost every cell. sAC is a source of cyclic adenosine 3’,5’ monophosphate (cAMP) – a second messenger that mediates cell growth and differentiation in organisms from bacteria to higher eukaryotes. sAC differs from the transmembrane adenylyl cyclases (tmACs) – another important source of cAMP – in that sAC is regulated by bicarbonate anions and is dispersed throughout the cell cytoplasm. sAC has been found to have various functions in physiological systems different from those of the tmACs. [ 1 ] sAC is encoded in a single Homo sapiens gene identified as ADCY10 or Adenylate cyclase 10 (soluble). This gene comprises 33 exons spanning more than 100 kb; it appears to utilize multiple promoters, [ 2 ] [ 3 ] and its mRNA undergoes extensive alternative splicing. [ 2 ] [ 3 ] [ 4 ] [ 5 ] The functional mammalian sAC consists of two heterologous catalytic domains (C1 and C2), forming the 50 kDa amino terminus of the protein. [ 6 ] [ 7 ] The additional ~140 kDa C terminus of the enzyme includes an autoinhibitory region, [ 8 ] canonical P-loop, potential heme-binding domain, [ 9 ] and leucine zipper-like sequence, [ 10 ] which are a form of putative regulatory domains. A truncated form of the enzyme includes only the C1 and C2 domains and is referred to as the minimal functional sAC variant. [ 5 ] [ 10 ] This truncated form has much higher cAMP-forming activity than the full-length enzyme. These sAC variants are stimulated by HCO3- and respond to all known selective sAC inhibitors. [ 6 ] Crystal structures of this sAC variant comprising only the catalytic core, in apo form and in complex with various substrate analogs, products, and regulators, reveal a generic Class III AC architecture with sAC-specific features. [ 11 ] The structurally related domains C1 and C2 form the typical pseudo-heterodimer, with one active site. [ 6 ] The pseudo-symmetric site accommodates the sAC-specific activator HCO3−, which activates by triggering a rearrangement of Arg176, a residue connecting both sites. The anionic sAC inhibitor 4,4′-diisothiocyanatostilbene-2,2′-disulfonic acid (DIDS) blocks the entrance to the active site and the bicarbonate binding pocket. [ 11 ] The binding and cyclizing of adenosine 5’ triphosphate (ATP) at the catalytic active site of the enzyme is coordinated by two metal cations. The catalytic activity of sAC is increased by the presence of manganese [Mn 2+ ]. The magnesium [Mg 2+ ]-dependent activity of sAC is regulated by calcium [Ca 2+ ], which increases the affinity of mammalian sAC for ATP. In addition, bicarbonate [HCO − 3 ] releases ATP-Mg 2+ substrate inhibition and increases V max of the enzyme. [ 12 ] The open conformation state of sAC is reached when ATP, with Ca 2+ bound to its γ-phosphate, binds to specific residues in the catalytic center of the enzyme. When the second metal – a Mg 2+ ion – binds to the α-phosphate of ATP, the enzyme undergoes a conformational change to the closed state. The change in conformation from the open to the closed state induces esterification of the α-phosphate with the ribose of adenosine and the release of the β- and γ-phosphates, which leads to cyclization. [ 7 ] Hydrogencarbonate stimulates the enzyme’s V max by promoting the allosteric change that leads to active site closure, recruitment of the catalytic Mg 2+ ion, and readjustment of the phosphates in the bound ATP.
[ 13 ] The activator bicarbonate binds to a site pseudo-symmetric to the active site and triggers conformational changes by recruiting Arg176 from the active site (see above - "structure"). [ 11 ] Calcium increases substrate affinity by replacing the magnesium in the ion B site, which provides an anchoring point for the beta- and gamma-phosphates of the ATP substrate. [ 11 ] [ 13 ] Astrocytes express several sAC splice variants, [ 14 ] which are involved in metabolic coupling between neurons and astrocytes . An increase of potassium [K + ] in the extracellular space caused by neuronal activity depolarizes the cell membrane of nearby astrocytes and facilitates the entry of hydrogencarbonate through Na + /HCO − 3 - cotransporters. [ 7 ] The increase in cytosolic hydrogencarbonate activates sAC; the result of this activation is the release of lactate for use as an energy source by the neurons . Numerous sAC splice variants are present in osteoclasts and osteoblasts, [ 3 ] and a mutation in the human sAC gene is associated with low spinal bone density. [ 15 ] Calcification by osteoblasts is intrinsically related to bicarbonate and calcium. Bone density experiments in cultured mouse calvaria [ 16 ] indicate that HCO − 3 -sensing sAC is a physiologically appropriate regulator of bone formation and/or resorption. sAC activation by bicarbonate is necessary for motility and other aspects of capacitation in the spermatozoa of mammals. [ 17 ] [ 18 ] In human males, mutations in the ADCY10 gene that lead to the inactivation of sAC have been linked to cases of sterility. [ 19 ] Due to this essential role in male fertility, sAC has been explored as a potential target for non-hormonal male contraception . [ 20 ]
https://en.wikipedia.org/wiki/Soluble_adenylyl_cyclase
Soluble cell adhesion molecules ( sCAM s) are a class of cell adhesion molecules (CAMs - cell surface binding proteins ) that may represent important biomarkers for inflammatory processes involving activation or damage to cells such as platelets and the endothelium . They include soluble isoforms of the cell adhesion molecules ICAM-1 , VCAM-1 , E-selectin and P-selectin (distinguished as sICAM-1, sVCAM-1, sE-selectin and sP-selectin). The cellular expression of CAMs is difficult to assess clinically, but these soluble forms are present in the circulation and may serve as markers for CAMs. [ 1 ] Research has focused on their role in cardiovascular (particularly atherosclerosis ), connective tissue and neoplastic diseases, where blood plasma levels may be a marker of the disease severity or prognosis , and they may be useful in evaluating the progress of some treatments. [ 2 ] Many studies have postulated that increased production of cell adhesion molecules (CAMs) on the vascular endothelium (blood vessel lining) plays a role in the development of arterial plaque , with the suggestion from both in vitro and in vivo studies that CAM production is increased by dyslipidemia (abnormal lipid levels in the blood). [ 3 ] Research studies have used sCAMs as biomarkers to test whether correlations with nutrients [ 4 ] [ 5 ] or nutrient levels [ 6 ] are significant or not. [ 7 ]
https://en.wikipedia.org/wiki/Soluble_cell_adhesion_molecules
Soluble urokinase plasminogen activator receptor ( suPAR ) ( NCBI Accession no. AAK31795) is a protein and the soluble form of uPAR . uPAR is expressed mainly on immune cells, endothelial cells, and smooth muscle cells. uPAR is a membrane-bound receptor for uPA (also known as urokinase) and for vitronectin. The soluble version of uPAR, called suPAR, results from the cleavage and release of membrane-bound uPAR during inflammation or immune activation. [ 1 ] The suPAR concentration is positively correlated to the activation level of the immune system . Therefore, suPAR is a marker of disease severity and aggressiveness [ 1 ] [ 2 ] and is associated with morbidity and mortality in several acute and chronic diseases. [ 3 ] [ 4 ] [ 5 ] [ 6 ] suPAR levels have been observed to increase with age. [ 7 ] suPAR is present in plasma , urine , blood , serum , and cerebrospinal fluid . In the general population, the suPAR level is higher in females than in males. The median suPAR level for men and women in blood donors is 2.22 ng/mL and 2.54 ng/mL, respectively. [ 8 ] In general, women have slightly higher suPAR than men. [ 8 ] [ 9 ] suPAR levels are higher in serum than in plasma for the same individual. [ 10 ] [ 11 ] suPAR is a biomarker reflecting the level of activity of the immune system in response to an inflammatory stimulus. suPAR levels positively correlate with pro-inflammatory biomarkers, including tumor necrosis factor-α (TNFα) and C-reactive protein (CRP), and with other parameters, including leukocyte counts . suPAR is also associated with organ damage in various diseases.[2-5] Elevated levels of suPAR are associated with increased risk of systemic inflammatory response syndrome (SIRS), cancer , focal segmental glomerulosclerosis , cardiovascular disease , type 2 diabetes , infectious diseases , HIV , and mortality . [ 12 ] [ 13 ] In emergency departments, suPAR can aid in the triage and risk assessment of patients, allowing many patients to be discharged rather than admitted. This also ensures that the most ill patients are prioritised first and put under careful observation without delay. A suPAR level below 4 ng/mL indicates a good prognosis in acute medical patients and supports discharge. In contrast, patients presenting with a suPAR level above 6 ng/mL have a high risk of a negative outcome. In COVID-19, an early elevation of suPAR, e.g. in patients that present with symptoms of SARS-CoV-2 infection, is associated with an increased risk of severe COVID-19 development, which may lead to respiratory failure, acute kidney injury, and death. Clinically relevant cut-offs have been identified, with a suPAR below 4 ng/mL indicating low risk of adverse outcomes and a suPAR above 6 ng/mL indicating high risk of negative outcomes such as severe respiratory failure. [ 14 ] [ 15 ] [ 16 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ] The suPAR level is elevated in patients with cardiovascular diseases compared to healthy individuals. suPAR is a predictor of cardiovascular morbidity and mortality in the general population. [ 21 ] [ 22 ] [ 23 ] In the kidneys, suPAR plays a role in regulating the permeability of the glomerular filtration barrier. An elevated suPAR level is associated with chronic renal diseases, [ 24 ] the future incidence of chronic renal diseases, [ 25 ] and declining eGFR. [ 25 ] [ 26 ] A high level is significantly associated with mortality and incidence of cardiovascular diseases in these patients. [ 24 ] suPAR has a secondary structure of 17 anti-parallel β-sheets with three short α-helices .
It consists of the three homologous domains D1, D2, and D3. Comparing cDNA sequences, D1 differs from D2 and D3 in its primary and tertiary structure, causing its distinct ligand binding properties. [ 1 ] [ 27 ] uPAR has cleavage sites for several proteases in the linker region (chymotrypsin, elastase, matrix metalloproteases, cathepsin G, plasmin, urokinase plasminogen activator (uPA, or urokinase), and in the GPI anchor (phospholipase C and D, cathepsin G, plasmin). The GPI-anchor links uPAR to the cell membrane making it available for uPA binding. When uPA is bound to the receptor, a cleavage between the GPI-anchor and D3 forms suPAR. Of the three suPAR forms: suPAR1-3, suPAR2-3, and suPAR1, suPAR2-3 is the chemotactic agent for promoting the immune system. [ 1 ] The molecular weight of suPAR varies between 24–66 kDa due to variations in posttranslational glycosylations. [ 1 ] Additional isoforms generated by alternative splicing have been described on the RNA level, but whether these are transcribed and their possible roles remain unclear. [ 28 ] suPAR is mainly measured in serum and plasma isolated from human venous blood. The suPAR level can be measured using the  suPARnostic® product line. suPARnostic® is a CE-IVD certified antibody-based product range applied for quantitative measurements of suPAR in the clinical setting. Three product formats are available: 1) TurbiLatex, validated for clinical chemistry systems currently including the Roche Diagnostics cobas c501/2 and c701/2 systems; the Siemens ADVIA XPT and Atellica systems, and the Abbott Architect c and Alinity systems. 2) Quick Triage, which is a platform that is applied at the Point-Of-Care. 3) ELISA. [ 29 ]
https://en.wikipedia.org/wiki/Soluble_urokinase_plasminogen_activator_receptor
SoluForce is a type of Reinforced Thermoplastic Pipe (RTP, also known as flexible composite pipe or FCP) . SoluForce is a brand name of Pipelife Nederland B.V. (part of Wienerberger AG ), with its main offices and production facilities located in Enkhuizen , The Netherlands . It develops, manufactures and markets RTP , which is a flexible high pressure pipe. It is supplied in long coils of up to 400 m in length and has design pressure ratings from 36 to 450 bar. This type of pipe is typically used in the oil and gas industry for oil and gas flowlines, high pressure water injection and water transportation lines. [ 1 ] [ 2 ] However, it is also used for applications outside of the oil and gas industry, including domestic gas, mining , CO 2 and hydrogen applications. This pipe has a faster installation time compared to conventional steel pipes, as speeds of up to 2000 m per day have been reached installing RTP at ground surface (with average speeds being approx. 1000 m per day for normal RTP installations). [ 3 ] The pipe mainly benefits applications where steel fails due to corrosion and installation time is an issue. [ 4 ] RTP was developed in the early 1990s by Wavin Repox, Akzo Nobel and by Tubes d'Aquitaine from France. They developed the first pipes reinforced with synthetic fibre to replace medium pressure steel pipes in response to growing demand for non-corrosive conduits for application in the onshore oil and gas industry, particularly from Shell in the Middle East. Because of its expertise in producing pipes, Pipelife Netherlands was involved in the project to develop long length RTP in 1998. [ 5 ] The resulting system is marketed today under the name SoluForce. SoluForce was the first ever RTP to be installed and used, in the year 2000. The SoluForce RTP has a three layer pipe construction: In some SoluForce pipe versions, an extra bonded aluminium layer is added to prevent light components and gases from permeating. SoluForce pipes are available in 4 and 6 inch versions. Depending on the reinforcement layer, SoluForce pipes have design pressures of up to 450 bar / 6527 psi. SoluForce is used for the following applications: Although these kinds of pipes were developed for the oil and gas industry, they are also used for domestic gas, mining , CO 2 and hydrogen applications. SoluForce RTP is tested and acknowledged by the following organisations:
https://en.wikipedia.org/wiki/Soluforce
The solution-friction model (SF model) is a mechanistic transport model developed to describe the transport processes across porous membranes , such as reverse osmosis (RO) and nanofiltration (NF). [ 1 ] [ 2 ] [ 3 ] Unlike traditional models, such as those based on Darcy’s law , which primarily describes pressure-driven solvent (water) transport in homogeneous porous mediums, the SF model also accounts for the coupled transport of both solvent (water) and solutes (salts). [ 2 ] The solution-friction model is derived on a pore-flow or viscous flow mechanism, but extends its applicability by incorporating the force balances on the species transporting through the membrane. This inclusion allows for a detailed understanding of the interdependent fluxes of water and salt, influenced by interactions between salt ions and water molecules. The SF model has been able to successfully describe the transport of water and salt in RO membranes, showing good agreement with experiments. [ 1 ] [ 4 ] [ 5 ] [ 6 ] The development of the SF model also corrects the misconception that RO water transport is a diffusion -based process. [ 2 ] [ 7 ] Ion transport through the RO membrane is driven by the gradient of chemical potential within the membrane. The solution-friction model describes this transport by considering the frictions between ions, ions and water, and ions and membrane. The force balance for an ion is given by the equation: [ 2 ] − ∇ μ i = R T f i − w ( v i − v w ) + R T f i − m v i {\displaystyle -\nabla \mu _{i}=RTf_{i-w}(v_{i}-v_{w})+RTf_{i-m}v_{i}} Note that the membrane is stationary and its velocity v m {\displaystyle v_{m}} is therefore set to zero. By considering only the coordinate perpendicular to the membrane surface, the ion flux ( v i {\displaystyle v_{i}} ) governed by diffusion , electromigration , and advection can be expressed as: [ 2 ] v i = K w , i v w − K w , i D i , m ( d ln ⁡ c i d x + z i d φ d x ) {\displaystyle v_{i}=K_{w,i}v_{w}-K_{w,i}D_{i,m}\left({\frac {d\ln c_{i}}{dx}}+z_{i}{\frac {d\varphi }{dx}}\right)} Water transport is governed by the gradient of total pressure, counterbalanced by water-membrane and ion-water frictions. The balance is expressed as: [ 2 ] − ∇ P tot = R T f w − m v w + R T ∑ i f i − w c i ( v w − v i ) {\displaystyle -\nabla P^{\text{tot}}=RTf_{w-m}v_{w}+RT\sum _{i}f_{i-w}c_{i}(v_{w}-v_{i})} Substituting the expression of ion velocity into water velocity, we arrive at the following expression for the force balance on water: [ 2 ] − 1 R T d P tot d x = f w − m v w + ∑ f i − w c i ( 1 − K w , i ) + ∑ K w , i d c i d x + ∑ K w , i c i z i d φ d x {\displaystyle -{\frac {1}{RT}}{\frac {dP^{\text{tot}}}{dx}}=f_{w-m}v_{w}+\sum f_{i-w}c_{i}(1-K_{w,i})+\sum K_{w,i}{\frac {dc_{i}}{dx}}+\sum K_{w,i}c_{i}z_{i}{\frac {d\varphi }{dx}}} When ion-membrane friction is negligible (i.e., K w , i = 1 {\displaystyle K_{w,i}=1} ), this equation can be written as − 1 R T d P tot d x = f w − m v w + + ∑ d c i d x + ∑ c i z i d φ d x {\displaystyle -{\frac {1}{RT}}{\frac {dP^{\text{tot}}}{dx}}=f_{w-m}v_{w}++\sum {\frac {dc_{i}}{dx}}+\sum c_{i}z_{i}{\frac {d\varphi }{dx}}} The equation indicates that the water permeance is influenced by the electrical potential gradient inside the membrane, which has been verified by salt permeation through highly charged Nafion membranes. [ 8 ] Due to the interactions between ions and water, increasing salt concentration decreases the water permeance. 
Nevertheless, a simplification can be made when a membrane has a low volumetric charge density (i.e., a low fixed charge concentration within the membrane), as in typical RO membranes. In that case the electrical potential gradient can be neglected, as it is relatively small compared to the concentration gradient. The equation for water flux can then be simplified as: [ 2 ] v w = 1 R T f w − m L m Δ P − 1 − Φ R T f w − m L m Δ Π {\displaystyle v_{w}={\frac {1}{RTf_{w-m}L_{m}}}\Delta P-{\frac {1-\Phi }{RTf_{w-m}L_{m}}}\Delta \Pi } Defining 1 R T f w − m L m = A {\displaystyle {\frac {1}{RTf_{w-m}L_{m}}}=A} and 1 − Φ = σ {\displaystyle 1-\Phi =\sigma } , the water permeation velocity is obtained as: [ 2 ] v w = A ( Δ P − σ Δ Π ) {\displaystyle v_{w}=A(\Delta P-\sigma \Delta \Pi )} This equation is identical in form to the Spiegler-Kedem-Katchalsky equation, [ 9 ] [ 10 ] a classic model in irreversible thermodynamics for water transport through semipermeable membranes . This ensures that the SF model aligns with basic thermodynamic principles. [ 2 ]
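A minimal numerical sketch of the simplified water-flux expression v w = A(ΔP − σΔΠ), with the osmotic pressures estimated from the van 't Hoff relation. All parameter values below are assumed, RO-like figures chosen for illustration and are not taken from the article:

```python
R, T = 8.314, 298.15            # gas constant (J mol^-1 K^-1) and temperature (K)

def vant_hoff_osmotic_pressure(c_salt_mol_m3, i=2):
    """Osmotic pressure (Pa) of a dilute salt solution, Pi = i * c * R * T."""
    return i * c_salt_mol_m3 * R * T

# Assumed, RO-like values (not from the article):
A = 3.0e-12        # water permeance, m s^-1 Pa^-1 (~1 L m^-2 h^-1 bar^-1 in order of magnitude)
sigma = 0.98       # reflection coefficient, 1 - Phi
dP = 55e5          # applied pressure difference, Pa (55 bar)
d_pi = vant_hoff_osmotic_pressure(600) - vant_hoff_osmotic_pressure(10)
#      feed ~0.6 M NaCl (seawater-like) vs a dilute permeate, in mol m^-3

v_w = A * (dP - sigma * d_pi)   # water flux (velocity), m s^-1
print(f"delta Pi = {d_pi/1e5:.1f} bar, v_w = {v_w*3.6e6:.1f} L m^-2 h^-1")
# -> roughly 29 bar of osmotic back-pressure and ~28 L m^-2 h^-1 with these assumptions
```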
https://en.wikipedia.org/wiki/Solution-friction_model
Solution 16 was the first Brazilian all-in-one PC , introduced by Prológica in 1986. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] Based on the Intel 8088 , it was launched in the national market as the first integrated computer with a 16-bit, 4.77 MHz microprocessor; it had 254 KB of RAM, expandable up to 512 KB, and two 5-1/4" floppy disk drives with capacity for up to 320 KiB of storage. [ 6 ] [ 3 ] The Solution 16 came with an operating system named SO16 (Portuguese for " Sistema Operacional 16", meaning "Operating System 16" ). [ 5 ] [ 7 ] [ 8 ] This was a translated copy of MS-DOS 2.11 , which was the first version to support hard disks, directories, and character tables for international languages. Microsoft filed a lawsuit in Brazil, accusing the manufacturer of piracy, and later forced the company to pass on a percentage of the profits made with the software, which ended up putting the company in financial difficulties. [ 9 ] Two floppy disk drives, double density, double-sided, 360 kB. [ 6 ] Audio cables were supplied with the computer for connection with a regular tape recorder. [ 6 ]
https://en.wikipedia.org/wiki/Solution_16
In game theory , a solution concept is a formal rule for predicting how a game will be played. These predictions are called "solutions", and describe which strategies will be adopted by players and, therefore, the result of the game. The most commonly used solution concepts are equilibrium concepts , most famously Nash equilibrium . Many solution concepts, for many games, will result in more than one solution. This puts any one of the solutions in doubt, so a game theorist may apply a refinement to narrow down the solutions. Each successive solution concept presented in the following improves on its predecessor by eliminating implausible equilibria in richer games. Let Γ {\displaystyle \Gamma } be the class of all games and, for each game G ∈ Γ {\displaystyle G\in \Gamma } , let S G {\displaystyle S_{G}} be the set of strategy profiles of G {\displaystyle G} . A solution concept is an element of the direct product Π G ∈ Γ 2 S G ; {\displaystyle \Pi _{G\in \Gamma }2^{S_{G}};} i.e ., a function F : Γ → ⋃ G ∈ Γ 2 S G {\displaystyle F:\Gamma \rightarrow \bigcup \nolimits _{G\in \Gamma }2^{S_{G}}} such that F ( G ) ⊆ S G {\displaystyle F(G)\subseteq S_{G}} for all G ∈ Γ . {\displaystyle G\in \Gamma .} In this solution concept, players are assumed to be rational and so strictly dominated strategies are eliminated from the set of strategies that might feasibly be played. A strategy is strictly dominated when there is some other strategy available to the player that always has a higher payoff, regardless of the strategies that the other players choose. (Strictly dominated strategies are also important in minimax game-tree search .) For example, in the (single period) prisoners' dilemma (shown below), cooperate is strictly dominated by defect for both players because either player is always better off playing defect , regardless of what his opponent does. A Nash equilibrium is a strategy profile (a strategy profile specifies a strategy for every player, e.g. in the above prisoners' dilemma game ( cooperate , defect ) specifies that prisoner 1 plays cooperate and prisoner 2 plays defect ) in which every strategy played by every agent (agent i) is a best response to every other strategy played by all the other opponents (agents j for every j≠i) . A strategy by a player is a best response to another player's strategy if there is no other strategy that could be played that would yield a higher pay-off in any situation in which the other player's strategy is played. In some games, there are multiple Nash equilibria, but not all of them are realistic. In dynamic games, backward induction can be used to eliminate unrealistic Nash equilibria. Backward induction assumes that players are rational and will make the best decisions based on their future expectations. This eliminates noncredible threats, which are threats that a player would not carry out if they were ever called upon to do so. For example, consider a dynamic game with an incumbent firm and a potential entrant to the industry. The incumbent has a monopoly and wants to maintain its market share. If the entrant enters, the incumbent can either fight or accommodate the entrant. If the incumbent accommodates, the entrant will enter and gain profit. If the incumbent fights, it will lower its prices, run the entrant out of business (incurring exit costs), and damage its own profits. The best response for the incumbent if the entrant enters is to accommodate, and the best response for the entrant if the incumbent accommodates is to enter. 
This results in a Nash equilibrium. However, if the incumbent chooses to fight, the best response for the entrant is to not enter. If the entrant does not enter, it does not matter what the incumbent chooses to do. Hence, fight can be considered a best response for the incumbent if the entrant does not enter, resulting in another Nash equilibrium. However, this second Nash equilibrium can be eliminated by backward induction because it relies on a noncredible threat from the incumbent. By the time the incumbent reaches the decision node where it can choose to fight, it would be irrational to do so because the entrant has already entered. Therefore, backward induction eliminates this unrealistic Nash equilibrium. See also: A generalization of backward induction is subgame perfection. Backward induction assumes that all future play will be rational. In subgame perfect equilibria, play in every subgame is rational (specifically a Nash equilibrium). Backward induction can only be used in terminating (finite) games of definite length and cannot be applied to games with imperfect information . In these cases, subgame perfection can be used. The eliminated Nash equilibrium described above is subgame imperfect because it is not a Nash equilibrium of the subgame that starts at the node reached once the entrant has entered. Sometimes subgame perfection does not impose a large enough restriction on unreasonable outcomes. For example, since subgames cannot cut through information sets , a game of imperfect information may have only one subgame – itself – and hence subgame perfection cannot be used to eliminate any Nash equilibria. A perfect Bayesian equilibrium (PBE) is a specification of players' strategies and beliefs about which node in the information set has been reached by the play of the game. A belief about a decision node is the probability that a particular player thinks that node is or will be in play (on the equilibrium path ). In particular, the intuition of PBE is that it specifies player strategies that are rational given the player beliefs it specifies and the beliefs it specifies are consistent with the strategies it specifies. In a Bayesian game a strategy determines what a player plays at every information set controlled by that player. The requirement that beliefs are consistent with strategies is something not specified by subgame perfection. Hence, PBE is a consistency condition on players' beliefs. Just as in a Nash equilibrium no player's strategy is strictly dominated, in a PBE, for any information set no player's strategy is strictly dominated beginning at that information set. That is, for every belief that the player could hold at that information set there is no strategy that yields a greater expected payoff for that player. Unlike the above solution concepts, no player's strategy is strictly dominated beginning at any information set even if it is off the equilibrium path. Thus in PBE, players cannot threaten to play strategies that are strictly dominated beginning at any information set off the equilibrium path. The Bayesian in the name of this solution concept alludes to the fact that players update their beliefs according to Bayes' theorem . They calculate probabilities given what has already taken place in the game. Forward induction is so called because just as backward induction assumes future play will be rational, forward induction assumes past play was rational. Where a player does not know what type another player is (i.e. 
there is imperfect and asymmetric information), that player may form a belief of what type that player is by observing that player's past actions. Hence the belief formed by that player of what the probability of the opponent being a certain type is based on the past play of that opponent being rational. A player may elect to signal his type through his actions. Kohlberg and Mertens (1986) introduced the solution concept of Stable equilibrium, a refinement that satisfies forward induction. A counter-example was found where such a stable equilibrium did not satisfy backward induction. To resolve the problem Jean-François Mertens introduced what game theorists now call Mertens-stable equilibrium concept, probably the first solution concept satisfying both forward and backward induction. Forward induction yields a unique solution for the burning money game .
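As a concrete illustration of the simplest of these solution concepts, the following sketch enumerates the pure-strategy Nash equilibria of a two-player game given as a payoff table, using conventional prisoners' dilemma payoffs (the specific numbers are illustrative, not taken from the article):

```python
from itertools import product

def pure_nash_equilibria(payoffs):
    """All pure-strategy Nash equilibria of a two-player game.
    payoffs[(a1, a2)] = (u1, u2), utilities for players 1 and 2."""
    a1s = {a for a, _ in payoffs}
    a2s = {b for _, b in payoffs}
    equilibria = []
    for a, b in product(a1s, a2s):
        u1, u2 = payoffs[(a, b)]
        best1 = all(payoffs[(x, b)][0] <= u1 for x in a1s)   # no profitable deviation for player 1
        best2 = all(payoffs[(a, y)][1] <= u2 for y in a2s)   # ... nor for player 2
        if best1 and best2:
            equilibria.append((a, b))
    return equilibria

# Prisoners' dilemma (higher payoff = better); defect strictly dominates cooperate.
pd = {("cooperate", "cooperate"): (-1, -1), ("cooperate", "defect"): (-3, 0),
      ("defect", "cooperate"): (0, -3),     ("defect", "defect"):    (-2, -2)}
print(pure_nash_equilibria(pd))     # -> [('defect', 'defect')]
```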
https://en.wikipedia.org/wiki/Solution_concept
A solution in radicals or algebraic solution is an expression of a solution of a polynomial equation that is algebraic , that is, relies only on addition , subtraction , multiplication , division , raising to integer powers , and extraction of n th roots ( square roots , cube roots , etc.). A well-known example is the quadratic formula x = − b ± b 2 − 4 a c 2 a {\displaystyle x={\frac {-b\pm {\sqrt {b^{2}-4ac}}}{2a}}} , which expresses the solutions of the quadratic equation a x 2 + b x + c = 0 {\displaystyle ax^{2}+bx+c=0} . There exist algebraic solutions for cubic equations [ 1 ] and quartic equations , [ 2 ] which are more complicated than the quadratic formula. The Abel–Ruffini theorem , [ 3 ] : 211 and, more generally, Galois theory , state that some quintic equations do not have any algebraic solution. The same is true for every higher degree. However, for any degree there are some polynomial equations that have algebraic solutions; for example, the equation x 10 = 2 {\displaystyle x^{10}=2} can be solved as x = ± 2 10 . {\displaystyle x=\pm {\sqrt[{10}]{2}}.} The eight other solutions are nonreal complex numbers , which are also algebraic and have the form x = ± r 2 10 , {\displaystyle x=\pm r{\sqrt[{10}]{2}},} where r is a fifth root of unity , which can be expressed with two nested square roots . See also Quintic function § Other solvable quintics for various other examples in degree 5. Évariste Galois introduced a criterion allowing one to decide which equations are solvable in radicals. See Radical extension for the precise formulation of his result.
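A computer algebra system can produce such solutions in radicals directly. The sketch below assumes SymPy is available; the printed forms are indicative of SymPy's output rather than an exact transcript of its ordering:

```python
from sympy import symbols, solve

x, a, b, c = symbols("x a b c")

# The quadratic formula recovered as a solution in radicals:
print(solve(a*x**2 + b*x + c, x))
# something like: [(-b - sqrt(-4*a*c + b**2))/(2*a), (-b + sqrt(-4*a*c + b**2))/(2*a)]

# x**10 = 2: the two real solutions are +/- 2**(1/10); the other eight roots
# are nonreal but still expressible with (nested) radicals.
print([s for s in solve(x**10 - 2, x) if s.is_real])
# something like: [-2**(1/10), 2**(1/10)]
```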
https://en.wikipedia.org/wiki/Solution_in_radicals
Solution precursor plasma spray ( SPPS ) is a thermal spray process where a feedstock solution is heated and then deposited onto a substrate. Basic properties of the process are fundamentally similar to other plasma spraying processes. However, instead of injecting a powder into the plasma plume, a liquid precursor is used. The benefits of utilizing the SPPS process include the ability to create unique nanometer sized microstructures without the injection feed problems normally associated with powder systems and flexible, rapid exploration of novel precursor compositions. [ 1 ] [ 2 ] The use of a solution precursor was first reported as a coating technology by Karthikeyan et al. [ 3 ] [ 4 ] [ 5 ] In that work, Karthikeyan showed that the use of a solution precursor was in fact feasible; however, well adhered coatings could not be generated. Further work was reported in 2001, which refined the process to produce thermal barrier coatings , [ 6 ] YAG films, [ 7 ] and silicon ceramic coatings. [ 8 ] Since then, extensive research on the technology has been carried out, in large part by the University of Connecticut and Inframat Corporation . The precursor solution is formulated by dissolving salts (commonly zirconium and yttrium when used to formulate thermal barrier coatings) in a solvent. Once dissolved, the solution is then injected via a pressurized feed system. As with other thermal spray processes, feedstock material is melted and then deposited onto a substrate. Typically, the SPPS process sees material injected into a plasma plume or high velocity oxygen fuel (HVOF) combustion flame. Once the solution is injected, the droplets go through several chemical and physical changes [ 9 ] and can arrive at the substrate in several different states, from fully melted to unpyrolized. The deposition state can be manipulated through spray parameters and can be used to significantly control coating properties, such as density and strength. [ 2 ] [ 10 ] Most current research on SPPS has examined its application to create thermal barrier coatings (TBCs). These complex ceramic / metallic material systems are used to protect components in hot sections of gas turbine and diesel engines. [ 11 ] The SPPS process lends itself particularly well to the creation of these TBCs. Studies report the generation of coatings demonstrating superior durability and mechanical properties. [ 12 ] [ 13 ] [ 14 ] Superior durability is imparted by the creation of controlled through thickness vertical cracks. These cracks only slightly increase coating conductivity while allowing for strain relief of stress generated by the CTE mismatch between the coating and the substrate during cyclic heating. The generation of these through thickness cracks was systematically explored and found to be caused by depositing a controlled portion of unpyrolized material in the coating. [ 15 ] Superior mechanical properties such as bond strength and in-plane toughness result from the nanometer sized microstructures that are created by the SPPS process. Other studies have shown that engineered coatings can reduce thermal conductivity to some of the lowest reported values for TBCs. [ 16 ] [ 17 ] These low thermal conductivities were achieved through the generation of an alternating high-porosity, low-porosity microstructure or the synthesis of a low-conductivity precursor composition with rare-earth dopants . The SPPS process is adapted to existing thermal spray systems.
Application costs are significantly less than EB-PVD coatings and slightly higher than Air Plasma Spray coatings. [ 18 ]
https://en.wikipedia.org/wiki/Solution_precursor_plasma_spray
In mathematics , the solution set of a system of equations or inequalities is the set of all its solutions, that is, the values that satisfy all equations and inequalities. [ 1 ] Also, the solution set or the truth set of a statement or a predicate is the set of all values that satisfy it. If there is no solution, the solution set is the empty set . [ 2 ] In algebraic geometry , solution sets are called algebraic sets if there are no inequalities. Over the reals , and with inequalities, they are called semialgebraic sets . More generally, the solution set to an arbitrary collection E of relations ( E_i ) ( i varying in some index set I ) for a collection of unknowns (x_j)_{j ∈ J}, supposed to take values in respective spaces (X_j)_{j ∈ J}, is the set S of all solutions to the relations E , where a solution x^{(k)} is a family of values (x_j^{(k)})_{j ∈ J} ∈ ∏_{j ∈ J} X_j such that substituting (x_j)_{j ∈ J} by x^{(k)} in the collection E makes all relations "true". (Instead of relations depending on unknowns, one should speak more correctly of predicates ; the collection E is their logical conjunction , and the solution set is the inverse image of the boolean value true by the associated boolean-valued function .) The above meaning is a special case of this one, if the set of polynomials f_i is interpreted as the set of equations f_i ( x ) = 0.
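For illustration, a small sketch (assuming SymPy) computing the solution set of a single equation, of a linear system (an algebraic set), and of an inequality over the reals (a semialgebraic set):

```python
# Sketch: solution sets of an equation, a system, and an inequality (requires SymPy).
from sympy import symbols, solveset, linsolve, solve_univariate_inequality, S

x, y = symbols('x y')

# Solution set of a single equation over the reals
print(solveset(x**2 - 4, x, domain=S.Reals))                            # {-2, 2}

# Solution set of a linear system (an algebraic set)
print(linsolve([x + y - 3, x - y - 1], x, y))                           # {(2, 1)}

# Solution set of an inequality (a semialgebraic set)
print(solve_univariate_inequality(x**2 - 4 < 0, x, relational=False))   # Interval.open(-2, 2)
```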
https://en.wikipedia.org/wiki/Solution_set
Since the introduction of the marine propeller in the early 19th century, cavitation during operation has been a limiting factor in the efficiency of ships. Cavitation in marine propellers develops when the propeller operates at a high speed and reduces the efficiency of the propeller. Ever since the introduction of the propeller, solutions for cavitation have been developed and tested. A nozzle system uses a set of nozzles to help reduce the vibrations on the hull due to cavitation. This system was developed by Samsung Heavy Industries , based in South Korea. In order to reduce the hull vibrations caused by cavitation, a set of nozzles are placed on the hull of the ship directly in front of the propeller. These nozzles spray out compressed air above the propeller that creates "a macro bubble". [ 1 ] This bubble protects the hull by absorbing pressure excitations produced by cavitation. To determine the effectiveness of this nozzle system, multiple tests have been carried out with the nozzles and without them. In these tests, it was discovered that the resonance frequencies and vibration could be reduced by up to 75%. [ 1 ] Those who conducted these tests also tried two different arrangements of the nozzles to find out which one was more effective. The first arrangement used only one nozzle, and though it used considerably less power than the other option, it was less effective at reducing hull vibration. The multi-nozzle system, however, performed significantly better, but required more power to operate. [ 1 ] While this nozzle system has major drawbacks, particularly in its power requirements, the vibrations and noise are reduced considerably. Thus, to some ship owners and operators, the cost of installing these nozzles and operating them is outweighed by the benefits of reduced noise caused by their propellers. [ 1 ] The air-filled rubber membrane uses the same principles as the nozzle system to reduce cavitation in marine propellers. As the nozzle system requires a large source of energy to operate, the creators sought to develop a lower cost system. This membrane builds on the principles of the nozzle system and uses a pocket of air to prevent cavitation, but does not require nozzles or compressors. [ 2 ] While limiting the cost of operation, this membrane is able to provide equal protection to nozzles. [ 2 ] The air-filled rubber membrane is placed directly behind an operating marine propeller in the hull. As in the nozzle method, the differing characteristics of the air in the membrane and the seawater around it reduce the resonance frequency, which in turn increases the point at which cavitation is encountered. [ 2 ] The membrane is specially designed so as to reduce the frequency further. [ 2 ] This solution focuses on the materials that marine propellers are created from which is a direct factor in cavitation. While redesigning propellers would only garner an extra one or two percent efficiency in operation, changing the materials a propeller is made from has greater effects. [ 3 ] The most common blend that marine propellers are created from is the nickel-aluminum bronze blend. While this blend can resist erosion, it remains less effective when resisting cavitation. [ 3 ] An example of this method is the Royal Netherlands Navy , who began experimentation with composite materials such as resins or carbon fibers in 2011. [ 4 ] [ 3 ] These materials, when formed into a propeller, are flexible enough under pressure to deflect, which can reduce cavitation. 
[ 3 ] Other options are made from carbon fiber, epoxy resin , or even glass , and can produce a hydroelastic effect. [ 3 ] As these new propellers are able to flex even while under pressure, the risk of cavitation is reduced. [ 3 ] While replacing propellers is most efficient on ships that are currently under construction, the benefits from newer propeller materials may outweigh the costs of replacing current marine propellers. [ 3 ] Despite the initial cost of the propellers, this solution is significantly cheaper to operate, allowing lower cost long distance transit.
https://en.wikipedia.org/wiki/Solutions_for_cavitation_in_marine_propellers
A solvated electron is a free electron in a solution , in which it behaves like an anion . [ 1 ] An electron's being solvated in a solution means it is bound by the solution. [ 2 ] The notation for a solvated electron in formulas of chemical reactions is "e − ". Often, discussions of solvated electrons focus on their solutions in ammonia, which are stable for days, but solvated electrons also occur in water and many other solvents – in fact, in any solvent that mediates outer-sphere electron transfer . The solvated electron is responsible for a great deal of radiation chemistry . Liquid ammonia will dissolve all of the alkali metals and other electropositive metals such as Ca , [ 3 ] Sr , Ba , Eu , and Yb (also Mg using an electrolytic process [ 4 ] ), giving characteristic blue solutions. For alkali metals in liquid ammonia , the solution is blue when dilute and copper-colored when more concentrated (> 3 molar ). [ 5 ] These solutions conduct electricity . The blue colour of the solution is due to ammoniated electrons, which absorb energy in the visible region of light. The diffusivity of the solvated electron in liquid ammonia can be determined using potential-step chronoamperometry . [ 6 ] Solvated electrons in ammonia are the anions of salts called electrides . The reaction is reversible: evaporation of the ammonia solution produces a film of metallic sodium. A lithium–ammonia solution at −60 °C is saturated at about 15 mol% metal (MPM). When the concentration is increased in this range electrical conductivity increases from 10 −2 to 10 4 Ω −1 cm −1 (larger than liquid mercury ). At around 8 MPM, a "transition to the metallic state" (TMS) takes place (also called a "metal-to-nonmetal transition" (MNMT)). At 4 MPM a liquid-liquid phase separation takes place: the less dense gold-colored phase becomes immiscible from a denser blue phase. Above 8 MPM the solution is bronze/gold-colored. In the same concentration range the overall density decreases by 30%. Alkali metals also dissolve in some small primary amines , such as methylamine and ethylamine [ 7 ] and hexamethylphosphoramide , forming blue solutions. Tetrahydrofuran (THF) dissolves alkali metal, but a Birch reduction (see § Applications ) analogue does not proceed without a diamine ligand . [ 8 ] Solvated electron solutions of the alkaline earth metals magnesium, calcium, strontium and barium in ethylenediamine have been used to intercalate graphite with these metals. [ 9 ] Solvated electrons are involved in the reaction of alkali metals with water, even though the solvated electron has only a fleeting existence. [ 10 ] Below pH = 9.6 the hydrated electron reacts with the hydronium ion giving atomic hydrogen, which in turn can react with the hydrated electron giving hydroxide ion and usual molecular hydrogen H 2 . [ 11 ] Solvated electrons can be found even in the gas phase. This implies their possible existence in the upper atmosphere of Earth and involvement in nucleation and aerosol formation. [ 12 ] Its standard electrode potential value is −2.88 V. [ 13 ] The equivalent conductivity of 177 Mho cm 2 is similar to that of hydroxide ion . This value of equivalent conductivity corresponds to a diffusivity of 4.75 × 10 − 5 {\displaystyle \times 10^{-5}} cm 2 s −1 . 
[ 14 ] Although quite stable, the blue ammonia solutions containing solvated electrons degrade rapidly in the presence of catalysts to give colorless solutions of sodium amide : Electride salts can be isolated by the addition of macrocyclic ligands such as crown ether and cryptands to solutions containing solvated electrons. These ligands strongly bind the cations and prevent their re-reduction by the electron. The solvated electron reacts with oxygen to form a superoxide radical (O 2 .− ). [ 15 ] With nitrous oxide , solvated electrons react to form nitroxyl radicals (NO . ). [ 16 ] Solvated electrons are involved in electrode processes, a broad area with many technical applications ( electrosynthesis , electroplating , electrowinning ). A specialized use of sodium-ammonia solutions is the Birch reduction . Other reactions where sodium is used as a reducing agent also are assumed to involve solvated electrons, e.g. the use of sodium in ethanol as in the Bouveault–Blanc reduction . Work by Cullen et al. showed that metal-ammonia solutions can be used to intercalate a range of layered materials, which can then be exfoliated in polar, aprotic solvents, to produce ionic solutions of two-dimensional materials. [ 17 ] An example of this is the intercalation of graphite with potassium and ammonia, which is then exfoliated by spontaneous dissolution in THF to produce a graphenide solution. [ 18 ] The observation of the color of metal-electride solutions is generally attributed to Humphry Davy . In 1807–1809, he examined the addition of grains of potassium to gaseous ammonia (liquefaction of ammonia was invented in 1823). [ 19 ] James Ballantyne Hannay and J. Hogarth repeated the experiments with sodium in 1879–1880. [ 20 ] W. Weyl in 1864 and C. A. Seely in 1871 used liquid ammonia, whereas Hamilton Cady in 1897 related the ionizing properties of ammonia to that of water. [ 21 ] [ 22 ] [ 23 ] Charles A. Kraus measured the electrical conductance of metal ammonia solutions and in 1907 attributed it to the electrons liberated from the metal. [ 24 ] [ 25 ] In 1918, G. E. Gibson and W. L. Argo introduced the solvated electron concept. [ 26 ] They noted based on absorption spectra that different metals and different solvents ( methylamine , ethylamine ) produce the same blue color, attributed to a common species, the solvated electron. In the 1970s, solid salts containing electrons as the anion were characterized. [ 27 ]
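The diffusivity quoted above can be estimated from the equivalent conductivity using the Nernst–Einstein relation; a minimal sketch, assuming room temperature and that the relation applies to the solvated electron:

```python
# Sketch: Nernst-Einstein estimate D = lambda * R * T / (z**2 * F**2) for the solvated electron.
R = 8.314         # gas constant, J/(mol K)
T = 298.15        # assumed temperature, K
F = 96485.0       # Faraday constant, C/mol
z = 1             # charge number
lam = 177.0       # equivalent conductivity, S cm^2/mol (mho cm^2)

D = lam * R * T / (z**2 * F**2)   # result comes out in cm^2/s because lam is per cm^2
print(f"D ~ {D:.2e} cm^2/s")      # ~4.7e-5 cm^2/s, close to the quoted 4.75e-5 cm^2/s
```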
https://en.wikipedia.org/wiki/Solvated_electron
Solvated Metal Atom Dispersion is a method of producing highly reactive [ 1 ] solvated nanoparticles . Samples of a metal (or ceramic ) [ 1 ] are heated to evaporate free atoms (or species ), as in PVD evaporation . This vapor is then co-deposited with a suitable organic solvent (e.g. toluene ) [ 2 ] at very low temperatures (on the order of 70K) to form a solid mixture of the two. [ 3 ] This is then warmed towards room temperature, producing solvated metal atoms or (over time) larger clusters. Sometimes, catalyst supports (such as SiO 2 or Al 2 O 3 ) are added to improve nucleation , [ 4 ] as the process can more readily take place on surface OH groups .
https://en.wikipedia.org/wiki/Solvated_metal_atom_dispersion
Solvation describes the interaction of a solvent with dissolved molecules. Both ionized and uncharged molecules interact strongly with a solvent, and the strength and nature of this interaction influence many properties of the solute, including solubility, reactivity, and color, as well as influencing the properties of the solvent such as its viscosity and density. [ 1 ] If the attractive forces between the solvent and solute particles are greater than the attractive forces holding the solute particles together, the solvent particles pull the solute particles apart and surround them. The surrounded solute particles then move away from the solid solute and out into the solution. Ions are surrounded by a concentric shell of solvent . Solvation is the process of reorganizing solvent and solute molecules into solvation complexes and involves bond formation, hydrogen bonding , and van der Waals forces . Solvation of a solute by water is called hydration. [ 2 ] Solubility of solid compounds depends on a competition between lattice energy and solvation, including entropy effects related to changes in the solvent structure. [ 3 ] By an IUPAC definition, [ 4 ] solvation is an interaction of a solute with the solvent , which leads to stabilization of the solute species in the solution . In the solvated state, an ion or molecule in a solution is surrounded or complexed by solvent molecules. Solvated species can often be described by coordination number , and the complex stability constants . The concept of the solvation interaction can also be applied to an insoluble material, for example, solvation of functional groups on a surface of ion-exchange resin . Solvation is, in concept, distinct from solubility . Solvation or dissolution is a kinetic process and is quantified by its rate. Solubility quantifies the dynamic equilibrium state achieved when the rate of dissolution equals the rate of precipitation . The consideration of the units makes the distinction clearer. The typical unit for dissolution rate is mol/s. The units for solubility express a concentration: mass per volume (mg/mL), molarity (mol/L), etc. [ citation needed ] Solvation involves different types of intermolecular interactions: hydrogen bonding, ion–dipole and dipole–dipole attractions, and van der Waals forces. Which of these forces are at play depends on the molecular structure and properties of the solvent and solute. The similarity or complementary character of these properties between solvent and solute determines how well a solute can be solvated by a particular solvent. Solvent polarity is the most important factor in determining how well it solvates a particular solute. Polar solvents have molecular dipoles, meaning that part of the solvent molecule has more electron density than another part of the molecule. The part with more electron density will experience a partial negative charge while the part with less electron density will experience a partial positive charge. Polar solvent molecules can solvate polar solutes and ions because they can orient the appropriate partially charged portion of the molecule towards the solute through electrostatic attraction. This stabilizes the system and creates a solvation shell (or hydration shell in the case of water) around each particle of solute. The solvent molecules in the immediate vicinity of a solute particle often have a much different ordering than the rest of the solvent, and this area of differently ordered solvent molecules is called the cybotactic region. 
[ 5 ] Water is the most common and well-studied polar solvent, but others exist, such as ethanol , methanol , acetone , acetonitrile , and dimethyl sulfoxide . Polar solvents are often found to have a high dielectric constant , although other solvent scales are also used to classify solvent polarity. Polar solvents can be used to dissolve inorganic or ionic compounds such as salts. The conductivity of a solution depends on the solvation of its ions. Nonpolar solvents cannot solvate ions, and ions will be found as ion pairs. Hydrogen bonding among solvent and solute molecules depends on the ability of each to accept H-bonds, donate H-bonds, or both. Solvents that can donate H-bonds are referred to as protic, while solvents that do not contain a polarized bond to a hydrogen atom and cannot donate a hydrogen bond are called aprotic. H-bond donor ability is classified on a scale (α). [ 6 ] Protic solvents can solvate solutes that can accept hydrogen bonds. Similarly, solvents that can accept a hydrogen bond can solvate H-bond-donating solutes. The hydrogen bond acceptor ability of a solvent is classified on a scale (β). [ 7 ] Solvents such as water can both donate and accept hydrogen bonds, making them excellent at solvating solutes that can donate or accept (or both) H-bonds. Some chemical compounds experience solvatochromism , which is a change in color due to solvent polarity. This phenomenon illustrates how different solvents interact differently with the same solute. Other solvent effects include conformational or isomeric preferences and changes in the acidity of a solute. The solvation process will be thermodynamically favored only if the overall Gibbs energy of the solution is decreased, compared to the Gibbs energy of the separated solvent and solid (or gas or liquid). This means that the change in enthalpy minus the change in entropy (multiplied by the absolute temperature) is a negative value, or that the Gibbs energy of the system decreases. A negative Gibbs energy indicates a spontaneous process but does not provide information about the rate of dissolution. Solvation involves multiple steps with different energy consequences. First, a cavity must form in the solvent to make space for a solute. This is both entropically and enthalpically unfavorable, as solvent ordering increases and solvent-solvent interactions decrease. Stronger interactions among solvent molecules leads to a greater enthalpic penalty for cavity formation. Next, a particle of solute must separate from the bulk. This is enthalpically unfavorable since solute-solute interactions decrease, but when the solute particle enters the cavity, the resulting solvent-solute interactions are enthalpically favorable. Finally, as solute mixes into solvent, there is an entropy gain. [ 5 ] The enthalpy of solution is the solution enthalpy minus the enthalpy of the separate systems, whereas the entropy of solution is the corresponding difference in entropy . The solvation energy (change in Gibbs free energy ) is the change in enthalpy minus the product of temperature (in Kelvin ) times the change in entropy. Gases have a negative entropy of solution, due to the decrease in gaseous volume as gas dissolves. Since their enthalpy of solution does not decrease too much with temperature, and their entropy of solution is negative and does not vary appreciably with temperature, most gases are less soluble at higher temperatures. Enthalpy of solvation can help explain why solvation occurs with some ionic lattices but not with others. 
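As a toy illustration of the spontaneity criterion described above, the following sketch uses numbers roughly in the range reported for dissolving sodium chloride in water (a slightly endothermic dissolution that is nevertheless driven by entropy); the exact values are illustrative assumptions:

```python
# Sketch: dissolution is thermodynamically favored when dG = dH - T*dS < 0.
def gibbs_energy_of_solution(dH_kJ_per_mol, dS_J_per_mol_K, T_K):
    """Return the Gibbs energy of solution in kJ/mol."""
    return dH_kJ_per_mol - T_K * dS_J_per_mol_K / 1000.0

# Roughly NaCl-like values: small positive enthalpy, larger positive entropy term
dG = gibbs_energy_of_solution(dH_kJ_per_mol=3.9, dS_J_per_mol_K=43.0, T_K=298.15)
print(f"dG = {dG:.1f} kJ/mol -> {'favorable' if dG < 0 else 'unfavorable'}")   # about -8.9 kJ/mol
```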
The difference in energy between that which is necessary to release an ion from its lattice and the energy given off when it combines with a solvent molecule is called the enthalpy change of solution . A negative value for the enthalpy change of solution corresponds to an ion that is likely to dissolve, whereas a high positive value means that solvation will not occur. It is possible that an ion will dissolve even if it has a positive enthalpy value. The extra energy required comes from the increase in entropy that results when the ion dissolves. The introduction of entropy makes it harder to determine by calculation alone whether a substance will dissolve or not. A quantitative measure for solvation power of solvents is given by donor numbers . [ 8 ] Although early thinking was that a higher ratio of a cation's ion charge to ionic radius , or the charge density, resulted in more solvation, this does not stand up to scrutiny for ions like iron(III) or lanthanides and actinides , which are readily hydrolyzed to form insoluble (hydrous) oxides. As these are solids, it is apparent that they are not solvated. Strong solvent–solute interactions make the process of solvation more favorable. One way to compare how favorable the dissolution of a solute is in different solvents is to consider the free energy of transfer. The free energy of transfer quantifies the free energy difference between dilute solutions of a solute in two different solvents. This value essentially allows for comparison of solvation energies without including solute-solute interactions. [ 5 ] In general, thermodynamic analysis of solutions is done by modeling them as reactions. For example, if you add sodium chloride to water, the salt will dissociate into the ions sodium(+aq) and chloride(-aq). The equilibrium constant for this dissociation can be predicted by the change in Gibbs energy of this reaction. The Born equation is used to estimate Gibbs free energy of solvation of a gaseous ion. Recent simulation studies have shown that the variation in solvation energy between the ions and the surrounding water molecules underlies the mechanism of the Hofmeister series . [ 9 ] [ 1 ] Solvation (specifically, hydration ) is important for many biological structures and processes. For instance, solvation of ions and/or of charged macromolecules, like DNA and proteins, in aqueous solutions influences the formation of heterogeneous assemblies, which may be responsible for biological function. [ 10 ] As another example, protein folding occurs spontaneously, in part because of a favorable change in the interactions between the protein and the surrounding water molecules. Folded proteins are stabilized by 5-10 kcal/mol relative to the unfolded state due to a combination of solvation and the stronger intramolecular interactions in the folded protein structure , including hydrogen bonding . [ 11 ] Minimizing the number of hydrophobic side chains exposed to water by burying them in the center of a folded protein is a driving force related to solvation. Solvation also affects host–guest complexation . Many host molecules have a hydrophobic pore that readily encapsulates a hydrophobic guest. These interactions can be used in applications such as drug delivery, such that a hydrophobic drug molecule can be delivered in a biological system without needing to covalently modify the drug in order to solubilize it. Binding constants for host–guest complexes depend on the polarity of the solvent. 
[ 12 ] Hydration affects electronic and vibrational properties of biomolecules. [ 13 ] [ 14 ] Due to the importance of the effects of solvation on the structure of macromolecules, early computer simulations which attempted to model their behaviors without including the effects of solvent ( in vacuo ) could yield poor results when compared with experimental data obtained in solution. Small molecules may also adopt more compact conformations when simulated in vacuo ; this is due to favorable van der Waals interactions and intramolecular electrostatic interactions which would be dampened in the presence of a solvent. As computer power increased, it became possible to try and incorporate the effects of solvation within a simulation and the simplest way to do this is to surround the molecule being simulated with a "skin" of solvent molecules, akin to simulating the molecule within a drop of solvent if the skin is sufficiently deep. [ 15 ]
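Returning to the Born equation mentioned earlier, it can be evaluated numerically; in this sketch the ion (singly charged, radius of about 0.14 nm, dissolved in water) is an illustrative assumption:

```python
# Sketch: Born estimate of the electrostatic solvation free energy of a gaseous ion,
#   dG_solv = -(N_A * z^2 * e^2) / (8 * pi * eps0 * r) * (1 - 1/eps_r)
import math

N_A  = 6.02214076e23     # Avogadro constant, 1/mol
e    = 1.602176634e-19   # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

def born_solvation_energy_kJ_per_mol(z, radius_m, eps_r):
    dG = -(N_A * z**2 * e**2) / (8 * math.pi * eps0 * radius_m) * (1 - 1/eps_r)
    return dG / 1000.0

# Example: a singly charged ion of radius ~0.14 nm in water (eps_r ~ 78)
print(f"{born_solvation_energy_kJ_per_mol(1, 1.4e-10, 78.4):.0f} kJ/mol")  # about -490 kJ/mol
```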
https://en.wikipedia.org/wiki/Solvation
A solvation shell or solvation sheath is the solvent interface of any chemical compound or biomolecule that constitutes the solute in a solution . When the solvent is water it is called a hydration shell or hydration sphere . The number of solvent molecules surrounding each unit of solute is called the hydration number of the solute. A classic example is when water molecules arrange around a metal ion. If the metal ion is a cation, the electronegative oxygen atom of the water molecule would be attracted electrostatically to the positive charge on the metal ion. The result is a solvation shell of water molecules that surround the ion. This shell can be several molecules thick, dependent upon the charge of the ion, its distribution and spatial dimensions. A number of molecules of solvent are involved in the solvation shell around anions and cations from a dissolved salt in a solvent. Metal ions in aqueous solutions form metal aquo complexes . This number can be determined by various methods like compressibility and NMR measurements among others. The solvation shell number of a dissolved electrolyte can be linked to the statistical component of the activity coefficient of the electrolyte and to the ratio between the apparent molar volume of a dissolved electrolyte in a concentrated solution and the molar volume of the solvent (water): [ clarification needed ] \ln \gamma_{s} = \frac{h-\nu}{\nu}\ln\left(1+\frac{br}{55.5}\right) - \frac{h}{\nu}\ln\left(1-\frac{br}{55.5}\right) + \frac{br(r+h-\nu)}{55.5\left(1+\frac{br}{55.5}\right)} [ 1 ] The hydration shell (also sometimes called hydration layer) that forms around proteins is of particular importance in biochemistry. This interaction of the protein surface with the surrounding water is often referred to as protein hydration and is fundamental to the activity of the protein. [ 2 ] The hydration layer around a protein has been found to have dynamics distinct from the bulk water to a distance of 1 nm. The duration of contact of a specific water molecule with the protein surface may be in the subnanosecond range while molecular dynamics simulations suggest the time water spends in the hydration shell before mixing with the outside bulk water could be in the femtosecond to picosecond range, [ 2 ] and that near features conventionally regarded as attractive to water, such as hydrogen bond donors, the water molecules are actually relatively weakly bound and are easily displaced. [ 3 ] Solvation shell water molecules can also influence the molecular design of protein binders or inhibitors. [ 4 ] With other solvents and solutes, varying steric and kinetic factors can also affect the solvation shell.
https://en.wikipedia.org/wiki/Solvation_shell
In chemistry , solvatochromism is the phenomenon observed when the colour of a solution is different when the solute is dissolved in different solvents . [ 1 ] [ 2 ] The solvatochromic effect is the way the spectrum of a substance (the solute) varies when the substance is dissolved in a variety of solvents. In this context, the dielectric constant and hydrogen bonding capacity are the most important properties of the solvent. With various solvents there is a different effect on the electronic ground state and excited state of the solute, so that the size of the energy gap between them changes as the solvent changes. This is reflected in the absorption or emission spectrum of the solute as differences in the position, intensity, and shape of the spectroscopic bands . When the spectroscopic band occurs in the visible part of the electromagnetic spectrum , solvatochromism is observed as a change of colour . This is illustrated by Reichardt's dye . Negative solvatochromism corresponds to a hypsochromic shift (or blue shift) with increasing solvent polarity . An example of negative solvatochromism is provided by 4-(4 ′ -hydroxystyryl)- N -methylpyridinium iodide , which is red in 1-propanol , orange in methanol , and yellow in water . Positive solvatochromism corresponds to a bathochromic shift (or red shift) with increasing solvent polarity. An example of positive solvatochromism is provided by 4,4'-bis(dimethylamino)fuchsone , which is orange in toluene , red in acetone . The main value of the concept of solvatochromism is the context it provides to predict colors of solutions. Solvatochromism can in principle be used in sensors and in molecular electronics for construction of molecular switches . Solvatochromic dyes are used to measure solvent parameters, which can be used to explain solubility phenomena and predict suitable solvents for particular uses. Solvatochromism of the photoluminescence / fluorescence of carbon nanotubes has been identified and used for optical sensor applications. In one such application, the wavelength of the fluorescence of peptide-coated carbon nanotubes was found to change when exposed to explosives , facilitating detection. [ 3 ] However, more recently the small chromophore solvatochromism hypothesis has been challenged for carbon nanotubes in light of older and newer data showing electrochromic behavior. [ 4 ] [ 5 ] [ 6 ] These and other observations regarding non-linear processes on the semiconducting nanotube suggest colloidal models will require new interpretations that are in line with classic semiconductor optical processes, including electrochemical processes, rather than small molecule physical descriptions. Conflicting hypotheses may be due to the fact that the nanotube is only a single-atom-thick material interface, unlike other "bulk" nanomaterials.
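Solvatochromic shifts are often expressed as molar transition energies rather than wavelengths (this is the basis of the E_T(30) polarity scale); a small sketch of the conversion, where the wavelengths below are hypothetical values chosen only to illustrate a negative solvatochromic blue shift:

```python
# Sketch: convert an absorption maximum to a molar transition energy,
#   E_T [kcal/mol] = 28591 / lambda_max [nm]
def transition_energy_kcal_per_mol(lambda_max_nm):
    return 28591.0 / lambda_max_nm

# Hypothetical dye showing negative solvatochromism: a blue shift (shorter
# wavelength) in the more polar solvent means a larger transition energy.
for solvent, lmax in [("toluene", 550.0), ("water", 453.0)]:
    e_t = transition_energy_kcal_per_mol(lmax)
    print(f"{solvent}: lambda_max = {lmax:.0f} nm -> E_T = {e_t:.1f} kcal/mol")
```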
https://en.wikipedia.org/wiki/Solvatochromism
Solvatten is a simple portable device which uses sunlight to purify water for drinking. It was invented by Petra Wadström and was intended mainly for domestic use in the developing world. Provided the sun is strong enough, it takes two to six hours to produce ten litres of drinking water. [ 1 ] It works through a combination of the natural ultra-violet radiation and heat ( infra-red radiation ) in sunlight, and also incorporates a filter mesh. [ 2 ] The device is marketed by a company, Solvatten AB, founded in 2006. [ 1 ] The Solvatten device consists of two hinged parts which can open in the manner of a book, revealing two transparent plastic surfaces. Each half can hold five litres of water. When the device is placed in the sun, the plastic allows ultra-violet radiation to reach the water, which is also heated by the sunlight. It becomes safe to drink within two to six hours. [ 2 ] Use of the device can reduce the use of wood to boil water, thus acting to limit deforestation and carbon dioxide emissions . [ 2 ] The device has received several awards.
https://en.wikipedia.org/wiki/Solvatten
The Solvay process or ammonia–soda process is the major industrial process for the production of sodium carbonate (soda ash, Na 2 CO 3 ). The ammonia–soda process was developed into its modern form by the Belgian chemist Ernest Solvay during the 1860s. [ 1 ] The ingredients for this are readily available and inexpensive: salt brine (from inland sources or from the sea) and limestone (from quarries). The worldwide production of soda ash in 2005 was estimated at 42 million tonnes, [ 2 ] which is more than six kilograms (13 lb) per year for each person on Earth. Solvay-based chemical plants now produce roughly three-quarters of this supply, with the remaining being mined from natural deposits. This method superseded the Leblanc process . The name "soda ash" is based on the principal historical method of obtaining alkali, which was by using water to extract it from the ashes of certain plants. Wood fires yielded potash and its predominant ingredient potassium carbonate (K 2 CO 3 ), whereas the ashes from these special plants yielded "soda ash" and its predominant ingredient sodium carbonate (Na 2 CO 3 ). The word "soda" (from the Middle Latin) originally referred to certain plants that grow in salt solubles; it was discovered that the ashes of these plants yielded the useful alkali soda ash. The cultivation of such plants reached a particularly high state of development in the 18th century in Spain, where the plants are named barrilla (or " barilla " in English). [ 3 ] [ 4 ] [ 5 ] The ashes of kelp also yield soda ash and were the basis of an enormous 18th-century industry in Scotland. [ 6 ] Alkali was also mined from dry lakebeds in Egypt. By the late 18th century these sources were insufficient to meet Europe's burgeoning demand for alkali for soap, textile, and glass industries. [ 7 ] In 1791, the French physician Nicolas Leblanc developed a method to manufacture soda ash using salt, limestone , sulfuric acid , and coal . Although the Leblanc process came to dominate alkali production in the early 19th century, the expense of its inputs and its polluting byproducts (including hydrogen chloride gas) made it apparent that it was far from an ideal solution. [ 7 ] [ 8 ] It has been reported that in 1811 French physicist Augustin Jean Fresnel discovered that sodium bicarbonate precipitates when carbon dioxide is bubbled through ammonia-containing brines – which is the chemical reaction central to the Solvay process. The discovery wasn't published. As has been noted by Desmond Reilly, "The story of the evolution of the ammonium–soda process is an interesting example of the way in which a discovery can be made and then laid aside and not applied for a considerable time afterwards." [ 9 ] Serious consideration of this reaction as the basis of an industrial process dates from the British patent issued in 1834 to H. G. Dyar and J. Hemming. There were several attempts to reduce this reaction to industrial practice, with varying success. In 1861, Belgian industrial chemist Ernest Solvay turned his attention to the problem; he was apparently largely unaware of the extensive earlier work. [ 8 ] His solution was a 24 m (79 ft) gas absorption tower in which carbon dioxide bubbled up through a descending flow of brine. This, together with efficient recovery and recycling of the ammonia, proved effective. By 1864 Solvay and his brother Alfred had acquired financial backing and constructed a plant in Couillet , today a suburb of the Belgian town of Charleroi . 
The new process proved more economical and less polluting than the Leblanc method, and its use spread. In 1874, the Solvays expanded their facilities with a new, larger plant at Nancy , France. In the same year, Ludwig Mond visited Solvay in Belgium and acquired rights to use the new technology. He and John Brunner formed the firm of Brunner, Mond & Co. , and built a Solvay plant at Winnington , near Northwich , Cheshire , England. The facility began operating in 1874. Mond was instrumental in making the Solvay process a commercial success. He made several refinements between 1873 and 1880 that removed byproducts that could slow or halt the process. In 1884, the Solvay brothers licensed Americans William B. Cogswell and Rowland Hazard to produce soda ash in the US, and formed a joint venture ( Solvay Process Company ) to build and operate a plant in Solvay, New York . By the 1890s, Solvay-process plants produced the majority of the world's soda ash. In 1938 large deposits of the mineral trona were discovered near the Green River in Wyoming, from which sodium carbonate can be extracted more cheaply than it can be produced by the Solvay process. The original Solvay New York plant closed in 1986, replaced in the US by a factory in Green River. Throughout the rest of the world, the Solvay process remains the major source of soda ash. The Solvay process results in soda ash (predominantly sodium carbonate (Na 2 CO 3 )) from brine (as a source of sodium chloride (NaCl)) and from limestone (as a source of calcium carbonate (CaCO 3 )). [ 8 ] The overall process is: 2 NaCl + CaCO 3 → Na 2 CO 3 + CaCl 2 . The actual implementation of this global, overall reaction is intricate. [ 10 ] [ 11 ] [ 12 ] A simplified description can be given using the four different, interacting chemical reactions illustrated in the figure. In the first step in the process, carbon dioxide (CO 2 ) passes through a concentrated aqueous solution of sodium chloride (table salt, NaCl) and ammonia (NH 3 ): NaCl + CO 2 + NH 3 + H 2 O → NaHCO 3 + NH 4 Cl (I). In industrial practice, the reaction is carried out by passing concentrated brine (salt water) through two towers. In the first, ammonia bubbles up through the brine and is absorbed by it. In the second, carbon dioxide bubbles up through the ammoniated brine, and sodium bicarbonate (baking soda) precipitates out of the solution. Note that, in a basic solution , NaHCO 3 is less water-soluble than sodium chloride. The ammonia (NH 3 ) buffers the solution at a basic (high) pH ; without the ammonia, a hydrochloric acid byproduct would render the solution acidic , and arrest the precipitation. Here, NH 3 along with ammoniacal brine acts as a mother liquor . The necessary ammonia "catalyst" for reaction (I) is reclaimed in a later step, and relatively little ammonia is consumed. The carbon dioxide required for reaction (I) is produced by heating (" calcination ") of the limestone at 950–1100 °C, and by calcination of the sodium bicarbonate (see below). The calcium carbonate (CaCO 3 ) in the limestone is partially converted to quicklime (calcium oxide (CaO)) and carbon dioxide: CaCO 3 → CaO + CO 2 (II). The sodium bicarbonate (NaHCO 3 ) that precipitates out in reaction (I) is filtered out from the hot ammonium chloride (NH 4 Cl) solution, and the solution is then reacted with the quicklime (calcium oxide (CaO)) left over from heating the limestone in step (II): 2 NH 4 Cl + CaO → 2 NH 3 + CaCl 2 + H 2 O (III). CaO makes a strong basic solution. The ammonia from reaction (III) is recycled back to the initial brine solution of reaction (I). 
The sodium bicarbonate (NaHCO 3 ) precipitate from reaction (I) is then converted to the final product, sodium carbonate (washing soda: Na 2 CO 3 ), by calcination (160–230 °C), producing water and carbon dioxide as byproducts: 2 NaHCO 3 → Na 2 CO 3 + H 2 O + CO 2 (IV). The carbon dioxide from step (IV) is recovered for re-use in step (I). When properly designed and operated, a Solvay plant can reclaim almost all its ammonia, and consumes only small amounts of additional ammonia to make up for losses. The only major inputs to the Solvay process are salt, limestone and thermal energy , and its only major byproduct is calcium chloride , which is sometimes sold as road salt . After the invention of the Haber process and other new ammonia-producing processes in the 1910s and 1920s, the price of ammonia dropped, and there was less need to reclaim it. So in the modified Solvay process developed by the Chinese chemist Hou Debang in the 1930s, the first few steps are the same as the Solvay process, but the CaCl 2 is supplanted by ammonium chloride (NH 4 Cl). Instead of treating the remaining solution with lime, carbon dioxide and ammonia are pumped into the solution, then sodium chloride is added until the solution saturates at 40 °C. Next, the solution is cooled to 10 °C. Ammonium chloride precipitates and is removed by filtration, and the solution is recycled to produce more sodium carbonate. Hou's process eliminates the production of calcium chloride. The byproduct ammonium chloride can be refined, used as a fertilizer and may have greater commercial value than CaCl 2 , thus reducing the extent of waste beds. Additional details of the industrial implementation of this process are available in the report prepared for the European Soda Ash Producer's Association. [ 11 ] The principal byproduct of the Solvay process is calcium chloride (CaCl 2 ) in aqueous solution. The process has other waste and byproducts as well. [ 11 ] Not all of the limestone that is calcined is converted to quicklime and carbon dioxide (in reaction II); the residual calcium carbonate and other components of the limestone become wastes. In addition, the salt brine used by the process is usually purified to remove magnesium and calcium ions, typically to form carbonates ( MgCO 3 , CaCO 3 ); otherwise, these impurities would lead to scale in the various reaction vessels and towers. These carbonates are additional waste products. In inland plants, such as that in Solvay, New York , the byproducts have been deposited in "waste beds"; the weight of material deposited in these waste beds exceeded that of the soda ash produced by about 50%. These waste beds have led to water pollution, principally by calcium and chloride. The waste beds in Solvay, New York substantially increased the salinity in nearby Onondaga Lake , which used to be among the most polluted lakes in the U.S. [ 13 ] and is a Superfund pollution site. [ 14 ] As such waste beds age, they do begin to support plant communities, which have been the subject of several scientific studies. [ 15 ] [ 16 ] At seaside locations, such as those at Saurashtra , Gujarat, India, [ 17 ] the CaCl 2 solution may be discharged directly into the sea, apparently without substantial environmental harm (although small amounts of heavy metals in it may be a problem). The major concern is that the discharge location falls within the Marine National Park of the Gulf of Kutch , which serves as a habitat for coral reefs, seagrass and seaweed communities. 
At Osborne, South Australia , [ 18 ] a settling pond is now used to remove 99% of the CaCl 2 , as the former discharge was silting up the shipping channel. At Rosignano Solvay in Tuscany, Italy, the limestone waste produced by the Solvay factory has changed the landscape, producing the "Spiagge Bianche" ("White Beaches"). A report published in 1999 by the United Nations Environment Programme (UNEP) listed Spiagge Bianche among the priority pollution hot spots in the coastal areas of the Mediterranean Sea. [ 19 ] Variations in the Solvay process have been proposed for carbon sequestration . One idea is to react carbon dioxide, produced perhaps by the combustion of coal, to form solid carbonates (such as sodium bicarbonate) that could be permanently stored, thus avoiding carbon dioxide emission into the atmosphere. [ 20 ] [ 21 ] The Solvay process could be modified so that the overall reaction fixes carbon dioxide as solid sodium bicarbonate. Variations in the Solvay process have been proposed to convert carbon dioxide emissions into sodium carbonates, but carbon sequestration by calcium or magnesium carbonates appears more promising. [ dubious – discuss ] However, the amount of carbon dioxide which can be used for carbon sequestration with calcium or magnesium (when compared to the total amount of carbon dioxide exhausted by mankind) is very low. This is primarily because it is far more feasible to capture carbon dioxide from controlled and concentrated emission sources, such as coal-fired power plants, than from dispersed, small-scale sources such as small fires, vehicle exhaust, or human respiration. Moreover, a variation on the Solvay process would most probably add an additional energy-consuming step, which would increase carbon dioxide emissions unless carbon-neutral energy sources like hydropower , nuclear energy , wind or solar power are used.
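For a sense of scale, here is a back-of-the-envelope sketch of the ideal raw-material demand implied by the overall reaction 2 NaCl + CaCO3 → Na2CO3 + CaCl2 given earlier; it ignores losses, purge streams, and the fuel burned for calcination, so real plants consume more:

```python
# Sketch: ideal Solvay stoichiometry per tonne of soda ash (losses ignored).
M = {"NaCl": 58.44, "CaCO3": 100.09, "Na2CO3": 105.99, "CaCl2": 110.98}  # molar masses, g/mol

mol_na2co3 = 1.0e6 / M["Na2CO3"]                        # mol of Na2CO3 in one tonne

salt_needed      = 2 * mol_na2co3 * M["NaCl"]  / 1e6    # tonnes of NaCl
limestone_needed = 1 * mol_na2co3 * M["CaCO3"] / 1e6    # tonnes of CaCO3
cacl2_byproduct  = 1 * mol_na2co3 * M["CaCl2"] / 1e6    # tonnes of CaCl2 byproduct

print(f"per tonne Na2CO3: {salt_needed:.2f} t NaCl, "
      f"{limestone_needed:.2f} t CaCO3, {cacl2_byproduct:.2f} t CaCl2")
# roughly 1.10 t NaCl and 0.94 t CaCO3 in, 1.05 t CaCl2 out
```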
https://en.wikipedia.org/wiki/Solvay_process
A solvent (from the Latin solvō , "loosen, untie, solve") is a substance that dissolves a solute, resulting in a solution . A solvent is usually a liquid but can also be a solid, a gas, or a supercritical fluid . Water is a solvent for polar molecules , and the most common solvent used by living things; all the ions and proteins in a cell are dissolved in water within the cell. Major uses of solvents are in paints, paint removers, inks, and dry cleaning. [ 2 ] Specific uses for organic solvents are in dry cleaning (e.g. tetrachloroethylene ); as paint thinners ( toluene , turpentine ); as nail polish removers and solvents of glue ( acetone , methyl acetate , ethyl acetate ); in spot removers ( hexane , petrol ether); in detergents ( citrus terpenes ); and in perfumes ( ethanol ). Solvents find various applications in chemical, pharmaceutical , oil, and gas industries, including in chemical syntheses and purification processes. Some petrochemical solvents are highly toxic and emit volatile organic compounds . Biobased solvents are usually more expensive, but ideally less toxic and biodegradable . Biogenic raw materials usable for solvent production are, for example, lignocellulose , starch and sucrose , but also waste and byproducts from other industries (such as terpenes , vegetable oils and animal fats ). [ 3 ] When one substance is dissolved into another, a solution is formed. [ 4 ] This is opposed to the situation when the compounds are insoluble like sand in water. In a solution, all of the ingredients are uniformly distributed at a molecular level and no residue remains. A solvent-solute mixture consists of a single phase with all solute molecules occurring as solvates (solvent-solute complexes ), as opposed to separate continuous phases as in suspensions, emulsions and other types of non-solution mixtures. The ability of one compound to be dissolved in another is known as solubility; if this occurs in all proportions, it is called miscible . In addition to mixing, the substances in a solution interact with each other at the molecular level. When something is dissolved, molecules of the solvent arrange around molecules of the solute. Heat transfer is involved and entropy is increased, making the solution more thermodynamically stable than the solute and solvent separately. This arrangement is mediated by the respective chemical properties of the solvent and solute, such as hydrogen bonding , dipole moment and polarizability . [ 5 ] Solvation does not cause a chemical reaction or chemical configuration changes in the solute. However, solvation resembles a coordination complex formation reaction, often with considerable energetics (heat of solvation and entropy of solvation) and is thus far from a neutral process. When one substance dissolves into another, a solution is formed. A solution is a homogeneous mixture consisting of a solute dissolved into a solvent. The solute is the substance that is being dissolved, while the solvent is the dissolving medium. Solutions can be formed with many different types and forms of solutes and solvents. Solvents can be broadly classified into two categories: polar and non-polar . A special case is elemental mercury , whose solutions are known as amalgams ; also, other metal solutions exist which are liquid at room temperature. Generally, the dielectric constant of the solvent provides a rough measure of a solvent's polarity. The strong polarity of water is indicated by its high dielectric constant of 88 (at 0 °C). 
[ 6 ] Solvents with a dielectric constant of less than 15 are generally considered to be nonpolar. [ 7 ] The dielectric constant measures the solvent's tendency to partly cancel the field strength of the electric field of a charged particle immersed in it. This reduction is then compared to the field strength of the charged particle in a vacuum. [ 7 ] Heuristically, the dielectric constant of a solvent can be thought of as its ability to reduce the solute's effective internal charge . Generally, the dielectric constant of a solvent is an acceptable predictor of the solvent's ability to dissolve common ionic compounds , such as salts. Dielectric constants are not the only measure of polarity. Because solvents are used by chemists to carry out chemical reactions or observe chemical and biological phenomena, more specific measures of polarity are required. Most of these measures are sensitive to chemical structure. The Grunwald–Winstein m Y scale measures polarity in terms of solvent influence on buildup of positive charge of a solute during a chemical reaction. Kosower 's Z scale measures polarity in terms of the influence of the solvent on UV -absorption maxima of a salt, usually pyridinium iodide or the pyridinium zwitterion . [ 8 ] Donor number and donor acceptor scale measures polarity in terms of how a solvent interacts with specific substances, like a strong Lewis acid or a strong Lewis base. [ 9 ] The Hildebrand parameter is the square root of cohesive energy density . It can be used with nonpolar compounds, but cannot accommodate complex chemistry. Reichardt's dye, a solvatochromic dye that changes color in response to polarity, gives a scale of E T (30) values. E T is the transition energy between the ground state and the lowest excited state in kcal/mol, and (30) identifies the dye. Another, roughly correlated scale ( E T (33)) can be defined with Nile red . Gregory's solvent ϸ parameter is a quantum chemically derived charge density parameter. [ 10 ] This parameter seems to reproduce many of the experimental solvent parameters (especially the donor and acceptor numbers) using this charge decomposition analysis approach, with an electrostatic basis. The ϸ parameter was originally developed to quantify and explain the Hofmeister series by quantifying polyatomic ions and the monatomic ions in a united manner. The polarity, dipole moment, polarizability and hydrogen bonding of a solvent determines what type of compounds it is able to dissolve and with what other solvents or liquid compounds it is miscible . Generally, polar solvents dissolve polar compounds best and non-polar solvents dissolve non-polar compounds best; hence " like dissolves like ". Strongly polar compounds like sugars (e.g. sucrose ) or ionic compounds, like inorganic salts (e.g. table salt ) dissolve only in very polar solvents like water, while strongly non-polar compounds like oils or waxes dissolve only in very non-polar organic solvents like hexane . Similarly, water and hexane (or vinegar and vegetable oil) are not miscible with each other and will quickly separate into two layers even after being shaken well. Polarity can be separated to different contributions. For example, the Kamlet-Taft parameters are dipolarity/polarizability ( π* ), hydrogen-bonding acidity ( α ) and hydrogen-bonding basicity ( β ). These can be calculated from the wavelength shifts of 3–6 different solvatochromic dyes in the solvent, usually including Reichardt's dye , nitroaniline and diethylnitroaniline . 
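The dielectric-constant rule of thumb just described, combined with the protic/aprotic distinction taken up below, can be written as a tiny sketch; the dielectric constants are approximate literature values near room temperature, and the donor flag simply records whether the molecule has an O–H or N–H bond:

```python
# Sketch: rule-of-thumb solvent classification from dielectric constant and H-bond donation.
def classify(dielectric_constant, can_donate_h_bond):
    if dielectric_constant < 15:
        return "nonpolar"
    return "polar protic" if can_donate_h_bond else "polar aprotic"

solvents = {
    # name: (approximate dielectric constant, O-H/N-H donor?)
    "hexane":       (1.9,  False),
    "acetone":      (21.0, False),
    "acetonitrile": (37.5, False),
    "ethanol":      (24.5, True),
    "water":        (80.1, True),
}
for name, (eps, donor) in solvents.items():
    print(f"{name:12s} eps ~ {eps:5.1f} -> {classify(eps, donor)}")
```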
Another option, Hansen solubility parameters , separates the cohesive energy density into dispersion, polar, and hydrogen bonding contributions. Solvents with a dielectric constant (more accurately, relative static permittivity ) greater than 15 (i.e. polar or polarizable) can be further divided into protic and aprotic. Protic solvents, such as water , solvate anions (negatively charged solutes) strongly via hydrogen bonding . Polar aprotic solvents , such as acetone or dichloromethane , tend to have large dipole moments (separation of partial positive and partial negative charges within the same molecule) and solvate positively charged species via their negative dipole. [ 11 ] In chemical reactions the use of polar protic solvents favors the S N 1 reaction mechanism , while polar aprotic solvents favor the S N 2 reaction mechanism. These polar solvents are capable of forming hydrogen bonds with water to dissolve in water whereas non-polar solvents are not capable of strong hydrogen bonds. The solvents are grouped into nonpolar , polar aprotic , and polar protic solvents, with each group ordered by increasing polarity. (The accompanying table of examples — pentane, hexane, heptane, toluene, diethyl ether, chloroform, dichloromethane, acetonitrile, nitromethane, propylene carbonate, ammonia, 1-butanol, 1-propanol, ethanol, and methanol, with their formulas, boiling points, dielectric constants, and densities, and with values exceeding those of water bolded — is omitted here.) The ACS Green Chemistry Institute maintains a tool for the selection of solvents based on a principal component analysis of solvent properties. [ 14 ] The Hansen solubility parameter (HSP) values [ 15 ] [ 16 ] [ 17 ] are based on dispersion bonds (δD), polar bonds (δP) and hydrogen bonds (δH). These contain information about the inter-molecular interactions with other solvents and also with polymers, pigments, nanoparticles , etc. This allows for rational formulations knowing, for example, that there is a good HSP match between a solvent and a polymer. Rational substitutions can also be made for "good" solvents (effective at dissolving the solute) that are "bad" (expensive or hazardous to health or the environment). Tabulated HSP values show that the intuitions from "non-polar", "polar aprotic" and "polar protic" are put numerically – the "polar" molecules have higher levels of δP and the protic solvents have higher levels of δH. Because numerical values are used, comparisons can be made rationally by comparing numbers. For example, acetonitrile is much more polar than acetone but exhibits slightly less hydrogen bonding. If, for environmental or other reasons, a solvent or solvent blend is required to replace another of equivalent solvency, the substitution can be made on the basis of the Hansen solubility parameters of each. The values for mixtures are taken as the weighted averages of the values for the neat solvents. This can be calculated by trial-and-error , a spreadsheet of values, or HSP software. [ 15 ] [ 16 ] A 1:1 mixture of toluene and 1,4 dioxane has δD, δP and δH values of 17.8, 1.6 and 5.5, comparable to those of chloroform at 17.8, 3.1 and 5.7 respectively. Because of the health hazards associated with toluene itself, other mixtures of solvents may be found using a full HSP dataset. The boiling point is an important property because it determines the speed of evaporation. 
Small amounts of low-boiling-point solvents like diethyl ether , dichloromethane , or acetone will evaporate in seconds at room temperature, while high-boiling-point solvents like water or dimethyl sulfoxide need higher temperatures, an air flow, or the application of vacuum for fast evaporation. Most organic solvents have a lower density than water, which means they are lighter than water and will form a layer on top of it. Important exceptions are most of the halogenated solvents, like dichloromethane or chloroform, which will sink to the bottom of a container, leaving water as the top layer. This is crucial to remember when partitioning compounds between solvents and water in a separatory funnel during chemical syntheses. Often, specific gravity is cited in place of density. Specific gravity is defined as the density of the solvent divided by the density of water at the same temperature. As such, specific gravity is a unitless value. It readily communicates whether a water-insoluble solvent will float (SG < 1.0) or sink (SG > 1.0) when mixed with water. Multicomponent solvents appeared after World War II in the USSR , and continue to be used and produced in the post-Soviet states. These solvents may have one or more applications, but they are not universal preparations. Most organic solvents are flammable or highly flammable, depending on their volatility . Exceptions are some chlorinated solvents like dichloromethane and chloroform . Mixtures of solvent vapors and air can explode . Solvent vapors are heavier than air; they will sink to the bottom and can travel large distances nearly undiluted. Solvent vapors can also be found in supposedly empty drums and cans, posing a flash fire hazard; hence empty containers of volatile solvents should be stored open and upside down. Both diethyl ether and carbon disulfide have exceptionally low autoignition temperatures, which greatly increases the fire risk associated with these solvents. The autoignition temperature of carbon disulfide is below 100 °C (212 °F), so objects such as steam pipes, light bulbs , hotplates , and recently extinguished bunsen burners are able to ignite its vapors. In addition some solvents, such as methanol, can burn with a very hot flame which can be nearly invisible under some lighting conditions. [ 23 ] [ 24 ] This can delay or prevent the timely recognition of a dangerous fire, until flames spread to other materials. Ethers like diethyl ether and tetrahydrofuran (THF) can form highly explosive organic peroxides upon exposure to oxygen and light. THF is normally more likely to form such peroxides than diethyl ether. One of the most susceptible solvents is diisopropyl ether , but all ethers are considered to be potential peroxide sources. The heteroatom ( oxygen ) stabilizes the formation of a free radical which is formed by the abstraction of a hydrogen atom by another free radical. [ clarification needed ] The carbon-centered free radical thus formed is able to react with an oxygen molecule to form a peroxide compound. The process of peroxide formation is greatly accelerated by exposure to even low levels of light, but can proceed slowly even in dark conditions. Unless a desiccant is used which can destroy the peroxides, they will concentrate during distillation , due to their higher boiling point . When sufficient peroxides have formed, they can form a crystalline , shock-sensitive solid precipitate at the mouth of a container or bottle. 
Minor mechanical disturbances, such as scraping the inside of a vessel, the dislodging of a deposit, or merely twisting the cap, may provide sufficient energy for the peroxide to detonate or explode violently. Peroxide formation is not a significant problem when fresh solvents are used up quickly; peroxides are more of a problem in laboratories which may take years to finish a single bottle. Low-volume users should acquire only small amounts of peroxide-prone solvents, and dispose of old solvents on a regular periodic schedule. To avoid explosive peroxide formation, ethers should be stored in an airtight container, away from light, because both light and air can encourage peroxide formation. [25] A number of tests can be used to detect the presence of a peroxide in an ether; one is to use a combination of iron(II) sulfate and potassium thiocyanate. The peroxide is able to oxidize the Fe 2+ ion to an Fe 3+ ion, which then forms a deep-red coordination complex with the thiocyanate. Peroxides may be removed by washing with acidic iron(II) sulfate, filtering through alumina, or distilling from sodium/benzophenone. Alumina degrades the peroxides but some could remain intact in it, so it must be disposed of properly. [26] The advantage of using sodium/benzophenone is that moisture and oxygen are removed as well. [27] General health hazards associated with solvent exposure include toxicity to the nervous system, reproductive damage, liver and kidney damage, respiratory impairment, cancer, hearing loss, [28][29] and dermatitis. [30] Many solvents can lead to a sudden loss of consciousness if inhaled in large amounts. Solvents like diethyl ether and chloroform have long been used in medicine as anesthetics, sedatives, and hypnotics. Many solvents (e.g. from gasoline or solvent-based glues) are abused recreationally in glue sniffing, often with harmful long-term health effects such as neurotoxicity or cancer. Fraudulent substitution of 1,5-pentanediol by the psychoactive 1,4-butanediol by a subcontractor caused the Bindeez product recall. [31] Ethanol (grain alcohol) is a widely used and abused psychoactive drug. If ingested, the so-called "toxic alcohols" (other than ethanol) such as methanol, 1-propanol, and ethylene glycol metabolize into toxic aldehydes and acids, which cause potentially fatal metabolic acidosis. [32] The commonly available alcohol solvent methanol can cause permanent blindness or death if ingested. The solvent 2-butoxyethanol, used in fracking fluids, can cause hypotension and metabolic acidosis. [33] Chronic solvent exposures are often caused by the inhalation of solvent vapors, or the ingestion of diluted solvents, repeated over the course of an extended period. Some solvents can damage internal organs like the liver, the kidneys, the nervous system, or the brain. The cumulative effect on the brain of long-term or repeated exposure to some solvents is called chronic solvent-induced encephalopathy (CSE). [34] Chronic exposure to organic solvents in the work environment can produce a range of adverse neuropsychiatric effects. For example, occupational exposure to organic solvents has been associated with higher numbers of painters suffering from alcoholism. [35] Ethanol has a synergistic effect when taken in combination with many solvents; for instance, a combination of toluene/benzene and ethanol causes greater nausea/vomiting than either substance alone.
Some organic solvents are known or suspected to be cataractogenic. A mixture of aromatic hydrocarbons, aliphatic hydrocarbons, alcohols, esters, ketones, and terpenes was found to greatly increase the risk of developing cataracts in the lens of the eye. [36] A major pathway of induced health effects arises from spills or leaks of solvents, especially chlorinated solvents, that reach the underlying soil. Since solvents readily migrate substantial distances, the creation of widespread soil contamination is not uncommon; this is a particular health risk if aquifers are affected. [37] Vapor intrusion can occur from sites with extensive subsurface solvent contamination. [38]
https://en.wikipedia.org/wiki/Solvent
In solvent casting and particulate leaching (SCPL), a polymer is dissolved in an organic solvent. Particles, mainly salts, with specific dimensions are then added to the solution. The mixture is shaped into its final geometry. For example, it can be cast onto a glass plate to produce a membrane or in a three-dimensional mold to produce a scaffold. When the solvent evaporates, it creates a structure of composite material consisting of the particles together with the polymer. The composite material is then placed in a bath which dissolves the particles, leaving behind a porous structure. [1]
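In an idealized picture of this process, the final porosity is set by the volume fraction of porogen particles in the cast mixture, assuming complete leaching and no shrinkage. The short sketch below illustrates that estimate; the function name and the numbers are made up for illustration.

```python
# Idealized porosity estimate for solvent casting / particulate leaching:
# assumes every porogen particle is leached out and the polymer matrix does not shrink.

def estimated_porosity(porogen_mass, porogen_density, polymer_mass, polymer_density):
    v_porogen = porogen_mass / porogen_density
    v_polymer = polymer_mass / polymer_density
    return v_porogen / (v_porogen + v_polymer)

# Hypothetical example: 9 g NaCl (approx. 2.17 g/cm3) with 1 g polymer (approx. 1.2 g/cm3)
print(f"porosity ~ {estimated_porosity(9, 2.17, 1, 1.2):.2f}")  # roughly 0.83
```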
https://en.wikipedia.org/wiki/Solvent_casting_and_particulate_leaching
In chemistry, solvent effects are the influence of a solvent on chemical reactivity or molecular associations. Solvents can have an effect on solubility, stability and reaction rates, and choosing the appropriate solvent allows for thermodynamic and kinetic control over a chemical reaction. A solute dissolves in a solvent when solvent-solute interactions are more favorable than solute-solute interactions. Different solvents can affect the equilibrium constant of a reaction by differential stabilization of the reactant or product. The equilibrium is shifted in the direction of the substance that is preferentially stabilized. Stabilization of the reactant or product can occur through any of the different non-covalent interactions with the solvent such as H-bonding, dipole-dipole interactions, van der Waals interactions etc. The ionization equilibrium of an acid or a base is affected by a solvent change. The effect of the solvent is not only because of its acidity or basicity but also because of its dielectric constant and its ability to preferentially solvate and thus stabilize certain species in acid-base equilibria. A change in the solvating ability or dielectric constant can thus influence the acidity or basicity. In the table above, it can be seen that water is the most polar solvent, followed by DMSO, and then acetonitrile. Consider the following acid dissociation equilibrium: HA ⇌ A− + H+. Water, being the most polar solvent listed above, stabilizes the ionized species to a greater extent than does DMSO or acetonitrile. Ionization, and thus acidity, would be greatest in water and lower in DMSO and acetonitrile, as seen in the table below, which shows p K a values at 25 °C for acetonitrile (ACN) [2][3][4] and dimethyl sulfoxide (DMSO) [5] and water. Many carbonyl compounds exhibit keto–enol tautomerism. This effect is especially pronounced in 1,3-dicarbonyl compounds that can form hydrogen-bonded enols. The equilibrium constant is dependent upon the solvent polarity, with the cis-enol form predominating at low polarity and the diketo form predominating at high polarity. The intramolecular H-bond formed in the cis-enol form is more pronounced when there is no competition for intermolecular H-bonding with the solvent. As a result, solvents of low polarity that do not readily participate in H-bonding allow cis-enolic stabilization by intramolecular H-bonding. Often, reactivity and reaction mechanisms are pictured as the behavior of isolated molecules in which the solvent is treated as a passive support. However, the nature of the solvent can actually influence reaction rates and the order of a chemical reaction. [6][7][8][9] Performing a reaction without solvent can affect the reaction rate for reactions with bimolecular mechanisms, for example by maximizing the concentration of the reagents. Ball milling is one of several mechanochemical techniques where physical methods are used to control reactions in the absence of solvent. Solvents can affect rates through equilibrium-solvent effects that can be explained on the basis of transition state theory. In essence, the reaction rates are influenced by differential solvation of the starting material and transition state by the solvent. When the reactant molecules proceed to the transition state, the solvent molecules orient themselves to stabilize the transition state. If the transition state is stabilized to a greater extent than the starting material then the reaction proceeds faster.
If the starting material is stabilized to a greater extent than the transition state then the reaction proceeds more slowly. However, such differential solvation requires rapid reorientational relaxation of the solvent (from the transition state orientation back to the ground-state orientation). Thus, equilibrium-solvent effects are observed in reactions that tend to have sharp barriers and weakly dipolar, rapidly relaxing solvents. [6] The equilibrium hypothesis does not hold for very rapid chemical reactions in which the transition state theory breaks down. In such cases involving strongly dipolar, slowly relaxing solvents, solvation of the transition state does not play a very large role in affecting the reaction rate. Instead, dynamic contributions of the solvent (such as friction, density, internal pressure, or viscosity) play a large role in affecting the reaction rate. [6][9] The effect of solvent on elimination and nucleophilic substitution reactions was originally studied by British chemists Edward D. Hughes and Christopher Kelk Ingold. [10] Using a simple solvation model that considered only pure electrostatic interactions between ions or dipolar molecules and solvents in initial and transition states, all nucleophilic and elimination reactions were organized into different charge types (neutral, positively charged, or negatively charged). [6] Hughes and Ingold then made certain assumptions about the extent of solvation to be expected in these situations: The applicable effect of these general assumptions is shown in the following examples: The solvent used in substitution reactions inherently determines the nucleophilicity of the nucleophile; this fact has become increasingly more apparent as more reactions are performed in the gas phase. [11] As such, solvent conditions significantly affect the performance of a reaction, with certain solvent conditions favoring one reaction mechanism over another. For S N 1 reactions the solvent's ability to stabilize the intermediate carbocation is of direct importance to its viability as a suitable solvent. The ability of polar solvents to increase the rate of S N 1 reactions is a result of the polar solvent solvating the reactant intermediate species, i.e., the carbocation, thereby decreasing the intermediate energy relative to the starting material. The following table shows the relative solvolysis rates of tert-butyl chloride with acetic acid (CH 3 CO 2 H), methanol (CH 3 OH), and water (H 2 O). The case for S N 2 reactions is quite different, as the lack of solvation of the nucleophile increases the rate of an S N 2 reaction. In either case (S N 1 or S N 2), the ability to either stabilize the transition state (S N 1) or destabilize the reactant starting material (S N 2) acts to decrease the activation energy ΔG ‡ and thereby increase the rate of the reaction. This relationship is in accordance with the equation ΔG = –RT ln K (Gibbs free energy). The rate equation for S N 2 reactions is bimolecular, being first order in the nucleophile and first order in the substrate. The determining factor when both S N 2 and S N 1 reaction mechanisms are viable is the strength of the nucleophile. Nucleophilicity and basicity are linked: the more nucleophilic a molecule is, the greater its basicity. This increase in basicity causes problems for S N 2 reaction mechanisms when the solvent of choice is protic.
Protic solvents react with strong nucleophiles with good basic character in an acid/base fashion, thus decreasing or removing the nucleophilic nature of the nucleophile. The following table shows the effect of solvent polarity on the relative reaction rates of the S N 2 reaction of 1-bromobutane with azide (N 3 – ). There is a noticeable increase in reaction rate when changing from a protic solvent to an aprotic solvent. This difference arises from acid/base reactions between protic solvents (not aprotic solvents) and strong nucleophiles. While steric effects also affect the relative reaction rates, [12] they may be neglected for demonstrating the principle of solvent polarity effects on S N 2 reaction rates. A comparison of S N 1 to S N 2 reactions is to the right. On the left is an S N 1 reaction coordinate diagram. Note the decrease in ΔG ‡ activation for the polar-solvent reaction conditions. This arises from the fact that polar solvents stabilize the formation of the carbocation intermediate to a greater extent than the non-polar-solvent conditions. This is apparent in the ΔE a , ΔΔG ‡ activation. On the right is an S N 2 reaction coordinate diagram. Note the decreased ΔG ‡ activation for the non-polar-solvent reaction conditions. Polar solvents stabilize the reactants to a greater extent than the non-polar-solvent conditions by solvating the negative charge on the nucleophile, making it less available to react with the electrophile. Reactions involving charged transition metal complexes (cationic or anionic) are dramatically influenced by solvation, especially in polar media. Changes in the potential energy surface (activation energies and relative stability) as large as 30-50 kcal/mol were calculated when the charge of the metal species was changed during the chemical transformation. [13] Many free radical-based syntheses show large kinetic solvent effects that can reduce the rate of reaction and cause a planned reaction to follow an unwanted pathway. [14]
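As a back-of-the-envelope illustration of how lowering the activation barrier translates into a faster reaction (the ΔG relationships discussed above), the sketch below uses the exponential dependence of the rate constant on ΔG ‡. The 5 kJ/mol figure is an arbitrary illustrative value, not taken from the article.

```python
import math

R = 8.314    # gas constant, J/(mol*K)
T = 298.15   # room temperature, K

def rate_ratio(delta_delta_g_kj):
    """Factor by which the rate constant grows when the activation barrier is
    lowered by delta_delta_g_kj (kJ/mol), assuming k ~ exp(-dG_activation / RT)."""
    return math.exp(delta_delta_g_kj * 1000 / (R * T))

# Hypothetical example: a solvent that stabilizes the transition state by 5 kJ/mol
print(f"{rate_ratio(5):.1f}x faster")  # roughly 7-8x at room temperature
```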
https://en.wikipedia.org/wiki/Solvent_effects
Solvent extraction and electrowinning (SX/EW) is a two-stage hydrometallurgical process that first extracts and upgrades copper ions from low-grade leach solutions into a solvent containing a chemical that selectively reacts with and binds the copper. The copper is then extracted from the solvent with a strong aqueous acid, and pure copper is deposited onto cathodes using an electrolytic procedure (electrowinning). SX/EW processing is best known for its use by the copper industry, where it accounts for 20% of worldwide production, but the technology is also successfully applied to a wide range of other metals including cobalt, nickel, zinc and uranium.
https://en.wikipedia.org/wiki/Solvent_extraction_and_electrowinning
Solvent impregnated resins (SIRs) are commercially available (macro)porous resins impregnated with a solvent or an extractant. In this approach, a liquid extractant is contained within the pores of (adsorption) particles. Usually, the extractant is an organic liquid. Its purpose is to extract one or more dissolved components from a surrounding aqueous environment. The basic principle combines adsorption, chromatography and liquid-liquid extraction. The principle of solvent impregnated resins was first shown in 1971 by Abraham Warshawsky. [1] This first venture was aimed at the extraction of metals. Ever since then, SIRs have been mainly used for metal extraction, be it heavy metals or specifically radioactive metals. Much research on SIRs has been done by J. L. Cortina and others, e.g. N. Kabay, K. Jerabek and J. Serarols. [2] Lately, however, investigations have also turned towards using SIRs for the separation of natural compounds, and even for the separation of biotechnological products. Figure 1 to the right explains the basic principle, in which the organic extractant E is contained inside the pores of a porous particle. The solute S, which is initially dissolved in the aqueous phase surrounding the SIR particle, physically dissolves in the organic extractant phase during the extraction process. Furthermore, the solute S can react with the extractant to form a complex ES. This complexation of the solute with the extractant shifts the overall extraction equilibrium further towards the organic phase. This way, the extraction of the solute is enhanced. [3] While in conventional liquid-liquid extraction the solvent and the extractant have to be dispersed, in a SIR setup the dispersion is already achieved by the impregnated particles. This also avoids an additional phase separation step, which would be necessary after the emulsification occurring in liquid-liquid extraction. In order to elucidate the effect of emulsification, Figure 2 (to the left) compares the two systems of an extractant in liquid-liquid equilibrium with water, left, and SIR particles in equilibrium with water, right. The figure shows that no emulsification occurs in the SIR system, whereas the liquid-liquid system shows turbidity implying emulsification. Also, the impregnation step decreases the solvent loss into the aqueous phase compared to liquid-liquid extraction. [4] This decrease of extractant loss is attributed to physical sorption of the extractant on the particle surface, which means that the extractant inside the pores does not entirely behave as a bulk liquid. Depending on the pore size of the particles used, capillary forces may also play a role in retaining the extractant. Otherwise, van der Waals forces, pi-pi interactions or hydrophobic interactions might stabilize the extractant inside the particle pores. However, the possible decrease of extractant loss depends largely on the pore size and the water solubility of the extractant. Nonetheless, SIRs have a significant advantage over e.g. custom-made ion-exchange resins with chemically bonded ligands. SIRs can be reused for different separation tasks by simply rinsing one complexing agent out and re-impregnating them with another, more suitable extractant. This way, potentially expensive design and production steps of e.g. affinity resins can be avoided.
Finally, by filling the whole volume of the particle pores with an extractant (complexing agent), a higher capacity for solutes can be achieved than with ordinary adsorption or ion exchange resins, where only the surface area is available. However, there are possible drawbacks of SIR technology, such as leaching of the extractant or clogging of a fixed bed by attrition of the particles. These might be remedied by choosing the proper particle-extractant system. This implies selecting a suitable extractant with low water solubility, which is sufficiently retained inside the pores, and selecting mechanically stable particles as a solid support for the extractant. Additionally, SIRs can be stabilized by coating them, as shown by D. Muraviev et al. [5] As a coating material, A. W. Trochimczuk et al. used polyvinyl alcohol. [6] In order to remove or recover the extracted solute, SIR particles can be regenerated using low-pressure steam stripping, [7] which is particularly effective for the recovery of volatile hydrocarbons. However, if the vapor pressure of the extracted solute is too low, or if the complexation between solute and extractant is too strong, other techniques need to be applied, e.g. a pH swing. The main impregnation techniques are wet impregnation and dry impregnation. In dry impregnation, the porous particles are immersed in the extractant and allowed to soak in the respective fluid. [8] In this approach, the particles are either contacted with a precalculated amount of extractant, which completely soaks into the porous matrix, or the particles are contacted with an excess of extractant. After soaking, the remaining extractant, which is not inside the pores, is evaporated. If the wet method is used, the extractant is dissolved in an additional solvent prior to impregnation. The porous particles are then dispersed in the extractant-solvent solution. [8] After soaking the particles, the excess solvent can either be filtered off or evaporated. In the first case, an extractant-solvent mixture would be retained within the pores. This would be of interest for extractants which would be solid at design conditions when pure. In the second case, only the extractant would remain inside the pores. Figure 3 shows porous particles dispersed in an aqueous solution after wet impregnation. The cut-out in Figure 3 shows an enlarged segment of the surface of such an impregnated particle. An additional, albeit less frequently used, technique is the modifier addition method. This technique relies on the use of an extractant/solvent/modifier system. The additional modifier is supposed to enhance the penetration of the extractant into the particle pores. [8] The solvent is subsequently evaporated, leaving extractant and modifier in the particle pores. Furthermore, the dynamic column method can be used. The particles are contacted with a solvent until they are completely soaked. This can be done prior to or after packing into the column. The packed bed is then rinsed with the liquid extractant until inlet and outlet concentrations are the same. [8] This approach is particularly interesting when particles are already packed in a column and are to be reused for a SIR application. Mostly, SIRs have been investigated and used for the recovery of heavy metals. [9][10][11] Applications include the removal of cadmium, vanadium, copper, chrome, iridium, etc. Only recently have other extraction applications been investigated, e.g.
the large-scale recovery of apolar organics on offshore oil platforms using the so-called Macro-Porous Polymer Extraction (MPPE) technology. [12] In such an application, where the SIR particles are contained in a packed bed, flow rates from 0.5 m 3 h −1 upward without maximum flow restrictions can apparently be treated cost-competitively with air stripping / activated carbon, steam stripping and bio-treatment systems, according to the technology developer. Additional investigations, mostly done in an academic environment, include polar organics like amino-alcohols, [13] organic acids, [14][15] amino acids, [16] flavonoids, [17] and aldehydes on a bench or pilot scale. Also, the application of SIRs for the separation of more polar solutes, such as ethers and phenols, has been investigated in the group of A. B. de Haan. [18] Applications in biotechnology have been developed only recently. This is due to the sensitivity of bioproducts such as proteins towards organic extractants. One approach by C. van den Berg et al. focuses on the use of impregnated particles for in situ recovery of phenol from Pseudomonas putida fermentations using ionic liquids. [19] Further development led to the use of high-capacity polysulfone capsules. [20] These capsules are basically hollow particles surrounded by a membrane. The interior is completely filled with extractant and thus increases the impregnation capacity compared to classical SIRs. A completely new approach of using SIRs for the separation or purification of biotechnological products such as proteins is based on the concept of impregnating porous particles with aqueous polymer solutions, developed by B. Burghoff. These so-called Tunable Aqueous Polymer-Phase Impregnated Resins (TAPPIR) [21] enhance aqueous two-phase extraction (ATPE) by applying the SIR technology. During classical aqueous two-phase extraction, biotechnological components such as proteins are extracted from aqueous solutions by using a second aqueous phase. This second aqueous phase contains e.g. polyethylene glycol (PEG). On the one hand, a low density difference and low interfacial tension between the two aqueous phases facilitate comparatively fast mass transfer between the phases. On the other hand, PEG appears to stabilize the protein molecules, which results in comparatively little protein denaturation during the extraction. However, a significant drawback of ATPE is the persistent emulsification, which makes phase separation a challenge. The idea behind TAPPIR is to use the advantages offered by SIRs, namely low extractant loss due to immobilization in the pores and less emulsification than in liquid-liquid extraction. This way, the drawbacks of ATPE could be remedied. The setup would consist of a packed column or a fluidized bed rather than liquid-liquid extraction equipment with additional phase separation steps. Nonetheless, as yet only first feasibility studies are under way to prove the concept. A drawback of this method is the non-continuous working mode: the packed column is run much like a chromatographic column.
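As a rough, idealized illustration of the complexation-enhanced extraction described earlier for SIRs (solute S partitioning into the extractant phase and additionally binding as ES), the sketch below computes an overall distribution ratio. The model, function name, and numbers are illustrative assumptions, not values from the article.

```python
# Idealized model: D = P * (1 + K * [E]), where P is the physical partition
# coefficient of S into the extractant phase, K the complexation constant of
# S + E <-> ES, and [E] the free extractant concentration inside the pores.

def distribution_ratio(partition_coeff, complexation_k, extractant_conc):
    return partition_coeff * (1 + complexation_k * extractant_conc)

# Hypothetical numbers: weak physical partitioning boosted by complexation.
print(distribution_ratio(partition_coeff=0.5, complexation_k=200.0, extractant_conc=0.1))
# -> 10.5: complexation raises the effective distribution ratio about 20-fold here
```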
https://en.wikipedia.org/wiki/Solvent_impregnated_resin
In computational chemistry, a solvent model is a computational method that accounts for the behavior of solvated condensed phases. [1][2][3] Solvent models enable simulations and thermodynamic calculations applicable to reactions and processes which take place in solution. These include biological, chemical and environmental processes. [1] Such calculations can lead to new predictions about the physical processes occurring through improved understanding. Solvent models have been extensively tested and reviewed in the scientific literature. The various models can generally be divided into two classes, explicit and implicit models, all of which have their own advantages and disadvantages. Implicit models are generally computationally efficient and can provide a reasonable description of the solvent behavior, but fail to account for the local fluctuations in solvent density around a solute molecule. The density fluctuation behavior is due to solvent ordering around a solute and is particularly prevalent when one is considering water as the solvent. Explicit models are often less computationally economical, but can provide a physical, spatially resolved description of the solvent. However, many of these explicit models are computationally demanding and can fail to reproduce some experimental results, often due to certain fitting methods and parametrization. Hybrid methodologies are another option. These methods incorporate aspects of implicit and explicit solvation, aiming to minimize computational cost while retaining at least some spatial resolution of the solvent. These methods can require more experience to use correctly and often contain post-calculation correction terms. [4] Implicit solvents, or continuum solvents, are models in which one accepts the assumption that explicit solvent molecules can be replaced by a homogeneously polarizable medium as long as this medium, to a good approximation, gives equivalent properties. [1] No explicit solvent molecules are present and so explicit solvent coordinates are not given. Continuum models consider thermally averaged and usually isotropic solvents, [3] which is why only a small number of parameters are needed to represent the solvent with reasonable accuracy in many situations. The main parameter is the dielectric constant (ε), which is often supplemented with further parameters, for example the solvent surface tension. The dielectric constant is the value responsible for defining the degree of polarizability of the solvent. Generally speaking, for implicit solvents, a calculation proceeds by encapsulating a solute in a tiled cavity (see the figure below). The cavity containing the solute is embedded in a homogeneously polarizable continuum describing the solvent. The solute's charge distribution meets the continuous dielectric field at the surface of the cavity and polarizes the surrounding medium, which causes a change in the polarization on the solute. This defines the reaction potential, a response to the change in polarization. This recursive reaction potential is then iterated to self-consistency. Continuum models have widespread use, including use in force field methods and quantum chemical situations. In quantum chemistry, where charge distributions come from ab initio methods (Hartree-Fock (HF), post-HF and density functional theory (DFT)), the implicit solvent models represent the solvent as a perturbation to the solute Hamiltonian.
In general, mathematically, these approaches can be thought of in the following way: [3][5][6][7] Ĥ tot (r m ) = Ĥ molecule (r m ) + V̂ molecule+solvent (r m ). Note here that the implicit nature of the solvent is shown mathematically in this equation, as the equation depends only on the solute molecule coordinates (r m ). The second right-hand term, V̂ molecule+solvent , is composed of interaction operators. These interaction operators calculate the system's responses as a result of going from a gaseous, infinitely separated system to one in a continuum solution. If one is modelling a reaction, this process is therefore akin to modelling the reaction in the gas phase and providing a perturbation to the Hamiltonian. [4] (Figure: top, the four interaction operators generally considered in continuum solvation models; bottom, the five contributing Gibbs energy terms from continuum solvation models. [5]) The interaction operators have a clear meaning and are physically well defined. The first is cavity creation, a term accounting for the energy spent to build a cavity in the solvent of suitable size and shape to house the solute. Physically, this is the energy cost of compressing the solvent's structure when creating a void in the solvent. The second term is the electrostatic energy; this term deals with the polarization of the solute and solvent. The third term is the quantum mechanical dispersion energy, which can be approximated using an averaging procedure for the solvent charge distribution. [5] The fourth term is an approximation of the quantum mechanical exchange repulsion; given the implicit solvent, this term can only be approximated against high-level theoretical calculations. These models can make useful contributions when the solvent being modelled can be represented by a single function, i.e. it does not vary significantly from the bulk. They can also be a useful way to include approximate solvent effects where the solvent is not an active constituent in the reaction or process. Additionally, if computer resources are limited, considerable computational resources can be saved by invoking the implicit solvent approximation instead of explicit solvent molecules. Implicit solvent models have been applied to model the solvent in computational investigations of reactions and to predict the hydration Gibbs energy (Δ hyd G). [8] Several standard models exist and have all been used successfully in a number of situations. The polarizable continuum model (PCM) is a commonly used implicit model and has seeded the birth of several variants. [5] The model is based on the Poisson-Boltzmann equation, which is an expansion of the original Poisson's equation. Solvation Models (SMx) and the Solvation Model based on Density (SMD) have also seen widespread use. SMx models (where x is an alphanumeric label to show the version) are based on the generalized Born equation. This is an approximation of Poisson's equation suitable for arbitrary cavity shapes. The SMD model solves the Poisson-Boltzmann equation analogously to PCM, but does so using a set of specifically parametrised radii which construct the cavity. [9] The COSMO solvation model is another popular implicit solvation model. [10] This model uses the scaled conductor boundary condition, which is a fast and robust approximation to the exact dielectric equations and reduces the outlying charge errors compared to PCM. [11] The approximations lead to a root mean square deviation on the order of 0.07 kcal/mol from the exact solutions. [12]
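Since the SMx family mentioned above builds on the generalized Born equation, a minimal sketch of the simplest member of that family, the Born expression for the solvation free energy of a single spherical ion, may help fix ideas. The ion radius and charge below are illustrative values, not parameters of any published SMx model.

```python
# Born model: solvation free energy of a point charge q (in units of e) inside a
# spherical cavity of radius a (angstrom) embedded in a dielectric continuum eps.
COULOMB_KCAL_A = 332.06  # e^2 / (4*pi*eps0) expressed in kcal*angstrom/mol

def born_solvation_energy(q, radius_angstrom, eps):
    return -0.5 * COULOMB_KCAL_A * (1.0 - 1.0 / eps) * q**2 / radius_angstrom

# Illustrative: a +1 ion with a 1.7 angstrom cavity radius in water (eps ~ 78.4)
print(f"{born_solvation_energy(1, 1.7, 78.4):.1f} kcal/mol")  # about -96 kcal/mol
```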
Explicit solvent models treat the solvent molecules explicitly (i.e. the coordinates and usually at least some of the molecular degrees of freedom are included). This is a more intuitively realistic picture in which there are direct, specific solvent interactions with a solute, in contrast to continuum models. These models generally occur in the application of molecular mechanics (MM) and dynamics (MD) or Monte Carlo (MC) simulations, although some quantum chemical calculations do use solvent clusters. Molecular dynamics simulations allow scientists to study the time evolution of a chemical system in discrete time intervals. These simulations often utilize molecular mechanics force fields which are generally empirical, parametrized functions that can efficiently calculate the properties and motions of large systems. [6][7] Parametrization is often against a higher level of theory or experimental data. MC simulations allow one to explore the potential energy surface of a system by perturbing the system and calculating the energy after the perturbation. Prior criteria are defined to aid the algorithm in deciding whether to accept the newly perturbed system or not. In general, force field methods are based on similar energy evaluation functionals which usually contain terms representing the bond stretching, angle bending, torsions and terms for repulsion and dispersion, such as the Buckingham potential or Lennard-Jones potential. Commonly used solvents, such as water, often have idealized models generated. These idealized models allow one to reduce the degrees of freedom which are to be evaluated in the energy calculation without a significant loss in the overall accuracy, although this can lead to certain models becoming useful only in specific circumstances. Models such as TIPXP (where X is an integer suggesting the number of sites used for energy evaluation) [13] and the simple point charge model (SPC) of water have been used extensively. A typical model of this kind uses a fixed number of sites (often three for water); on each site is placed a parametrized point charge and repulsion and dispersion parameters. These models are commonly geometrically constrained, with aspects of the geometry such as the bond lengths or angles fixed. [14] Advances from around 2010 onwards in explicit solvent modelling have seen the use of a new generation of polarizable force fields, which are currently being created. These force fields are able to account for changes in the molecular charge distribution. A number of these force fields are being developed to utilise multipole moments, as opposed to point charges, given that multipole moments can reflect the charge anisotropy of the molecules. One such method is the Atomic Multipole Optimised Energetics for Biomolecular Applications (AMOEBA) force field. [15] This method has been used to study the solvation dynamics of ions. [1] Other emerging polarizable force fields which have been applied to condensed phase systems are the Sum of Interactions between Fragments ab initio computed (SIBFA) [16] and the Quantum Chemical Topology Force Field (QCTFF). [17] Polarizable water models are also being produced. The so-called charge-on-spring (COS) model gives water models the ability to polarize, due to one of the interaction sites being flexible (on a spring). [18]
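A minimal sketch of the kind of fixed-charge pair interaction used in the simple water models above (a Lennard-Jones term plus a Coulomb term between point-charge sites) is given below. The parameter values are illustrative placeholders loosely in the range of three-site water models, not the exact constants of TIP3P or SPC.

```python
# Pairwise nonbonded energy between two interaction sites: Lennard-Jones + Coulomb.
COULOMB_KCAL_A = 332.06  # e^2 / (4*pi*eps0) in kcal*angstrom/mol

def lennard_jones(r, epsilon, sigma):
    """12-6 Lennard-Jones potential; epsilon in kcal/mol, sigma and r in angstrom."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

def coulomb(r, q1, q2):
    """Coulomb interaction between point charges q1, q2 (units of e) at distance r."""
    return COULOMB_KCAL_A * q1 * q2 / r

def pair_energy(r, epsilon, sigma, q1, q2):
    return lennard_jones(r, epsilon, sigma) + coulomb(r, q1, q2)

# Illustrative oxygen-oxygen interaction at 3.0 angstrom separation.
print(pair_energy(3.0, epsilon=0.15, sigma=3.15, q1=-0.8, q2=-0.8))
```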
Hybrid models, as the name suggests, sit in the middle between explicit and implicit models. Hybrid models can usually be considered closer to one or the other, implicit or explicit. Mixed quantum mechanics and molecular mechanics (QM/MM) schemes can be thought of in this context. QM/MM methods are closer to explicit models. One can imagine a QM core treatment containing the solute and perhaps a small number of explicit solvent molecules. A second layer could then comprise MM water molecules, with a final third layer of implicit solvent representing the bulk. The Reference Interaction Site Model (RISM) can be thought of as closer to implicit solvent representations. RISM allows the solvent density to fluctuate in a local environment, achieving a description of the solvent shell behaviour. [1][2][5] QM/MM methods enable a section of the system to be calculated using quantum mechanics, for example the active site in a biological molecule, whilst the rest of the system is modeled using MM force fields. By continuing to a third layer with an implicit solvent, the bulk water effect can be modeled more cheaply than by using all explicit solvent molecules. There are many different combinations that can be used with the QM/MM technique. Alternatively, a few explicit solvent molecules can be added to a QM region and the rest of the solvent treated implicitly. Previous work has shown mixed results upon the addition of explicit solvent molecules to an implicit solvent. One example added up to three explicit water molecules to a QM calculation with an implicit COSMO water model. The results suggest that using either implicit or explicit solvent alone provides a good approximation to experiment; however, the mixed models gave mixed results and possibly some dependence on the number of added explicit solvent molecules. [19] RISM, a classical statistical mechanics methodology, has its roots in the integral equation theory of liquids (IET). By statistical modeling of the solvent, an appreciation of the dynamics of the system can be acquired. This is more useful than a static model, as the dynamics of the solvent can be important in some processes. The statistical modeling is done using radial distribution functions (RDFs). RDFs are probabilistic functions which can represent the probability of locating solvent atoms/molecules in a specific area or at a specific distance from a reference point, generally taken as the solute molecule. As the probability of locating solvent atoms and molecules from the reference point can be determined in RISM theory, the solvent shell structure can be directly derived. [20] The molecular Ornstein-Zernike equation (MOZ) is the starting point for RISM calculations. [5] Within the MOZ equations, a solvated system can be defined in 3D space by three spatial coordinates (r) and three angles (Θ). Using relative RDFs, the MOZ equations for the solvated system can define the total correlation function h(r − r′; Θ − Θ′). The equations have a high dimensionality (6D). It is a common approximation to assume spherical symmetry, allowing one to remove the orientational (angular) degrees of freedom. The MOZ equation splits the total correlation function in two. The first is the direct correlation function c(r), concerned with the effect of one particle on another over the distance r. The second, the indirect correlation function, accounts for the effects of a third particle in the system.
The indirect correlation function is given as the direct correlation function between the first and the third particles, c(r 1,3 ), combined with the total correlation function between the second and third particles, h(r 2,3 ). [21] Under the assumption of spherical symmetry, the Ornstein-Zernike equation reads h(r 1,2 ) = c(r 1,2 ) + ρ ∫ c(r 1,3 ) h(r 3,2 ) dr 3 , where ρ is the liquid density, r is the separating distance, h(r) is the total correlation function, and c(r) is the direct correlation function. h(r) and c(r) are the solutions to the MOZ equations. In order to solve for h(r) and c(r), another equation must be introduced. This new equation is called a closure relation. The exact closure relation is unknown, because the exact form of the so-called bridge functions is unclear, so approximations must be introduced. There are several valid approximations; the first was the HyperNetted Chain (HNC), which sets the unknown terms in the closure relation to zero. Although it appears crude, the HNC has generally been quite successfully applied, although it shows slow convergence and divergent behaviour in some cases. [22] A modern alternative closure relation has been suggested, the Partially Linearised HyperNetted Chain (PLHNC) or Kovalenko-Hirata closure. [23] The PLHNC partially linearises the exponential function if it exceeds its cutoff value, which gives much more reliable convergence of the equations. [4] In the PLHNC closure, β = 1/(k B T) and U(r) is the interaction potential, a typical interaction potential being a Lennard-Jones term plus a Coulomb term; t(r) is the indirect correlation function, the difference of the total and the direct correlation functions. There are various approximations of the RISM equations. Two popular approximations are 3D RISM and 1D RISM. [1] There are known deficiencies in these approximate RISM models. 3D RISM makes a poor estimate of the cavity creation term, while 1D RISM has been found not to properly account for the spatial correlations of the solvent density around the solute. However, both methods are quick to calculate; 1D RISM can be solved in a matter of seconds on a modern computer, making it an attractive model for high-throughput computation. Both 3D RISM and 1D RISM have had correction schemes proposed which bring their predictions to a level of accuracy comparable to traditional implicit and explicit models. [22][24][25] The COSMO-RS model is another hybrid model, using the surface polarization charge density derived from continuum COSMO calculations to estimate the interaction energies with neighboring molecules. COSMO-RS is able to account for a major part of reorientation and strong directional interactions such as hydrogen bonding within the first solvation shell. It yields thermodynamically consistent mixture thermodynamics and is often used in addition to UNIFAC in chemical engineering applications. Quantitative Structure–Activity Relationships (QSAR)/Quantitative Structure–Property Relationships (QSPR), whilst unable to directly model the physical processes occurring in a condensed solvent phase, can provide useful predictions of solvent and solvation properties and activities, such as the solubility of a solute. [26][27][28][4] These methods range from simple regression models to sophisticated machine learning methods.
Generally, QSAR/QSPR methods require descriptors; these come in many different forms and are used to represent physical features and properties of a system of interest. Descriptors are generally single numerical values which hold some information about a physical property. [29] A regression model or statistical learning model is then applied to find a correlation between the descriptor(s) and the property of interest. Once trained on some known data, these models can be applied to similar unknown data to make predictions. Typically the known data come from experimental measurement, although there is no reason why similar methods cannot be used to correlate descriptor(s) with theoretical or predicted values. It is currently debated whether the predictions from such models would be more accurate if more accurate experimental data were used to train them. [30] More recently, the rise of deep learning has provided many methods to generate embedded representations of molecules. [31][27] Some of these methods have also been applied to solvation properties such as solubility prediction. [32][27]
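As a minimal sketch of the descriptor-plus-regression workflow just described, the snippet below fits an ordinary least-squares model mapping a few hypothetical molecular descriptors to a solubility value. Both the descriptor matrix and the target values are synthetic placeholders, not data from any real study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic example: rows are molecules, columns are descriptors
# (e.g. molecular weight, a logP-like value, hydrogen-bond donor count).
X = np.array([
    [46.1, -0.3, 1],
    [74.1,  0.9, 1],
    [92.1,  2.7, 0],
    [58.1, -0.2, 0],
])
log_solubility = np.array([1.1, 0.2, -2.2, 0.9])  # made-up target values

model = LinearRegression().fit(X, log_solubility)
new_molecule = np.array([[60.1, 0.3, 1]])  # hypothetical descriptors for a new molecule
print(model.predict(new_molecule))
```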
https://en.wikipedia.org/wiki/Solvent_model
Solvent suppression is any technique in nuclear magnetic resonance spectroscopy (NMR) to decrease undesired signal from a sample's solvent. [1] In liquid-state NMR spectroscopy, the sample to be studied is dissolved in a solvent. Typically, the concentration of the solvent is much higher than the concentration of the solutes of interest. The signal from the solvent can overwhelm that of the solute, and the NMR instrument may not collect any meaningful data. Solvent suppression techniques are particularly important in protein NMR where the solvent often includes H 2 O as well as D 2 O. [2]
https://en.wikipedia.org/wiki/Solvent_suppression
Solvent vapor annealing (SVA) is a widely used technique for controlling the morphology and ordering of block copolymer (BCP) films. [1][2][3] By controlling the block ratio (f = N A / N), spheres, cylinders, gyroids, and lamellae structures can be generated; added solvent vapor forms a swollen and mobile thin-film layer that facilitates the self-assembly of the polymer blocks. [4] The process improves lateral ordering by several orders of magnitude compared to previous methods. It is a milder alternative to thermal annealing. [1] Ideally, the chamber in which SVA takes place is a metal chamber that is inert to reaction with the given solvent, allowing for high precision in forming the desired nanostructures. Computer-controlled valves for solvent addition and withdrawal are used to increase precision as well. This regulated inlet, along with close monitoring of pressure gauges and film thickness, allows instant response and control while the annealing and evaporation phases proceed. [2] One of the first considerations in SVA is the solvent that is used and the nanostructure that is to be obtained. For example, if a hierarchical structure is desired, a solvent whose vapor can selectively mobilize the amorphous polymer chains of a semi-crystalline polymer is ideal, because it can also keep the integrity of the crystals, allowing the secondary structure to form. [5] As for the BCP itself, ordered nanostructures form because of thermodynamic differences between the different blocks of the polymer. The sample morphology at equilibrium can be predicted using the molar mass of the blocks, the degree of polymerization of the chains (N), and the Flory-Huggins interaction parameter (χ), which is a measure of exactly how incompatible the different blocks are. [6] These factors, along with the composition of the BCP, allow microphase separation of chains and rearrangement into the desired product. The composition is an especially important part of the process, as knowing the ordering, such as alternating AB monomers, sheds light on how to section the polymer in the desired manner. Along with this, the selection of a specific type of block polymer is important for the process and its effectiveness. The main things to consider are the original structure of the block at room temperature, as well as the temperatures at which each block will begin to change phase. [6] Knowing these temperatures is critical in determining when each block will begin to react and take in solvent, and at what rate, which in turn is critical for pushing the block polymer towards a desired morphology through annealing. Other factors that affect SVA are parameters such as vapor pressure, solvent concentration in the film, and the evaporation rate of the solvent. [2] Each of these factors contributes to the volatility and occasional imprecision of this method, which does not possess a set mechanism for the construction of desired structures such as nanocylinders. Perfect, reproducible attainment of a desired morphology has yet to be achieved, given the plethora of factors dictating formation. [2] There are many applications in technology and lab work for this process to create desired morphologies of polymers. One of these applications is inscribing secondary nanostructures onto electrospun fibers.
The use of poly(ε-caprolactone) (PCL) fibers allows solvents like acetone to mobilize the amorphous chains of the block polymers onto a pre-existing crystal, creating the inscribed secondary structure. [5] When the PCL is annealed with acetone, the amorphous chains can be mobilized to a desired region while the overall integrity of the fully crystallized regions stays intact. With a careful choice of the semi-crystalline polymer and an appropriate solvent vapor, this simple process can be applied to many different systems and allows for the creation of many types of hierarchical polymer material. [5] Another application of SVA is its use in creating and improving photovoltaic device efficiency through the annealing of perovskite materials. Greater performance of these energy cells depends on higher-quality perovskite materials and on the use of SVA to create higher-quality films that can retain energy more efficiently. Solvent engineering is key to making the perovskite material, and improving its quality through SVA in an anhydrous isopropanol environment, where the crystalline polymer has low solubility, causes the performance to increase greatly. [7] The use of SVA here points to a more energy-efficient and promising path of using specific polymers to help move forward with the improvement of energy storage. There are some main areas of focus for the future of SVA to keep improving and remain innovative in technology. Firstly, the chambers in which SVA takes place should continue to be improved to allow precision of the process, as well as reproducibility of the same structure on each attempt. [6] Work on these chambers and the components that make them precise has so far been largely a hypothetical exploration of which parameters affect reproducibility. It is imperative to continue to improve the amount of control over the annealing by being able to control all factors, such as humidity and temperature. [7] The point of being meticulous in defining such parameters is to make it possible for multiple labs to reproduce a certain compound to the same effect. Next, with improvement of the apparatus in which the process takes place, in situ studies of SVA through X-ray and neutron scattering methods can give more highly accurate images of the swollen and dried states of the BCP. [2] Using methods such as ellipsometry and interferometry can lead to discoveries about the thickness of the polymers in different states and about nanostructure orientation, which will help in learning more about the equilibrium structure and the kinetics of developing a specified morphology. [6] It is also important to be able to define small-molecule additions to different parts of the block polymer at different points of the annealing and evaporation, so as to know precisely how the moieties will create certain orientations and directionality in the structure. The final area moving forward is simply the implementation of the created block polymers in new intended applications and technology, beyond lab study and characterization of the method. It is important to go beyond creating the nanostructures and move into seeing the utility of the structures in an application, which will help reveal practical shortcomings of the created polymers and reveal areas where parts of the structure, such as film integrity and the attachment strength of the amorphous chains, can be improved. [6]
Going beyond this simple surface imaging will also make it possible to recognize and address some of the dangers of and hindrances to functionality, such as the toxicity of working with organic solvents or issues with dewetting of the swollen state of the BCP. [8]
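As a rough illustration of how the block ratio f = N A / N mentioned above maps onto the equilibrium morphologies listed (spheres, cylinders, gyroid, lamellae), the sketch below encodes approximate composition windows of the kind seen in typical diblock copolymer phase diagrams. The exact boundaries are assumptions for illustration only; real windows depend on χN and on the specific system.

```python
# Very rough rule-of-thumb mapping from minority-block volume fraction f to the
# expected strongly segregated diblock morphology. Boundaries are illustrative.

def expected_morphology(f):
    f = min(f, 1.0 - f)  # use the minority-block fraction
    if f < 0.17:
        return "spheres"
    if f < 0.32:
        return "cylinders"
    if f < 0.38:
        return "gyroid"
    return "lamellae"

for f in (0.10, 0.25, 0.35, 0.50):
    print(f, expected_morphology(f))
```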
https://en.wikipedia.org/wiki/Solvent_vapour_annealing
Solventogenesis is the biochemical production of solvents (usually acetone and butanol) by Clostridium species. [1] It is the second phase of ABE fermentation. [2] Solventogenic Clostridium species have a biphasic metabolism composed of an acidogenic phase and a solventogenic phase. During acidogenesis, these bacteria are able to convert several carbon sources into organic acids, commonly butyrate and acetate. [2] As acid accumulates, cells begin to assimilate the organic acids into solvents. In Clostridium acetobutylicum, a model solventogenic Clostridium species, a combination of low pH and high undissociated butyrate, referred to as the "pH-acid effect", triggers the metabolic shift from acidogenesis to solventogenesis. [3] Acetone, butanol, and ethanol are the most common products of solventogenesis. Some species such as Clostridium beijerinckii, Clostridium puniceum and Clostridium roseum are able to further reduce acetone to isopropanol. Several species are able to produce additional solvents under various culture conditions. For example, glycerol fermentation results in the production of 1,3-propanediol in several species. Acetoin is produced by several species and is further reduced to 2,3-butanediol by Clostridium beijerinckii. [4]
https://en.wikipedia.org/wiki/Solventogenesis
Solvophobic theory attempts to explain interactions between polar solvents and non-polar solutes. In the pure solvent, there are relatively strong cohesive forces between the solvent molecules due to hydrogen bonding or other polar interactions. Hence, non-polar solutes tend not to be soluble in polar solvents because these solvent-solvent binding interactions must be overcome first. When applied to liquid chromatography (LC), solvophobic theory attributes the retention of solutes on the stationary phase partly to the rejection of solute molecules by the solvent, and partly to the attraction of the solute molecules by the stationary phase. [1]
https://en.wikipedia.org/wiki/Solvophobic
Solvophoresis is a spontaneous motion of dispersed particles in a mixed solvent induced by a gradient of solvent concentration. Solvophoresis was experimentally established by Marek Kosmulski and Egon Matijevic. [1] Solvophoresis is similar to diffusiophoresis.
https://en.wikipedia.org/wiki/Solvophoresis
Solvothermal synthesis is a method of producing chemical compounds, in which a solvent containing reagents is put under high pressure and temperature in an autoclave. Many substances dissolve better in the same solvent under such conditions than at standard conditions, enabling reactions that would not otherwise occur and leading to new compounds or polymorphs. Solvothermal synthesis is very similar to the hydrothermal route; both are typically conducted in a stainless steel autoclave, the only difference being that the precursor solution is usually non-aqueous. [1] Solvothermal synthesis has been used to prepare MOFs, [2][3] titanium dioxide, [4] graphene, [5] carbon spheres, [6] chalcogenides [7] and other materials. Besides water (hydrothermal synthesis), solvothermal syntheses make use of a large range of solvents, including ammonia, carbon dioxide, dimethylformamide, and various alcohols such as methanol, or glycols such as hexane-1,6-diol. [1][8][9] Formic acid decomposes at high temperatures to carbon dioxide and hydrogen or carbon monoxide and water. This property allows formic acid to be used as a reducing and carbon dioxide-rich reaction medium in which it is possible to form various oxides and carbonates. [8] The critical temperature and pressure of ammonia are 132.2 °C and 111 bar. Under these conditions, it is possible to obtain a range of amides, imides, and nitrides. Although its dielectric constant is lower than that of water, ammonia behaves as a polar solvent, especially at high pressures. [8]
https://en.wikipedia.org/wiki/Solvothermal_synthesis
In mathematics, Solèr's theorem is a result concerning certain infinite-dimensional vector spaces. It states that any orthomodular form that has an infinite orthonormal set is a Hilbert space over the real numbers, complex numbers or quaternions. [1][2] Originally proved by Maria Pia Solèr, the result is significant for quantum logic [3][4] and the foundations of quantum mechanics. [5][6] In particular, Solèr's theorem helps to fill a gap in the effort to use Gleason's theorem to rederive quantum mechanics from information-theoretic postulates. [7][8] It is also an important step in the Heunen–Kornell axiomatisation of the category of Hilbert spaces. [9] Physicist John C. Baez notes, "Nothing in the assumptions mentions the continuum: the hypotheses are purely algebraic. It therefore seems quite magical that [the division ring over which the Hilbert space is defined] is forced to be the real numbers, complex numbers or quaternions." [6] Writing a decade after Solèr's original publication, Pitowsky calls her theorem "celebrated". [7] Let K be a division ring. That means it is a ring in which one can add, subtract, multiply, and divide but in which the multiplication need not be commutative. Suppose this ring has a conjugation, i.e. an operation x ↦ x* satisfying the usual involution identities (sketched below). Consider a vector space V with scalars in K, and a mapping ⟨·,·⟩ : V × V → K which is K-linear in the left (or in the right) entry, satisfying the Hermitian identity (sketched below). Suppose this form is non-degenerate (one standard way of making this precise is sketched below). For any subspace S, let S ⊥ be the orthogonal complement of S. Call the subspace "closed" if S ⊥⊥ = S. Call this whole vector space, and the Hermitian form, "orthomodular" if for every closed subspace S the sum S + S ⊥ is the entire space. (The term "orthomodular" derives from the study of quantum logic. In quantum logic, the distributive law is taken to fail due to the uncertainty principle, and it is replaced with the "modular law", or in the case of infinite-dimensional Hilbert spaces, the "orthomodular law". [6]) A set of vectors u i ∈ V is called "orthonormal" if ⟨u i , u j ⟩ = δ ij . The result is this:
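A standard way of spelling out these conditions and the conclusion, as they are usually given in treatments of Solèr's theorem, is the following sketch; the precise wording varies from source to source.

```latex
% Standard formulations, as commonly stated; wording varies between sources.
\begin{gather*}
(x+y)^* = x^* + y^*, \qquad (xy)^* = y^* x^*, \qquad (x^*)^* = x
  \quad \text{(involution on } \mathbb{K}\text{)} \\
\langle u, v \rangle = \langle v, u \rangle^* \quad \text{for all } u, v \in V
  \quad \text{(Hermitian identity)} \\
\langle u, v \rangle = 0 \ \text{for all } v \in V \ \Longrightarrow\ u = 0
  \quad \text{(non-degeneracy)} \\
\textbf{Theorem (Sol\`er).}\ \text{If the orthomodular form } \langle \cdot,\cdot \rangle \text{ admits an infinite orthonormal sequence,} \\
\text{then } \mathbb{K} \text{ is } \mathbb{R},\ \mathbb{C} \text{ or } \mathbb{H},
  \text{ and } (V, \langle \cdot,\cdot \rangle) \text{ is the corresponding Hilbert space.}
\end{gather*}
```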
https://en.wikipedia.org/wiki/Solèr's_theorem
In materials science , the sol–gel process is a method for producing solid materials from small molecules. The method is used for the fabrication of metal oxides , especially the oxides of silicon (Si) and titanium (Ti). The process involves conversion of monomers in solution into a colloidal solution ( sol ) that acts as the precursor for an integrated network (or gel ) of either discrete particles or network polymers . Typical precursors are metal alkoxides . Sol–gel process is used to produce ceramic nanoparticles . In this chemical procedure, a " sol " (a colloidal solution) is formed that then gradually evolves towards the formation of a gel-like diphasic system containing both a liquid phase and solid phase whose morphologies range from discrete particles to continuous polymer networks. In the case of the colloid , the volume fraction of particles (or particle density) may be so low that a significant amount of fluid may need to be removed initially for the gel-like properties to be recognized. This can be accomplished in any number of ways. The simplest method is to allow time for sedimentation to occur, and then pour off the remaining liquid. Centrifugation can also be used to accelerate the process of phase separation . Removal of the remaining liquid (solvent) phase requires a drying process, which is typically accompanied by a significant amount of shrinkage and densification. The rate at which the solvent can be removed is ultimately determined by the distribution of porosity in the gel. The ultimate microstructure of the final component will clearly be strongly influenced by changes imposed upon the structural template during this phase of processing. Afterwards, a thermal treatment, or firing process, is often necessary in order to favor further polycondensation and enhance mechanical properties and structural stability via final sintering , densification, and grain growth . One of the distinct advantages of using this methodology as opposed to the more traditional processing techniques is that densification is often achieved at a much lower temperature. The precursor sol can be either deposited on a substrate to form a film (e.g., by dip-coating or spin coating ), cast into a suitable container with the desired shape (e.g., to obtain monolithic ceramics , glasses , fibers , membranes , aerogels ), or used to synthesize powders (e.g., microspheres , nanospheres ). [ 1 ] The sol–gel approach is a cheap and low-temperature technique that allows the fine control of the product's chemical composition. Even small quantities of dopants, such as organic dyes and rare-earth elements , can be introduced in the sol and end up uniformly dispersed in the final product. It can be used in ceramics processing and manufacturing as an investment casting material, or as a means of producing very thin films of metal oxides for various purposes. Sol–gel derived materials have diverse applications in optics , electronics , energy , space , (bio) sensors , medicine (e.g., controlled drug release ), reactive material , and separation (e.g., chromatography ) technology. The interest in sol–gel processing can be traced back in the mid-1800s with the observation that the hydrolysis of tetraethyl orthosilicate (TEOS) under acidic conditions led to the formation of SiO 2 in the form of fibers and monoliths. Sol–gel research grew to be so important that in the 1990s more than 35,000 papers were published worldwide on the process. 
[ 2 ] [ 3 ] [ 4 ] The sol–gel process is a wet-chemical technique used for the fabrication of both glassy and ceramic materials. In this process, the sol (or solution) evolves gradually towards the formation of a gel-like network containing both a liquid phase and a solid phase. Typical precursors are metal alkoxides and metal chlorides, which undergo hydrolysis and polycondensation reactions to form a colloid. The basic structure or morphology of the solid phase can range anywhere from discrete colloidal particles to continuous chain-like polymer networks. [ 5 ] [ 6 ] The term colloid is used primarily to describe a broad range of solid-liquid (and/or liquid-liquid) mixtures, all of which contain distinct solid (and/or liquid) particles which are dispersed to various degrees in a liquid medium. The term is specific to the size of the individual particles, which are larger than atomic dimensions but small enough to exhibit Brownian motion. If the particles are large enough, then their dynamic behavior in any given period of time in suspension would be governed by forces of gravity and sedimentation. But if they are small enough to be colloids, then their irregular motion in suspension can be attributed to the collective bombardment of a myriad of thermally agitated molecules in the liquid suspending medium, as described originally by Albert Einstein in his dissertation. Einstein concluded that this erratic behavior could adequately be described using the theory of Brownian motion, with sedimentation being a possible long-term result. This critical size range (or particle diameter) typically ranges from tens of angstroms (10 −10 m) to a few micrometres (10 −6 m). [ 7 ] In either case (discrete particles or continuous polymer network) the sol then evolves towards the formation of an inorganic network containing a liquid phase (gel). Formation of a metal oxide involves connecting the metal centers with oxo (M-O-M) or hydroxo (M-OH-M) bridges, therefore generating metal-oxo or metal-hydroxo polymers in solution. In both cases (discrete particles or continuous polymer network), the drying process serves to remove the liquid phase from the gel, yielding a micro-porous amorphous glass or micro-crystalline ceramic. Subsequent thermal treatment (firing) may be performed in order to favor further polycondensation and enhance mechanical properties. With the viscosity of a sol adjusted into a proper range, both optical quality glass fiber and refractory ceramic fiber can be drawn, which are used for fiber optic sensors and thermal insulation, respectively. In addition, uniform ceramic powders of a wide range of chemical composition can be formed by precipitation. The Stöber process is a well-studied example of polymerization of an alkoxide, specifically TEOS. The chemical formula for TEOS is given by Si(OC 2 H 5 ) 4 , or Si(OR) 4 , where the alkyl group R = C 2 H 5 . Alkoxides are ideal chemical precursors for sol–gel synthesis because they react readily with water. The reaction is called hydrolysis, because a hydroxyl ion becomes attached to the silicon atom as follows: Si(OR) 4 + H 2 O → HO−Si(OR) 3 + R−OH. Depending on the amount of water and catalyst present, hydrolysis may proceed to completion to silica: Si(OR) 4 + 2 H 2 O → SiO 2 + 4 R−OH. Complete hydrolysis often requires an excess of water and/or the use of a hydrolysis catalyst such as acetic acid or hydrochloric acid. Intermediate species including [(OR) 2 −Si−(OH) 2 ] or [(OR) 3 −Si−(OH)] may result as products of partial hydrolysis reactions. [ 1 ]
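The hydrolysis reactions just described, together with the condensation steps discussed in the next paragraph, can be collected in display form. This is the standard textbook scheme for an alkoxide Si(OR) 4 such as TEOS (R = C 2 H 5 ), written out here for clarity rather than reproduced from the cited sources:

\[ \mathrm{Si(OR)_4} + \mathrm{H_2O} \;\longrightarrow\; \mathrm{HO{-}Si(OR)_3} + \mathrm{ROH} \qquad \text{(partial hydrolysis)} \]
\[ \mathrm{Si(OR)_4} + 2\,\mathrm{H_2O} \;\longrightarrow\; \mathrm{SiO_2} + 4\,\mathrm{ROH} \qquad \text{(complete hydrolysis to silica)} \]
\[ \mathrm{(OR)_3Si{-}OH} + \mathrm{HO{-}Si(OR)_3} \;\longrightarrow\; \mathrm{(OR)_3Si{-}O{-}Si(OR)_3} + \mathrm{H_2O} \qquad \text{(water condensation)} \]
\[ \mathrm{(OR)_3Si{-}OR} + \mathrm{HO{-}Si(OR)_3} \;\longrightarrow\; \mathrm{(OR)_3Si{-}O{-}Si(OR)_3} + \mathrm{ROH} \qquad \text{(alcohol condensation)} \]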
Early intermediates result from two partially hydrolyzed monomers linked with a siloxane [Si−O−Si] bond: (OR) 3 Si−OH + HO−Si(OR) 3 → (OR) 3 Si−O−Si(OR) 3 + H−O−H, or (OR) 3 Si−OR + HO−Si(OR) 3 → (OR) 3 Si−O−Si(OR) 3 + R−O−H. Thus, polymerization is associated with the formation of a 1-, 2-, or 3-dimensional network of siloxane [Si−O−Si] bonds accompanied by the production of H−O−H and R−O−H species. By definition, condensation liberates a small molecule, such as water or alcohol. This type of reaction can continue to build larger and larger silicon-containing molecules by the process of polymerization. Thus, a polymer is a huge molecule (or macromolecule) formed from hundreds or thousands of units called monomers. The number of bonds that a monomer can form is called its functionality. Polymerization of silicon alkoxide, for instance, can lead to complex branching of the polymer, because a fully hydrolyzed monomer Si(OH) 4 is tetrafunctional (can branch or bond in 4 different directions). Alternatively, under certain conditions (e.g., low water concentration) fewer than 4 of the OR or OH groups (ligands) will be capable of condensation, so relatively little branching will occur. The mechanisms of hydrolysis and condensation, and the factors that bias the structure toward linear or branched structures, are the most critical issues of sol–gel science and technology. This reaction is favored in both basic and acidic conditions. Sonication is an efficient tool for the synthesis of polymers. The cavitational shear forces, which stretch out and break the chain in a non-random process, result in a lowering of the molecular weight and poly-dispersity. Furthermore, multi-phase systems are very efficiently dispersed and emulsified, so that very fine mixtures are provided. This means that ultrasound increases the rate of polymerisation over conventional stirring and results in higher molecular weights with lower polydispersities. Ormosils (organically modified silicates) are obtained when silane is added to gel-derived silica during the sol–gel process. The product is a molecular-scale composite with improved mechanical properties. Sono-Ormosils are characterized by a higher density than classic gels as well as an improved thermal stability. An explanation for this might be the increased degree of polymerization. [ 11 ] For single cation systems like SiO 2 and TiO 2 , hydrolysis and condensation processes naturally give rise to homogeneous compositions. For systems involving multiple cations, such as strontium titanate (SrTiO 3 ) and other perovskite systems, the concept of steric immobilisation becomes relevant. To avoid the formation of multiple phases of binary oxides as the result of differing hydrolysis and condensation rates, the entrapment of cations in a polymer network is an effective approach, generally termed the Pechini process. [ 12 ] In this process, a chelating agent is used, most often citric acid, to surround aqueous cations and sterically entrap them. Subsequently, a polymer network is formed to immobilize the chelated cations in a gel or resin. This is most often achieved by poly-esterification using ethylene glycol. The resulting polymer is then combusted under oxidising conditions to remove organic content and yield a product oxide with homogeneously dispersed cations. [ 13 ] If the liquid in a wet gel is removed under a supercritical condition, a highly porous and extremely low density material called an aerogel is obtained. By drying the gel by means of low-temperature treatments (25–100 °C), it is possible to obtain porous solid matrices called xerogels.
In addition, a sol–gel process was developed in the 1950s for the production of radioactive powders of UO 2 and ThO 2 for nuclear fuels , without generation of large quantities of dust. Differential stresses that develop as a result of non-uniform drying shrinkage are directly related to the rate at which the solvent can be removed, and thus highly dependent upon the distribution of porosity . Such stresses have been associated with a plastic-to-brittle transition in consolidated bodies, [ 15 ] and can yield to crack propagation in the unfired body if not relieved. In addition, any fluctuations in packing density in the compact as it is prepared for the kiln are often amplified during the sintering process, yielding heterogeneous densification. Some pores and other structural defects associated with density variations have been shown to play a detrimental role in the sintering process by growing and thus limiting end-point densities. Differential stresses arising from heterogeneous densification have also been shown to result in the propagation of internal cracks, thus becoming the strength-controlling flaws. [ 16 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ] It would therefore appear desirable to process a material in such a way that it is physically uniform with regard to the distribution of components and porosity, rather than using particle size distributions which will maximize the green density. The containment of a uniformly dispersed assembly of strongly interacting particles in suspension requires total control over particle-particle interactions. Monodisperse colloids provide this potential. [ 8 ] [ 9 ] [ 21 ] Monodisperse powders of colloidal silica, for example, may therefore be stabilized sufficiently to ensure a high degree of order in the colloidal crystal or polycrystalline colloidal solid which results from aggregation. The degree of order appears to be limited by the time and space allowed for longer-range correlations to be established. Such defective polycrystalline structures would appear to be the basic elements of nanoscale materials science, and, therefore, provide the first step in developing a more rigorous understanding of the mechanisms involved in microstructural evolution in inorganic systems such as sintered ceramic nanomaterials . [ 22 ] [ 23 ] Ultra-fine and uniform ceramic powders can be formed by precipitation. These powders of single and multiple component compositions can be produced at a nanoscale particle size for dental, biomedical , agrochemical , or catalytic applications. Powder abrasives , used in a variety of finishing operations, are made using a sol–gel type process. One of the more important applications of sol–gel processing is to carry out zeolite synthesis. Other elements (metals, metal oxides) can be easily incorporated into the final product and the silicate sol formed by this method is very stable. Semi-stable metal complexes can be used to produce sub-2 nm oxide particles without thermal treatment. During base-catalyzed synthesis, hydroxo (M-OH) bonds may be avoided in favor of oxo (M-O-M) using a ligand which is strong enough to prevent reaction in the hydroxo regime but weak enough to allow reaction in the oxo regime (see Pourbaix diagram ). [ 24 ] The applications for sol gel-derived products are numerous. [ 25 ] [ 26 ] [ 27 ] [ 28 ] [ 29 ] [ 30 ] For example, scientists have used it to produce the world's lightest materials and also some of its toughest ceramics. 
One of the largest application areas is thin films, which can be produced on a substrate by spin coating or dip-coating. Protective and decorative coatings, and electro-optic components can be applied to glass, metal and other types of substrates with these methods. Cast into a mold, and with further drying and heat-treatment, dense ceramic or glass articles with novel properties can be formed that cannot be created by any other method. [ citation needed ] Other coating methods include spraying, electrophoresis, inkjet [ 31 ] [ 32 ] printing, or roll coating. With the viscosity of a sol adjusted into a proper range, both optical and refractory ceramic fibers can be drawn, which are used for fiber optic sensors and thermal insulation, respectively. Thus, many ceramic materials, both glassy and crystalline, have found use in various forms from bulk solid-state components to high surface area forms such as thin films, coatings and fibers. [ 10 ] [ 33 ] Also, thin films have found their application in the electronic field [ 34 ] and can be used as sensitive components of resistive gas sensors. [ 35 ] Sol-gel technology has been applied for controlled release of fragrances and drugs. [ 36 ] Macroscopic optical elements and active optical components as well as large area hot mirrors, cold mirrors, lenses, and beam splitters can be made by the sol–gel route. In the processing of high performance ceramic nanomaterials with superior opto-mechanical properties under adverse conditions, the size of the crystalline grains is determined largely by the size of the crystalline particles present in the raw material during the synthesis or formation of the object. Thus a reduction of the original particle size well below the wavelength of visible light (~500 nm) eliminates much of the light scattering, resulting in a translucent or even transparent material. Furthermore, microscopic pores in sintered ceramic nanomaterials, mainly trapped at the junctions of microcrystalline grains, cause light to scatter and prevent true transparency. The total volume fraction of these nanoscale pores (both intergranular and intragranular porosity) must be less than 1% for high-quality optical transmission, i.e. the density has to be 99.99% of the theoretical crystalline density. [ 37 ] [ 38 ]
https://en.wikipedia.org/wiki/Sol–gel_process
In cellular biology, the term somatic (derived from the French somatique, which comes from Ancient Greek σωματικός (sōmatikós, "bodily"), from σῶμα (sôma, "body")) [ 1 ] [ 2 ] is often used to refer to the cells of the body, in contrast to the reproductive (germline) cells, which usually give rise to the egg or sperm (or other gametes in other organisms). These somatic cells are diploid, containing two copies of each chromosome, whereas germ cells are haploid, as they only contain one copy of each chromosome (in preparation for fertilisation). Although under normal circumstances all somatic cells in an organism contain identical DNA, they develop a variety of tissue-specific characteristics. This process, called differentiation, occurs through epigenetic and regulatory alterations. The grouping of similar cells and tissues creates the foundation for organs. Somatic mutations are changes to the genetics of a multicellular organism that are not passed on to its offspring through the germline. Most cancers are due to somatic mutations. Somatic is also defined as relating to the wall of the body cavity, particularly as distinguished from the head, limbs, or viscera. It is also used in the term somatic nervous system, which is the portion of the vertebrate nervous system that regulates voluntary movements of the body. The frequency of mutations in mouse somatic tissue (brain, liver, Sertoli cells) was compared to the mutation frequency in male germline cells at sequential stages of spermatogenesis. [ 3 ] The spontaneous mutation frequency was found to be significantly higher (5- to 10-fold) in the somatic cell types than in the male germline cells. [ 3 ] In female mice, somatic cells were also found to have a higher mutation frequency than germline cells. [ 4 ] It was suggested that elevated levels of DNA repair enzymes play a prominent role in the lower mutation frequency of male and female germline cells, and that enhanced genetic integrity is a fundamental characteristic of germline cells. [ 4 ] DNA repair processes can remove DNA damages that would otherwise, upon DNA replication, cause mutation.
https://en.wikipedia.org/wiki/Somatic_(biology)
In cellular biology, a somatic cell (from Ancient Greek σῶμα (sôma) 'body'), or vegetal cell, is any biological cell forming the body of a multicellular organism other than a gamete, germ cell, gametocyte or undifferentiated stem cell. [ 1 ] Somatic cells compose the body of an organism and divide through mitosis. In contrast, gametes derive from meiosis within the germ cells of the germline and they fuse during sexual reproduction. Stem cells also can divide through mitosis, but are different from somatic cells in that they differentiate into diverse specialized cell types. In mammals, somatic cells make up all the internal organs, skin, bones, blood and connective tissue, while mammalian germ cells give rise to spermatozoa and ova which fuse during fertilization to produce a cell called a zygote, which divides and differentiates into the cells of an embryo. There are approximately 220 types of somatic cell in the human body. [ 1 ] Theoretically, these cells are not germ cells (the source of gametes); they transmit their mutations to their cellular descendants (if they have any), but not to the organism's descendants. However, in sponges, non-differentiated somatic cells form the germ line and, in Cnidaria, differentiated somatic cells are the source of the germline. Mitotic cell division is only seen in diploid somatic cells. Only some cells, such as germ cells, take part in reproduction. [ 2 ] As multicellularity is theorized to have evolved many times, [ 3 ] so too did sterile somatic cells. [ citation needed ] The evolution of an immortal germline producing specialized somatic cells involved the emergence of mortality, and can be viewed in its simplest version in volvocine algae. [ 4 ] Those species with a separation between sterile somatic cells and a germline are called Weismannists. Weismannist development is relatively rare (e.g., vertebrates, arthropods, Volvox), as many species have the capacity for somatic embryogenesis (e.g., land plants, most algae, and numerous invertebrates). [ 5 ] [ 6 ] Like all cells, somatic cells contain DNA arranged in chromosomes. If a somatic cell contains chromosomes arranged in pairs, it is called diploid and the organism is called a diploid organism. The gametes of diploid organisms contain only single unpaired chromosomes and are called haploid. Each pair of chromosomes comprises one chromosome inherited from the father and one inherited from the mother. In humans, somatic cells contain 46 chromosomes organized into 23 pairs. By contrast, gametes of diploid organisms contain only half as many chromosomes. In humans, this is 23 unpaired chromosomes. When two gametes (i.e. a spermatozoon and an ovum) meet during conception, they fuse together, creating a zygote. Due to the fusion of the two gametes, a human zygote contains 46 chromosomes (i.e. 23 pairs). [ citation needed ] A large number of species have the chromosomes in their somatic cells arranged in fours ("tetraploid") or even sixes ("hexaploid"). Thus, they can have diploid or even triploid germline cells. An example of this is the modern cultivated species of wheat, Triticum aestivum L., a hexaploid species whose somatic cells contain six copies of every chromatid. [ citation needed ] The frequency of spontaneous mutations is significantly lower in advanced male germ cells than in somatic cell types from the same individual.
[ 7 ] Female germ cells also show a mutation frequency that is lower than that in corresponding somatic cells and similar to that in male germ cells. [ 8 ] These findings appear to reflect employment of more effective mechanisms to limit the initial occurrence of spontaneous mutations in germ cells than in somatic cells. Such mechanisms likely include elevated levels of DNA repair enzymes that ameliorate most potentially mutagenic DNA damages . [ 8 ] In recent years, the technique of cloning whole organisms has been developed in mammals, allowing almost identical genetic clones of an animal to be produced. One method of doing this is called " somatic cell nuclear transfer " and involves removing the nucleus from a somatic cell, usually a skin cell. This nucleus contains all of the genetic information needed to produce the organism it was removed from. This nucleus is then injected into an ovum of the same species which has had its own genetic material removed. [ 9 ] The ovum now no longer needs to be fertilized, because it contains the correct amount of genetic material (a diploid number of chromosomes ). In theory, the ovum can be implanted into the uterus of a same-species animal and allowed to develop. The resulting animal will be a nearly genetically identical clone to the animal from which the nucleus was taken. The only difference is caused by any mitochondrial DNA that is retained in the ovum, which is different from the cell that donated the nucleus. In practice, this technique has so far been problematic, although there have been a few high-profile successes, such as Dolly the Sheep (July 5, 1996 - February 14, 2003) [ 10 ] and, more recently, Snuppy (April 24, 2005 - May 2015), the first cloned dog . [ 11 ] Somatic cells have also been collected in the practice of biobanking. The cryoconservation of animal genetic resources is a means of conserving animal genetic material in response to decreasing ecological biodiversity. [ 12 ] As populations of living organisms fall so does their genetic diversity. This places species long-term survivability at risk. Biobanking aims to preserve biologically viable cells through long-term storage for later use. Somatic cells have been stored with the hopes that they can be reprogrammed into induced pluripotent stem cells (iPSCs), which can then differentiate into viable reproductive cells. [ 13 ] Development of biotechnology has allowed for the genetic manipulation of somatic cells, whether for the modelling of chronic disease or for the prevention of malaise conditions. [ 14 ] [ 15 ] Two current means of gene editing are the use of transcription activator-like effector nucleases (TALENs) or clustered regularly interspaced short palindromic repeats (CRISPR). [ citation needed ] Genetic engineering of somatic cells has resulted in some controversies , [ 16 ] although the International Summit on Human Gene Editing has released a statement in support of genetic modification of somatic cells, as the modifications thereof are not passed on to offspring. [ 17 ] In mammals a high level of repair and maintenance of cellular DNA appears to be beneficial early in life. However, some types of cell, such as those of the brain and muscle, undergo a transition from mitotic cell division to a post-mitotic (non-dividing) condition during early development, and this transition is accompanied by a reduction in DNA repair capability. 
[ 18 ] [ 19 ] [ 20 ] This reduction may be an evolutionary adaptation permitting the diversion of cellular resources that were earlier used for DNA repair, as well as for DNA replication and cell division, to higher-priority neuronal and muscular functions. An effect of these reductions is to allow increased accumulation of DNA damage, likely contributing to cellular aging.
https://en.wikipedia.org/wiki/Somatic_cell
A somatic cell count ( SCC ) is a cell count of somatic cells in a fluid specimen, usually milk. In dairying, the SCC is an indicator of the quality of milk—specifically, its low likelihood of containing harmful bacteria, and thus its high food safety. White blood cells (leukocytes) constitute the majority of somatic cells in question. The number of somatic cells increases in response to pathogenic bacteria like Staphylococcus aureus, a cause of mastitis. The SCC is quantified as cells per milliliter. General agreement rests on a reference range of less than 100,000 cells/mL for uninfected cows and greater than 250,000 for cows infected with significant pathogen levels. [ 1 ] [ 2 ] Several tests, such as the Bartovation SCC cow's milk test and the California mastitis test, provide a cow-side measure of somatic cell count. [ 3 ] The somatic cell count in the milk also increases after calving, when colostrum is produced. The methods of determining Grade A milk quality are well established, and are based on the somatic cell count and the bacteria plate count. Generally a lower somatic cell count indicates better animal health, while the bacteria plate count indicates improved equipment sanitation. Somatic cells originate only from inside the animal's udder, while the bacteria are usually from external contaminations, such as insufficient cleaning of the milk transport equipment or insufficient external cleansing of the cow's udder and teats prior to milking. Milking equipment can also be accidentally knocked or kicked off an animal onto the floor, and contaminants on the barn floor can be sucked into the milk line by the system vacuum. A filter sock or filter disk in the pipeline prevents large particulate contaminants from entering the milk bulk tank, but cannot remove bacterial contamination once it has occurred. For example, as defined by the State of Indiana administrative code, [ 4 ] grade A milk shall meet the following standards: [ 5 ] Milk not meeting these standards shall be designated as undergrade. Undergrade milk may not be sold for human consumption or processing into products for human consumption. As established, these measurements are taken daily from the milk bulk tank and not from individual cows. This is because testing of individual animals at each milking would be expensive, but it also means that milk from a sick cow is diluted and averaged down by the healthy animals. Recently, technological advances have allowed the dairy producer to test animals individually for SCC at every milking. The huge bulk tanks at large farms can accommodate more sick animals in the herd without the sick animals affecting the overall milk quality rating. However, many different state and governmental agencies (including the FDA) inspect each load of milk delivered to the processing facility, as well as the processing facilities themselves, to ensure that all milk processed through those facilities is safe for all consumers. [ 6 ] As discussed in the paper Guidelines for Using the DHI Somatic Cell Count Program : [ 7 ] In Canada, [ 8 ] [ 9 ] the European Union, Australia, New Zealand, Switzerland, and some US states (e.g., Washington [ 10 ] ), the somatic cell count shall be not more than 400,000 cells per milliliter. The somatic cell count limit is 750,000 in the majority of the USA [ 11 ] and 1,000,000 in Brazil. [ 12 ] [ 13 ] Bacteria in milk can come from sources other than the animal.
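As a rough illustration of how the thresholds quoted above might be applied, the sketch below classifies a bulk-tank SCC reading against a configurable regulatory limit. The function name, default limit, and output wording are illustrative assumptions, not part of any official testing protocol:

# Illustrative thresholds taken from the article text (cells per mL).
REFERENCE_UNINFECTED = 100_000   # typical reference for uninfected cows
INFECTION_INDICATOR = 250_000    # suggests infection with significant pathogen levels
REGULATORY_LIMITS = {
    "Canada/EU/Australia/NZ/Switzerland": 400_000,
    "USA (most states)": 750_000,
    "Brazil": 1_000_000,
}

def classify_scc(scc_per_ml: int, limit: int = 400_000) -> str:
    """Classify a somatic cell count reading (illustrative only)."""
    if scc_per_ml < REFERENCE_UNINFECTED:
        health = "consistent with an uninfected herd"
    elif scc_per_ml > INFECTION_INDICATOR:
        health = "suggests infection (e.g. mastitis)"
    else:
        health = "intermediate; further testing advisable"
    legal = "within" if scc_per_ml <= limit else "above"
    return f"{scc_per_ml} cells/mL: {health}; {legal} the {limit} cells/mL limit"

print(classify_scc(320_000))  # example bulk-tank reading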
Over time, the milking pipeline and equipment can become coated with residues such as milkstone, which are not removed by standard detergents and require periodic flushing of the equipment with high-strength corrosives. Automatic washing equipment for the bulk tank may not effectively clean all interior surfaces, and does not clean the exterior of the bulk tank at all. Milk processors and co-ops purchasing milk routinely reward farmers for having the lowest possible SCC values via "quality bonuses" added to each milk payment to the dairyman.
https://en.wikipedia.org/wiki/Somatic_cell_count
In genetics and developmental biology, somatic cell nuclear transfer ( SCNT ) is a laboratory strategy for creating a viable embryo from a body cell and an egg cell. The technique consists of taking an enucleated oocyte (egg cell) and implanting a donor nucleus from a somatic (body) cell. It is used in both therapeutic and reproductive cloning. In 1996, Dolly the sheep became famous for being the first successful case of the reproductive cloning of a mammal. [ 1 ] In January 2018, a team of scientists in Shanghai announced the successful cloning of two female crab-eating macaques (named Zhong Zhong and Hua Hua) from foetal nuclei. [ 2 ] "Therapeutic cloning" refers to the potential use of SCNT in regenerative medicine; this approach has been championed as an answer to the many issues concerning embryonic stem cells (ESCs) and the destruction of viable embryos for medical use, though questions remain on how homologous the two cell types truly are. Somatic cell nuclear transfer is a technique for cloning in which the nucleus of a somatic cell is transferred to the cytoplasm of an enucleated egg. After the transfer, cytoplasmic factors in the egg act on the somatic nucleus, and the reconstructed cell becomes a zygote. The egg develops to the blastocyst stage, and embryonic stem cells can then be derived from the blastocyst's inner cell mass. [ 3 ] The first mammal to be developed by this technique was Dolly the sheep, in 1996. [ 4 ] Although Dolly is generally recognized as the first animal to be cloned using this technique, earlier instances of SCNT date back to the 1950s. In particular, the research of Sir John Gurdon in 1958 entailed the cloning of Xenopus laevis utilizing the principles of SCNT. [ 5 ] In short, the experiment consisted of inducing a female specimen to ovulate, at which point her eggs were harvested. From here, the egg was enucleated using ultra-violet irradiation to disable the egg's pronucleus. At this point, the prepared egg cell and nucleus from the donor cell were combined, and then incubation and eventual development into a tadpole proceeded. [ 5 ] Gurdon's application of SCNT differs from more modern applications and even applications used on other model systems of the time (i.e., Rana pipiens) due to his usage of UV irradiation to enucleate the egg instead of using a pipette to remove the nucleus from the egg. [ 6 ] The process of somatic cell nuclear transfer involves two different cells. The first is a female gamete, known as the ovum (egg/oocyte). In human SCNT experiments, these eggs are obtained through consenting donors, utilizing ovarian stimulation. The second is a somatic cell, meaning a cell of the body; skin cells, fat cells, and liver cells are only a few examples. The genetic material of the donor egg cell is removed and discarded, leaving it 'deprogrammed.' What is left is a somatic cell and an enucleated egg cell. These are then fused by inserting the somatic cell into the 'empty' ovum. [ 7 ] After being inserted into the egg, the somatic cell nucleus is reprogrammed by its host egg cell. The ovum, now containing the somatic cell's nucleus, is stimulated with a shock and will begin to divide. The egg is now viable and capable of producing an adult organism containing all necessary genetic information from just one parent. Development will ensue normally and, after many mitotic divisions, the single cell forms a blastocyst (an early stage embryo with about 100 cells) with an identical genome to the original organism (i.e. a clone). [ 8 ]
Stem cells can then be obtained by the destruction of this clone embryo for use in therapeutic cloning; in the case of reproductive cloning, the clone embryo is instead implanted into a host mother for further development and brought to term. Conventional SCNT requires the use of micromanipulators, which are expensive machines used to accurately manipulate cells. [ 9 ] Using the micromanipulator, a scientist makes an opening in the zona pellucida and sucks out the egg cell's original nucleus using a pipette. They then inject the donor nucleus through another opening, using a different pipette. [ 10 ] Alternatively, electric energy can be applied to fuse the empty egg cell with a donor cell containing a nucleus. [ 9 ] An alternative technique called "handmade cloning" was described by Indian scientists in 2001. This technique requires no use of a micromanipulator and has been used for the cloning of several livestock species. [ 11 ] Removal of the nucleus can be done chemically, by centrifuge, or with the use of a blade. The empty egg is glued to the donor cell with phytohaemagglutinin, then fused using electricity. (If a blade is used, two fusion steps would be required: the first fusion is between the donor and an empty half-egg, the second between the half-size "demi-embryo" and another empty half-egg.) [ 9 ] Somatic cell nuclear transplantation has become a focus of study in stem cell research. The aim of carrying out this procedure is to obtain pluripotent cells from a cloned embryo. These cells are genetically matched to the donor organism from which they came. This gives them the ability to create patient-specific pluripotent cells, which could then be used in therapies or disease research. [ 12 ] Embryonic stem cells are undifferentiated cells of an embryo. These cells are deemed to be pluripotent because they have the ability to give rise to all of the tissues found in an adult organism. This ability allows stem cells to create any cell type, which could then be transplanted to replace damaged or destroyed cells. Controversy surrounds human ESC work due to the destruction of viable human embryos, leading scientists to seek alternative methods of obtaining pluripotent stem cells; SCNT is one such method. A potential use of stem cells genetically matched to a patient would be to create cell lines that have genes linked to a patient's particular disease. By doing so, an in vitro model could be created that would be useful for studying that particular disease, potentially uncovering its pathophysiology and leading to therapies. [ 13 ] For example, if a person with Parkinson's disease donated their somatic cells, the stem cells resulting from SCNT would have genes that contribute to Parkinson's disease. The disease-specific stem cell lines could then be studied in order to better understand the condition. [ 14 ] Another application of SCNT stem cell research is using the patient-specific stem cell lines to generate tissues or even organs for transplant into the specific patient. [ 15 ] The resulting cells would be genetically identical to the somatic cell donor, thus avoiding any complications from immune system rejection. [ 14 ] [ 16 ] Only a handful of labs in the world are currently using SCNT techniques in human stem cell research.
In the United States, scientists at the Harvard Stem Cell Institute, the University of California San Francisco, the Oregon Health & Science University, [ 17 ] Stemagen (La Jolla, CA) and possibly Advanced Cell Technology are currently researching a technique to use somatic cell nuclear transfer to produce embryonic stem cells. [ 18 ] In the United Kingdom, the Human Fertilisation and Embryology Authority has granted permission to research groups at the Roslin Institute and the Newcastle Centre for Life. [ 19 ] SCNT may also be occurring in China. [ 20 ] Though there have been numerous successes with cloning animals, questions remain concerning the mechanisms of reprogramming in the ovum. Despite many attempts, success in creating human nuclear transfer embryonic stem cells has been limited. A problem lies in the human cell's ability to form a blastocyst: the cells fail to progress past the eight-cell stage of development. This is thought to result from the somatic cell nucleus being unable to turn on embryonic genes crucial for proper development. These earlier experiments used procedures developed in non-primate animals, with little success. A research group from the Oregon Health & Science University successfully demonstrated SCNT procedures developed for primates using skin cells. The key to their success was utilizing oocytes in metaphase II (MII) of the cell cycle. Egg cells in MII contain special factors in the cytoplasm that have a particular ability to reprogram implanted somatic cell nuclei into a pluripotent state. When the ovum's nucleus is removed, the cell loses its genetic information. This has been blamed for why enucleated eggs are hampered in their reprogramming ability. It is theorized that the critical embryonic genes are physically linked to oocyte chromosomes, so enucleation negatively affects these factors. Another possibility is that removing the egg nucleus or inserting the somatic nucleus causes damage to the cytoplast, affecting reprogramming ability. Taking this into account, the research group applied their new technique in an attempt to produce human SCNT stem cells. In May 2013, the Oregon group reported the successful derivation of human embryonic stem cell lines derived through SCNT, using fetal and infant donor cells. Using MII oocytes from volunteers and their improved SCNT procedure, human clone embryos were successfully produced. These embryos were of poor quality, lacking a substantial inner cell mass and having a poorly constructed trophectoderm. The imperfect embryos prevented the acquisition of human ESCs. The addition of caffeine during the removal of the ovum's nucleus and fusion of the somatic cell and the egg improved blastocyst formation and ESC isolation. The ESCs obtained were found to be capable of producing teratomas, expressed pluripotency transcription factors, and had a normal 46,XX karyotype, indicating that these SCNT-derived cells were in fact ESC-like. [ 17 ] This was the first instance of successfully using SCNT to reprogram human somatic cells. This study used fetal and infantile somatic cells to produce their ESCs. In April 2014, an international research team expanded on this breakthrough. There remained the question of whether the same success could be accomplished using adult somatic cells. Epigenetic and age-related changes were thought to possibly hinder an adult somatic cell's ability to be reprogrammed.
Implementing the procedure pioneered by the Oregon research group, they were indeed able to grow stem cells generated by SCNT using adult cells from two donors aged 35 and 75, indicating that age does not impede a cell's ability to be reprogrammed. [ 21 ] [ 22 ] In late April 2014, the New York Stem Cell Foundation was successful in creating SCNT stem cells derived from adult somatic cells. One of these lines of stem cells was derived from the donor cells of a type 1 diabetic. The group was then able to successfully culture these stem cells and induce differentiation. When injected into mice, cells of all three of the germ layers successfully formed. The most significant of these cells were those that expressed insulin and were capable of secreting the hormone. [ 23 ] These insulin-producing cells could be used for replacement therapy in diabetics, demonstrating real SCNT stem cell therapeutic potential. The impetus for SCNT-based stem cell research has been decreased by the development and improvement of alternative methods of generating stem cells. Methods to reprogram normal body cells into pluripotent stem cells were developed in humans in 2007. The following year, this method achieved a key goal of SCNT-based stem cell research: the derivation of pluripotent stem cell lines that have all genes linked to various diseases. [ 24 ] Some scientists working on SCNT-based stem cell research have recently moved to the new methods of induced pluripotent stem cells, though recent studies have called into question how similar iPS cells are to embryonic stem cells. Epigenetic memory in iPS cells affects the cell lineages they can differentiate into. For instance, an iPS cell derived from a blood cell using only the Yamanaka factors will be more efficient at differentiating into blood cells, while it will be less efficient at creating a neuron. [ 25 ] Recent studies indicate, however, that changes to the epigenetic memory of iPSCs using small molecules can reset them to an almost naive state of pluripotency. [ 26 ] [ 27 ] Studies have even shown that via tetraploid complementation, an entire viable organism can be created solely from iPSCs. [ 28 ] SCNT stem cells have been found to face similar challenges. The cause of low yields in bovine SCNT cloning has, in recent years, been attributed to the previously hidden epigenetic memory of the somatic cells that were being introduced into the oocyte. [ 29 ] This technique is currently the basis for cloning animals (such as the famous Dolly the sheep), [ 30 ] and has been proposed as a possible way to clone humans. Using SCNT in reproductive cloning has proven difficult, with limited success. High fetal and neonatal death rates make the process very inefficient. Resulting cloned offspring are also plagued with developmental and imprinting disorders in non-human species. For these reasons, along with moral and ethical objections, reproductive cloning in humans is proscribed in more than 30 countries. [ 31 ] Most researchers believe that in the foreseeable future it will not be possible to use the current cloning technique to produce a human clone that will develop to term. It remains a possibility, though critical adjustments will be required to overcome current limitations during early embryonic development in human SCNT. [ 32 ] [ 33 ] There is also the potential for treating diseases associated with mutations in mitochondrial DNA.
Recent studies show that SCNT of the nucleus of a body cell afflicted with one of these diseases into a healthy oocyte prevents the inheritance of the mitochondrial disease. This treatment does not involve cloning but would produce a child with three genetic parents: a father provides a sperm cell, one mother provides the egg nucleus, and another mother provides the enucleated egg cell. [ 15 ] In 2018, the first successful cloning of primates using somatic cell nuclear transfer, the same method that produced Dolly the sheep, was reported, with the birth of two live female clones (crab-eating macaques named Zhong Zhong and Hua Hua). [ 2 ] [ 34 ] [ 35 ] [ 36 ] [ 37 ] Interspecies nuclear transfer (iSCNT) is a means of somatic cell nuclear transfer being used to facilitate the rescue of endangered species, or even to restore species after their extinction. The technique is similar to conventional SCNT cloning, which is typically performed between domestic animals and rodents, or where there is a ready supply of oocytes and surrogate animals. However, the cloning of highly endangered or extinct species requires the use of an alternative method of cloning. Interspecies nuclear transfer utilizes a host and a donor of two different organisms that are closely related species and within the same genus. In 2000, Robert Lanza was able to produce a cloned fetus of a gaur, Bos gaurus, combining it successfully with a domestic cow, Bos taurus. [ 38 ] In 2017, the first cloned Bactrian camel was born through iSCNT, using oocytes of a dromedary camel and skin fibroblast cells of an adult Bactrian camel as donor nuclei. [ 39 ] Somatic cell nuclear transfer (SCNT) can be inefficient due to stresses placed on both the egg cell and the introduced nucleus. This can result in a low percentage of successfully reprogrammed cells. For example, in 1996 Dolly the sheep was born after 277 eggs were used for SCNT, which created 29 viable embryos, an overall efficiency of only about 0.3%. [ 40 ] Only three of these embryos survived until birth, and only one survived to adulthood. [ 30 ] Millie, the offspring that survived, took 95 attempts to produce. [ 40 ] Because the procedure was not automated and had to be performed manually under a microscope, SCNT was very resource intensive. Another reason for the high mortality rate among cloned offspring is that the fetus grows larger than even other large offspring, resulting in death soon after birth. [ 40 ] The biochemistry involved in reprogramming the differentiated somatic cell nucleus and activating the recipient egg was also far from understood. Another limitation involves the use of one-cell embryos during SCNT: when using just one-cell cloned embryos, the experiment has a 65% chance of failing in the process of making a morula or blastocyst. The biochemistry also has to be extremely precise, as most late-term cloned fetus deaths are the result of inadequate placentation. [ 40 ] However, by 2014, researchers were reporting success rates of 70-80% with cloning pigs [ 41 ] and in 2016 a Korean company, Sooam Biotech, was reported to be producing 500 cloned embryos a day. [ 42 ] In SCNT, not all of the donor cell's genetic information is transferred, as the donor cell's mitochondria, which contain their own mitochondrial DNA, are left behind. The resulting hybrid cells retain those mitochondrial structures which originally belonged to the egg. As a consequence, clones such as Dolly that are born from SCNT are not perfect copies of the donor of the nucleus.
This fact may also hamper the potential benefits of SCNT-derived tissues and organs for therapy, as there may be an immune response to the non-self mtDNA after transplant. Additionally, the genes found in the mitochondrial genome need to communicate with the cell's nuclear genome, and a failure of somatic cell nuclear reprogramming can disrupt this communication, causing SCNT to fail. [ 43 ] Epigenetic factors play an important role in the success or failure of SCNT attempts. The varying gene expression of a previously activated cell and its mRNAs may lead to overexpression, underexpression, or in some cases non-functional genes, which will affect the developing fetus. [ 44 ] One example of an epigenetic limitation on SCNT is the regulation of histone methylation. Differing regulation of these histone methylation genes can directly affect the transcription of the developing genome, causing failure of SCNT. [ 45 ] Another contributing factor to failure of SCNT is X chromosome inactivation in the early development of the embryo. A non-coding gene called XIST is responsible for inactivating one X chromosome during development; however, in SCNT this gene can be abnormally regulated, causing mortality in the developing fetus. [ 45 ] Nuclear transfer techniques present a different set of ethical considerations than those associated with the use of other stem cells like embryonic stem cells, which are controversial for their requirement to destroy an embryo. These different considerations have led some individuals and organizations that are not opposed to human embryonic stem cell research to be concerned about, or opposed to, SCNT research. [ 46 ] [ 47 ] [ 48 ] One concern is that blastula creation in SCNT-based human stem cell research will lead to the reproductive cloning of humans. Both processes use the same first step: the creation of a nuclear transferred embryo, most likely via SCNT. Those who hold this concern often advocate for strong regulation of SCNT to preclude implantation of any derived products for the intention of human reproduction, [ 49 ] or its prohibition. [ 46 ] A second important concern is the appropriate source of the eggs that are needed. SCNT requires human egg cells, which can only be obtained from women. The most common source of these eggs today is eggs that are produced in excess of clinical need during IVF treatment. This is a minimally invasive procedure, but it does carry some health risks, such as ovarian hyperstimulation syndrome. One vision for successful stem cell therapies is to create custom stem cell lines for patients. Each custom stem cell line would consist of a collection of identical stem cells each carrying the patient's own DNA, thus reducing or eliminating any problems with rejection when the stem cells were transplanted for treatment. For example, to treat a man with Parkinson's disease, a cell nucleus from one of his cells would be transplanted by SCNT into an egg cell from an egg donor, creating a unique lineage of stem cells almost identical to the patient's own cells. (There would be differences. For example, the mitochondrial DNA would be the same as that of the egg donor. In comparison, his own cells would carry the mitochondrial DNA of his mother.) Potentially millions of patients could benefit from stem cell therapy, and each patient would require a large number of donated eggs in order to successfully create a single custom therapeutic stem cell line.
Such large numbers of donated eggs would exceed the number of eggs currently left over and available from couples trying to have children through assisted reproductive technology . Therefore, healthy young women would need to be induced to sell eggs to be used in the creation of custom stem cell lines that could then be purchased by the medical industry and sold to patients. It is so far unclear where all these eggs would come from. Stem cell experts consider it unlikely that such large numbers of human egg donations would occur in a developed country because of the unknown long-term public health effects of treating large numbers of healthy young women with heavy doses of hormones in order to induce hyper-ovulation (ovulating several eggs at once). Although such treatments have been performed for several decades now, the long-term effects have not been studied or declared safe to use on a large scale on otherwise healthy women. Longer-term treatments with much lower doses of hormones are known to increase the rate of cancer decades later. Whether hormone treatments to induce hyper-ovulation could have similar effects is unknown. There are also ethical questions surrounding paying for eggs. In general, marketing body parts is considered unethical and is banned in most countries. [ why? ] Human eggs have been a notable exception to this rule for some time. To address the problem of creating a human egg market, some stem cell researchers are investigating the possibility of creating artificial eggs. If successful, human egg donations would not be needed to create custom stem cell lines. However, this technology may be a long way off. SCNT involving human cells is currently legal for research purposes in the United Kingdom , having been incorporated into the Human Fertilisation and Embryology Act 1990 . [ 50 ] [ 7 ] Permission must be obtained from the Human Fertilisation and Embryology Authority in order to perform or attempt SCNT. In the United States, the practice remains legal, as it has not been addressed by federal law. [ 51 ] However, in 2002, a moratorium on United States federal funding for SCNT prohibits funding the practice for the purposes of research. Thus, though legal, SCNT cannot be federally funded. [ 52 ] American scholars have recently argued that because the product of SCNT is a clone embryo, rather than a human embryo, these policies are morally wrong and should be revised. [ 53 ] In 2003, the United Nations adopted a proposal submitted by Costa Rica , calling on member states to "prohibit all forms of human cloning in as much as they are incompatible with human dignity and the protection of human life." [ 54 ] This phrase may include SCNT, depending on interpretation. The Council of Europe's Convention on Human Rights and Biomedicine and its Additional Protocol to the Convention for the Protection of Human Rights and Dignity of the Human Being with regard to the Application of Biology and Medicine, on the Prohibition of Cloning Human Being appear to ban SCNT of human beings. Of the Council's 45 member states, the Convention has been signed by 31 and ratified by 18. The Additional Protocol has been signed by 29 member nations and ratified by 14. [ 55 ]
https://en.wikipedia.org/wiki/Somatic_cell_nuclear_transfer
Somatic effort refers to the total investment of an organism in its own development , differentiation , and maintenance which consequently increases its reproductive potential. [ 1 ]
https://en.wikipedia.org/wiki/Somatic_effort
Somatic embryogenesis is an artificial process in which a plant or embryo is derived from a single somatic cell. [ 1 ] Somatic embryos are formed from plant cells that are not normally involved in the development of embryos, i.e. ordinary plant tissue. No endosperm or seed coat is formed around a somatic embryo. Cells derived from competent source tissue are cultured to form an undifferentiated mass of cells called a callus. Plant growth regulators in the tissue culture medium can be manipulated to induce callus formation and subsequently changed to induce embryos to form from the callus. The ratio of different plant growth regulators required to induce callus or embryo formation varies with the type of plant. [ 2 ] Somatic embryos are mainly produced in vitro and for laboratory purposes, using either solid or liquid nutrient media which contain plant growth regulators (PGRs). The main PGRs used are auxins, but the medium can also contain cytokinins in smaller amounts. [ 3 ] Shoots and roots are monopolar while somatic embryos are bipolar, allowing them to form a whole plant without culturing on multiple media types. Somatic embryogenesis has served as a model to understand the physiological and biochemical events that occur during plant developmental processes, as well as a component of biotechnological advancement. [ 4 ] The first documentation of somatic embryogenesis was by Steward et al. in 1958 and Reinert in 1959 with carrot cell suspension cultures. [ 5 ] [ 6 ] Somatic embryogenesis has been described to occur in two ways: directly or indirectly. [ 7 ] Direct embryogenesis occurs when embryos are started directly from explant tissue, creating an identical clone; in other words, formation of an embryo from the explant without callus formation is called direct embryogenesis. Indirect embryogenesis occurs when explants produce undifferentiated, or partially differentiated, cells (often referred to as callus), which are then maintained or differentiated into plant tissues such as leaf, stem, or roots. 2,4-Dichlorophenoxyacetic acid (2,4-D), 6-Benzylaminopurine (BAP) and gibberellic acid (GA) have been used for the development of indirect somatic embryos in strawberry ( Fragaria ananassa ). [ 8 ] Plant regeneration via somatic embryogenesis occurs in five steps: initiation of embryogenic cultures, proliferation of embryogenic cultures, prematuration of somatic embryos, maturation of somatic embryos and plant development on nonspecific media. Initiation and proliferation occur on a medium rich in auxin, which induces differentiation of localized meristematic cells. The auxin typically used is 2,4-D. Once transferred to a medium with low or no auxin, these cells can then develop into mature embryos. Germination of the somatic embryo can only occur when it is mature enough to have functional root and shoot apices. [ 3 ] Factors and mechanisms controlling cell differentiation in somatic embryos are relatively ambiguous. Certain compounds excreted by plant tissue cultures and found in culture media have been shown to be necessary to coordinate cell division and morphological changes. [ 9 ] These compounds have been identified by Chung et al. [ 10 ] as various polysaccharides, amino acids, growth regulators, vitamins, low molecular weight compounds and polypeptides. Several signaling molecules known to influence or control the formation of somatic embryos have been found and include extracellular proteins, arabinogalactan proteins and lipochitooligosaccharides. Temperature and lighting can also affect the maturation of the somatic embryo.
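To make the five-stage regeneration sequence above easier to follow, the sketch below encodes each stage together with the auxin condition described in the text. The stage names follow the article, while the data structure and exact wording of the medium conditions are illustrative assumptions rather than a published protocol:

# Illustrative summary of the regeneration stages described above.
REGENERATION_STAGES = [
    ("initiation of embryogenic cultures",    "auxin-rich medium (typically 2,4-D)"),
    ("proliferation of embryogenic cultures", "auxin-rich medium (typically 2,4-D)"),
    ("prematuration of somatic embryos",      "auxin reduced or withdrawn"),
    ("maturation of somatic embryos",         "medium with low or no auxin"),
    ("plant development",                     "nonspecific media, as described above"),
]

for stage, medium in REGENERATION_STAGES:
    print(f"{stage}: {medium}")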
Applications of this process include: clonal propagation of genetically uniform plant material; elimination of viruses ; provision of source tissue for genetic transformation ; generation of whole plants from single cells called protoplasts ; development of synthetic seed technology. [ 1 ] The development of somatic embryogenesis procedures has given rise to research on seed storage proteins (SSPs) of woody plants for tree species of commercial importance, i.e., mainly gymnosperms , including white spruce . In this area of study, SSPs are used as markers to determine the embryogenic potential and competency of the embryogenic system to produce a somatic embryo biochemically similar to its zygotic counterpart (Flinn et al. 1991, Beardmore et al. 1997). [ 13 ] [ 14 ] Grossnickle et al. (1992) [ 15 ] compared interior spruce seedlings with emblings during nursery development and through a stock quality assessment program immediately before field outplanting. Seedling shoot height, root collar diameter, and dry weight increased at a greater rate in seedlings than in emblings during the first half of the first growing season, but thereafter shoot growth was similar among all plants. By the end of the growing season, seedlings were 70% taller than emblings, had greater root collar diameter, and greater shoot dry weight. Root dry weight increased more rapidly in seedlings than in emblings during the early growing season. During fall acclimation, the pattern of increasing dormancy release index and increasing tolerance to freezing was similar in both seedlings and emblings. Root growth capacity decreased then increased during fall acclimation, with the increase being greater in seedlings. Assessment of stock quality just prior to planting showed that: emblings had greater water use efficiency with decreasing predawn shoot water potential compared with seedlings; seedlings and emblings had similar water movement capability at both high and low root temperatures; net photosynthesis and needle conductance at low root temperatures were greater in seedlings than in emblings; and seedlings had greater root growth than emblings at a root temperature of 22 °C, but root growth among all plants was low at a root temperature of 7.5 °C. Growth and survival of interior spruce 313B Styroblock seedlings and emblings after outplanting on a reforestation site were determined by Grossnickle and Major (1992). [ 16 ] For both seedlings and emblings, osmotic potential at saturation (ψ sat ) and at the turgor loss point (ψ tlp ) increased from lows of -1.82 and -2.22 MPa, respectively, just prior to planting to seasonal highs of -1.09 and -1.21 MPa, respectively, during active shoot elongation. Thereafter, seedling and embling ψ sat and ψ tlp declined to -2.00 and -2.45 MPa, respectively, at the end of the growing season, which coincided with the steady decline in site temperatures and a cessation of height growth. In general, seedlings and emblings had similar ψ sat and ψ tlp values through the growing season, and also had similar shifts in seasonal patterns of maximum modulus of elasticity, symplastic fraction, and relative water content at the turgor loss point. Grossnickle and Major (1992) [ 16 ] found that year-old and current-year needles of both seedlings and emblings had a similar decline in needle conductance with increasing vapour pressure deficit.
Response surface models of current-year needle net photosynthesis (P n ) in response to vapour pressure deficit (VPD) and photosynthetically active radiation (PAR) showed that emblings had 15% greater P n at VPD of less than 3.0 kPa and PAR greater than 1000 μmol m −2 s −1 . Year-old and current-year needles of seedlings and emblings showed similar patterns of water use efficiency. Rates of shoot growth in seedlings and emblings through the growing season were also similar to one another. Seedlings had larger shoot systems both at the time of planting and at the end of the growing season. Seedlings also had greater root development than emblings through the growing season, but root:shoot ratios for the two stock types were similar at the end of the growing season, when the survival rates for seedlings and emblings were 96% and 99%, respectively. Understanding the formation of a somatic embryo through the establishment of morphological and molecular markers is important for construction of a fate map. The fate map is the foundation on which to build further research and experimentation. Two methods exist to construct a fate map: synchronous cell division and time-lapse tracking. The latter typically works more consistently, because the cell-cycle-altering chemicals and centrifugation required for synchronous cell division can perturb development. [ 17 ] Embryo development in angiosperms is divided into several steps. The zygote divides asymmetrically, forming a small apical cell and a large basal cell. The organizational pattern is formed in the globular stage and the embryo then transitions to the cotyledonary stage. [ 18 ] Embryo development differs in monocots and dicots. Dicots pass through the globular, heart-shaped, and torpedo stages while monocots pass through the globular, scutellar, and coleoptilar stages. [ 19 ] Many culture systems induce and maintain somatic embryogenesis by continuous exposure to 2,4-dichlorophenoxyacetic acid . Abscisic acid has been reported to induce somatic embryogenesis in seedlings. After callus formation, culturing on a low-auxin or hormone-free medium will promote somatic embryo growth and root formation. In monocots , embryogenic capability is usually restricted to tissues with an embryogenic or meristematic origin. Somatic cells of monocots differentiate quickly and then lose mitotic and morphogenic capability. Differences in auxin sensitivity of embryogenic callus growth between different genotypes of the same species show how variable auxin responses can be. [ 20 ] Carrot ( Daucus carota ) was the first and remains the best understood species with regard to the developmental pathways and molecular mechanisms of somatic embryogenesis. [ 17 ] Time-lapse tracking by Toonen et al. (1994) showed that competent cells vary in shape and cytoplasmic density. Five types of cells were identified in embryogenic suspension cultures: spherical cytoplasm-rich, spherical vacuolated, oval vacuolated, elongated vacuolated, and irregularly shaped cells. Each type of cell multiplied with a certain geometric symmetry. They developed into symmetrical, asymmetrical, and aberrantly shaped cell clusters that eventually formed embryos at different frequencies. [ 21 ] This indicates that an organized growth polarity does not always exist in somatic embryogenesis. [ 17 ] Embryo development in gymnosperms occurs in three phases. Proembryogeny includes all stages prior to suspensor elongation. Early embryogeny includes all stages after suspensor elongation but before root meristem development. Late embryogeny includes development of the root and shoot meristems.
[ 18 ] Time-lapse tracking in Norway spruce ( Picea abies ) revealed that neither single cytoplasm-rich cells nor vacuolated cells developed into embryos. Proembryogenic masses (PEMs), an intermediate between unorganized cells and an embryo, consist of cytoplasm-rich cells adjacent to a vacuolated cell and are stimulated with auxin and cytokinin . Gradual removal of auxin and cytokinin and introduction of abscisic acid (ABA) will allow an embryo to form. [ 17 ] Somatic embryogenesis has been considered for the mass production of vegetatively propagated conifer clones and for cryopreservation of germplasm . However, the use of this technology for reforestation and tree breeding of conifers is in its infancy. [ 22 ] [ 23 ]
https://en.wikipedia.org/wiki/Somatic_embryogenesis
Somatic evolution is the accumulation of mutations and epimutations in somatic cells (the cells of a body, as opposed to germ plasm and stem cells ) during a lifetime, and the effects of those mutations and epimutations on the fitness of those cells. This evolutionary process was first demonstrated by Bert Vogelstein 's studies of colon cancer. Somatic evolution is important in the process of aging as well as the development of some diseases, including cancer. Cells in pre-malignant and malignant neoplasms ( tumors ) evolve by natural selection . [ 1 ] [ 2 ] This accounts for how cancer develops from normal tissue and why it has been difficult to cure. There are three necessary and sufficient conditions for natural selection, all of which are met in a neoplasm: there must be variation in the population; the variation must be heritable; and the variation must affect survival or reproduction (fitness). Cells in neoplasms compete for resources, such as oxygen and glucose, as well as space. Thus, a cell that acquires a mutation that increases its fitness will generate more daughter cells than competitor cells that lack that mutation. In this way, a population of mutant cells, called a clone, can expand in the neoplasm. Clonal expansion is the signature of natural selection in cancer. Cancer therapies act as a form of artificial selection, killing sensitive cancer cells but leaving behind resistant cells . Often the tumor will regrow from those resistant cells, the patient will relapse, and the therapy that had been previously used will no longer kill the cancer cells. This selection for resistance is similar to repeatedly spraying crops with a pesticide, selecting for resistant pests until the pesticide is no longer effective. Modern descriptions of biological evolution will typically elaborate on major contributing factors to evolution such as the formation of local micro-environments, mutational robustness, molecular degeneracy , and cryptic genetic variation. [ 4 ] Many of these contributing factors in evolution have been isolated and described for cancer. [ 5 ] Cancer is a classic example of what evolutionary biologists call multilevel selection : at the level of the organism, cancer is usually fatal, so there is selection for genes and the organization of tissues [ 6 ] [ 7 ] that suppress cancer. At the level of the cell, there is selection for increased cell proliferation and survival, such that a mutant cell that acquires one of the hallmarks of cancer [ 3 ] (see below) will have a competitive advantage over cells that have not acquired the hallmark. Thus, at the level of the cell there is selection for cancer. The earliest ideas about neoplastic evolution come from Boveri , [ 8 ] who proposed that tumors originated in chromosomal abnormalities passed on to daughter cells. In the decades that followed, cancer was recognized as having a clonal origin associated with chromosomal aberrations. [ 9 ] [ 10 ] [ 11 ] [ 12 ] Early mathematical modeling of cancer, by Armitage and Doll , set the stage for the future development of the somatic evolutionary theory of cancer. Armitage and Doll explained the cancer incidence data, as a function of age, as a process of the sequential accumulation of somatic mutations (or other rate-limiting steps). [ 13 ] Advances in cytogenetics facilitated discovery of chromosome abnormalities in neoplasms, including the Philadelphia chromosome in chronic myelogenous leukemia [ 14 ] and translocations in acute myeloblastic leukemia. [ 15 ] Sequences of karyotypes replacing one another in a tumor were observed as it progressed.
[ 16 ] [ 17 ] [ 18 ] Researchers hypothesized that cancer evolves in a sequence of chromosomal mutations and selection [ 6 ] [ 17 ] [ 19 ] [ 20 ] and that therapy may further select clones. [ 12 ] In 1971, Knudson published the two-hit hypothesis for mutation and cancer based on statistical analysis of inherited and sporadic cases of retinoblastoma. [ 21 ] He postulated that retinoblastoma developed as a consequence of two mutations, one of which could be inherited or somatic, followed by a second somatic mutation. Cytogenetic studies localized the region to the long arm of chromosome 13, and molecular genetic studies demonstrated that tumorigenesis was associated with chromosomal mechanisms, such as mitotic recombination or non-disjunction, that could lead to homozygosity of the mutation. [ 22 ] The retinoblastoma gene became, in 1986, the first tumor suppressor gene to be cloned. In 1975, Cairns hypothesized a different, but complementary, mechanism of tumor suppression based on tissue architecture, which protects against the selection of variant somatic cells with increased fitness in proliferating epithelial populations, such as the intestine and other epithelial organs. [ 6 ] He postulated that this could be accomplished by restricting the number of stem cells, for example at the base of intestinal crypts, and by restraining the opportunities for competition between cells by shedding differentiated intestinal cells into the gut. The essential predictions of this model have been confirmed, although mutations in some tumor suppressor genes, including CDKN2A (p16), predispose to clonal expansions that encompass large numbers of crypts in some conditions such as Barrett's esophagus. He also postulated an immortal DNA strand that is discussed at Immortal DNA strand hypothesis . Nowell synthesized the evolutionary view of cancer in 1976 as a process of genetic instability and natural selection. [ 1 ] Most of the alterations that occur are deleterious for the cell, and those clones will tend to go extinct, but occasional selectively advantageous mutations arise that lead to clonal expansions. This theory predicts a unique genetic composition in each neoplasm due to the random process of mutations, genetic polymorphisms in the human population, and differences in the selection pressures of the neoplasm's microenvironment. Interventions are predicted to have varying results in different patients. More importantly, the theory predicts the emergence of resistant clones under the selective pressures of therapy. Since 1976, researchers have identified clonal expansions [ 23 ] [ 24 ] [ 25 ] [ 26 ] [ 27 ] [ 28 ] and genetic heterogeneity [ 29 ] [ 30 ] [ 31 ] [ 32 ] [ 33 ] [ 34 ] within many different types of neoplasms. There are multiple levels of genetic heterogeneity associated with cancer, including single nucleotide polymorphisms (SNPs), [ 35 ] sequence mutations, [ 30 ] microsatellite shifts [ 29 ] and instability, [ 36 ] loss of heterozygosity (LOH), [ 34 ] copy number variation (detected both by comparative genomic hybridization (CGH) [ 31 ] and array CGH [ 37 ] ), and karyotypic variations including chromosome structural aberrations and aneuploidy. [ 32 ] [ 33 ] [ 38 ] [ 39 ] [ 40 ] Studies of this issue have focused mainly at the gene mutation level, as copy number variation, LOH and specific chromosomal translocations are explained in the context of gene mutation. It is thus necessary to integrate multiple levels of genetic variation in the context of complex systems and multilevel selection.
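The multistage picture introduced by Armitage and Doll, and sharpened by Knudson's two-hit analysis, can be stated compactly. The expression below is the standard textbook approximation rather than a formula quoted from the original papers: if malignant transformation of a cell requires $k$ independent rate-limiting mutations occurring at small rates $u_1, \dots, u_k$ per unit time, the age-specific incidence is approximately

$$ I(t) \;\approx\; N \, u_1 u_2 \cdots u_k \, \frac{t^{k-1}}{(k-1)!}, $$

where $N$ is the number of cells at risk. A log-log plot of incidence against age is then roughly a straight line of slope $k-1$, which is how the number of rate-limiting steps was estimated from epidemiological data. In Knudson's terms, sporadic retinoblastoma requires two somatic hits ($k = 2$), whereas carriers of an inherited mutation need only one further somatic hit ($k = 1$), which predicts the earlier onset and the frequently bilateral tumors seen in familial cases.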
System instability is a major contributing factor to genetic heterogeneity. [ 41 ] For the majority of cancers, genome instability is reflected in a high frequency of mutations in the whole genome DNA sequence (not just the protein coding regions, which make up only 1.5% of the genome [ 42 ] ). In whole genome sequencing of different types of cancers, large numbers of mutations were found in two breast cancers (about 20,000 point mutations [ 43 ] ), 25 melanomas (9,000 to 333,000 point mutations [ 44 ] ) and a lung cancer (50,000 point mutations and 54,000 small additions and deletions [ 45 ] ). Genome instability is also referred to as an enabling characteristic for achieving the endpoints of cancer evolution. [ 3 ] Many somatic evolutionary studies have traditionally focused on clonal expansion, as recurrent types of changes can be traced with available methods to illustrate the evolutionary path. Recent studies from both direct DNA sequencing and karyotype analysis illustrate the importance of the high level of heterogeneity in somatic evolution. For the formation of solid tumors, there is an involvement of multiple cycles of clonal and non-clonal expansion. [ 39 ] [ 46 ] Even at the typical clonal expansion phase, there are significant levels of heterogeneity within the cell population; however, most of this heterogeneity is under-detected when mixed populations of cells are used for molecular analysis. In solid tumors, a majority of gene mutations are not recurrent types, [ 47 ] and neither are the karyotypes. [ 39 ] [ 41 ] These analyses offer an explanation for the finding that there are no common mutations shared by most cancers. [ 48 ] The state of a cell may be changed epigenetically , in addition to genetic alterations. The best-understood epigenetic alterations in tumors are the silencing or expression of genes by changes in the methylation of CG pairs of nucleotides in the promoter regions of the genes. These methylation patterns are copied to the new chromosomes when cells replicate their genomes, and so methylation alterations are heritable and subject to natural selection. Methylation changes are thought to occur more frequently than mutations in the DNA, and so may account for many of the changes during neoplastic progression (the process by which normal tissue becomes cancerous), in particular in the early stages. For instance, when loss of expression of the DNA repair protein MGMT occurs in a colon cancer, it is caused by a mutation only about 4% of the time, while in most cases the loss is due to methylation of its promoter region. [ 49 ] Similarly, when loss of expression of the DNA repair protein PMS2 occurs in colon cancer, it is caused by a mutation about 5% of the time, while in most cases loss of expression is due to methylation of the promoter of its pairing partner MLH1 (PMS2 is unstable in the absence of MLH1). [ 50 ] Epigenetic changes in progression interact with genetic changes. For example, epigenetic silencing of genes responsible for the repair of mispairs or damages in the DNA (e.g. MLH1 or MSH2 ) results in an increase of genetic mutations. Deficiency of the DNA repair proteins PMS2 , MLH1 , MSH2 , MSH3 , MSH6 or BRCA2 can cause up to 100-fold increases in mutation frequency. [ 51 ] [ 52 ] [ 53 ] Epigenetic deficiencies in DNA repair gene protein expression have been found in many cancers, though not all deficiencies have been evaluated in all cancers.
Epigenetically deficient DNA repair proteins include BRCA1 , WRN , MGMT , MLH1 , MSH2 , ERCC1 , PMS2 , XPF, P53 , PCNA and OGG1 , and these are found to be deficient at frequencies of 13% to 100% in different cancers. [ citation needed ] (Also see Frequencies of epimutations in DNA repair genes .) In addition to the well-studied epigenetic promoter methylation, more recently there have been substantial findings of epigenetic alterations in cancer due to changes in histone and chromatin architecture and to alterations in the expression of microRNAs (microRNAs either cause degradation of messenger RNAs or block their translation ). [ 54 ] For instance, hypomethylation of the promoter for microRNA miR-155 increases expression of miR-155, and this increased miR-155 targets the DNA repair genes MLH1, MSH2 and MSH6, causing each of them to have reduced expression. [ 55 ] In cancers, loss of expression of genes occurs about 10 times more frequently by transcription silencing (caused by somatically heritable promoter hypermethylation of CpG islands) than by mutations. As Vogelstein et al. point out, in a colorectal cancer there are usually about 3 to 6 driver mutations and 33 to 66 hitchhiker or passenger mutations. [ 56 ] In contrast, in colon tumors compared to adjacent normal-appearing colonic mucosa, there are about 600 to 800 somatically heritable heavily methylated CpG islands in promoters of genes in the tumors, while these CpG islands are not methylated in the adjacent mucosa. [ 57 ] [ 58 ] [ 59 ] Methylation of the cytosine of CpG dinucleotides is a somatically heritable and conserved regulatory mark that is generally associated with transcriptional repression. CpG islands keep their overall unmethylated state (or methylated state) extremely stably through multiple cell generations. [ 60 ] One common feature of neoplastic progression is the expansion of a clone with a genetic or epigenetic alteration. This may be a matter of chance, but is more likely due to the expanding clone having a competitive advantage (either a reproductive or survival advantage) over other cells in the tissue. Since clones often have many genetic and epigenetic alterations in their genomes, it is often not clear which of those alterations cause a reproductive or survival advantage and which other alterations are simply hitchhikers or passenger mutations (see Glossary below) on the clonal expansion. Clonal expansions are most often associated with the loss of the p53 (TP53) or p16 (CDKN2A/INK4a) tumor suppressor genes. In lung cancer, a clone with a p53 mutation was observed to have spread over the surface of one entire lung and into the other lung. [ 27 ] In bladder cancer, clones with loss of p16 were observed to have spread over the entire surface of the bladder. [ 61 ] [ 62 ] Likewise, large expansions of clones with loss of p16 have been observed in the oral cavity [ 24 ] and in Barrett's esophagus . [ 25 ] Clonal expansions associated with inactivation of p53 have also appeared in skin, [ 23 ] [ 63 ] Barrett's esophagus , [ 25 ] brain, [ 64 ] and kidney. [ 65 ] Further clonal expansions have been observed in the stomach, [ 66 ] bladder, [ 67 ] colon, [ 68 ] lung, [ 69 ] hematopoietic (blood) cells, [ 70 ] and prostate. [ 71 ] These clonal expansions are important for at least two reasons. First, they generate a large target population of mutant cells and so increase the probability that the multiple mutations necessary to cause cancer will be acquired within that clone.
Second, in at least one case, the size of the clone with loss of p53 has been associated with an increased risk of a pre-malignant tumor becoming cancerous. [ 72 ] It is thought that the process of developing cancer involves successive waves of clonal expansions within the tumor. [ 73 ] The term "field cancerization" was first used in 1953 to describe an area or "field" of epithelium that has been preconditioned by (at that time) largely unknown processes so as to predispose it towards development of cancer. [ 74 ] Since then, the terms "field cancerization" and "field defect" have been used to describe pre-malignant tissue in which new cancers are likely to arise. Field defects, for example, have been identified in most of the major areas subject to tumorigenesis in the gastrointestinal (GI) tract. [ 75 ] Cancers of the GI tract that are shown to be due, to some extent, to field defects include head and neck squamous cell carcinoma (HNSCC) , oropharyngeal/laryngeal cancer , esophageal adenocarcinoma and esophageal squamous-cell carcinoma , gastric cancer , bile duct cancer , pancreatic cancer , small intestine cancer and colon cancer . In the colon , a field defect probably arises by natural selection of a mutant or epigenetically altered cell among the stem cells at the base of one of the intestinal crypts on the inside surface of the colon. A mutant or epigenetically altered stem cell, if it has a selective advantage, could replace the other nearby stem cells by natural selection. This can cause a patch of abnormal tissue, or field defect. The figure in this section includes a photo of a freshly resected and lengthwise-opened segment of the colon that may represent a large field defect in which there is a colon cancer and four polyps . The four polyps, in addition to the cancer, may represent sub-clones with proliferative advantages. The sequence of events giving rise to this possible field defect are indicated below the photo. The schematic diagram shows a large area in yellow indicating a large patch of mutant or epigenetically altered cells that formed by clonal expansion of an initial cell based on a selective advantage. Within this first large patch, a second such mutation or epigenetic alteration may have occurred so that a given stem cell acquired an additional selective advantage compared to the other stem cells within the patch, and this altered stem cell expanded clonally forming a secondary patch, or sub-clone, within the original patch. This is indicated in the diagram by four smaller patches of different colors within the large yellow original area. Within these new patches (sub-clones), the process may have been repeated multiple times, indicated by the still smaller patches within the four secondary patches (with still different colors in the diagram) which clonally expanded, until a stem cell arose that generated either small polyps (which may be benign neoplasms ) or else a malignant neoplasm (cancer). These neoplasms are also indicated, in the diagram below the photo, by 4 small tan circles (polyps) and a larger red area (cancer). The cancer in the photo occurred in the cecal area of the colon, where the colon joins the small intestine (labeled) and where the appendix occurs (labeled). The fat in the photo is external to the outer wall of the colon. In the segment of colon shown here, the colon was cut open lengthwise to expose the inner surface of the colon and to display the cancer and polyps occurring within the inner epithelial lining of the colon. 
Phylogenetics may be applied to cells in tumors to reveal the evolutionary relationships between cells, just as it is used to reveal evolutionary relationships between organisms and species. Shibata, Tavare and colleagues have exploited this to estimate the time between the initiation of a tumor and its detection in the clinic. [ 29 ] Louhelainen et al. have used parsimony to reconstruct the relationships between biopsy samples based on loss of heterozygosity. [ 76 ] Phylogenetic trees should not be confused with oncogenetic trees, [ 77 ] which represent the common sequences of genetic events during neoplastic progression and do not represent the relationships of common ancestry that are essential to a phylogeny. For an up-to-date review in this field, see Bast 2012. [ 78 ] An adaptive landscape is a hypothetical topological landscape upon which evolution is envisioned to take place. It is similar to Wright's fitness landscape [ 79 ] [ 80 ] in which the location of each point represents the genotype of an organism and the altitude represents the fitness of that organism in the current environment. However, unlike Wright's rigid landscape, the adaptive landscape is pliable. It readily changes shape with changes in population densities and in the survival/reproductive strategies used within and among the various species. Wright's shifting balance theory of evolution combines genetic drift (random sampling error in the transmission of genes) and natural selection to explain how multiple peaks on a fitness landscape could be occupied or how a population can achieve a higher peak on this landscape. This theory, based on the assumption of density-dependent selection as the principal form of selection, results in a fitness landscape that is relatively rigid. A rigid landscape is one that does not change in response to even large changes in the position and composition of strategies along the landscape. In contrast to the fitness landscape, the adaptive landscape is constructed assuming that both density- and frequency-dependent selection are involved (selection is frequency-dependent when the fitness of a species depends not only on that species' strategy but also on the strategies of all other species). As such, the shape of the adaptive landscape can change drastically in response to even small changes in strategies and densities. [ 81 ] The flexibility of adaptive landscapes provides several ways for natural selection to cross valleys and occupy multiple peaks without populations having to make large changes in their strategies. Within the context of differential or difference equation models for population dynamics, an adaptive landscape may actually be constructed using a fitness generating function . [ 82 ] If a given species is able to evolve, it will, over time, "climb" the adaptive landscape toward a fitness peak through gradual changes in its mean phenotype according to a strategy dynamic that involves the slope of the adaptive landscape. Because the adaptive landscape is not rigid and can change shape during the evolutionary process, it is possible that a species may be driven to a maximum, minimum, or saddle point on the adaptive landscape. A population at a global maximum on the adaptive landscape corresponds to an evolutionarily stable strategy (ESS) and will become dominant, driving all others toward extinction.
Populations at a minimum or saddle point are not resistant to invasion, so that the introduction of a slightly different mutant strain may continue the evolutionary process toward unoccupied local maxima. The adaptive landscape provides a useful tool for studying somatic evolution as it can describe the process of how a mutant cell evolves from a small tumor to an invasive cancer. Understanding this process in terms of the adaptive landscape may lead to the control of cancer through external manipulation of the shape of the landscape. [ 83 ] [ 84 ] In their landmark paper, The Hallmarks of Cancer , [ 3 ] Hanahan and Weinberg suggest that cancer can be described by a small number of underlying principles, despite the complexities of the disease. The authors describe how tumor progression proceeds via a process analogous to Darwinian evolution, where each genetic change confers a growth advantage to the cell. These genetic changes can be grouped into six "hallmarks", which drive a population of normal cells to become a cancer. The six hallmarks are: Genetic instability is defined as an "enabling characteristic" that facilitates the acquisition of other mutations due to defects in DNA repair. The hallmark "self-sufficiency in growth signals" describes the observation that tumor cells produce many of their own growth signals and thereby no longer rely on proliferation signals from the micro-environment. Normal cells are maintained in a nondividing state by antigrowth signals, which cancer cells learn to evade through genetic changes producing "insensitivity to antigrowth signals". A normal cell initiates programmed cell death (apoptosis) in response to signals such as DNA damage, oncogene overexpression, and survival factor insufficiency, but a cancer cell learns to "evade apoptosis", leading to the accumulation of aberrant cells. Most mammalian cells can replicate a limited number of times due to progressive shortening of telomeres; virtually all malignant cancer cells gain an ability to maintain their telomeres, conferring "limitless replicative potential". As cells cannot survive at distances of more than 100 μm from a blood supply, cancer cells must initiate the formation of new blood vessels to support their growth via the process of "sustained angiogenesis". During the development of most cancers, primary tumor cells acquire the ability to undergo "invasion and metastasis" whereby they migrate into the surrounding tissue and travel to distant sites in the body, forming secondary tumors. The pathways that cells take toward becoming malignant cancers are variable, and the order in which the hallmarks are acquired can vary from tumor to tumor. The early genetic events in tumorigenesis are difficult to measure clinically, but can be simulated according to known biology. [ 85 ] Macroscopic tumors are now beginning to be described in terms of their underlying genetic changes, providing additional data to refine the framework described in The Hallmarks of Cancer. The theory about the monoclonal origin of cancer states that, in general, neoplasms arise from a single cell of origin. [ 1 ] While it is possible that certain carcinogens may mutate more than one cell at once, the tumor mass usually represents progeny of a single cell, or very few cells. [ 1 ] A series of mutations is required in the process of carcinogenesis for a cell to transition from being normal to pre-malignant and then to a cancer cell. 
[ 86 ] The mutated genes usually belong to the caretaker, gatekeeper, landscaper or several other gene classes. Mutation ultimately leads to acquisition of the ten hallmarks of cancer. The first malignant cell, which gives rise to the tumor, is often labeled a cancer stem cell. [ 87 ] The cancer stem-cell hypothesis relies on the fact that many tumors are heterogeneous – the cells in the tumor vary by phenotype and function. [ 87 ] [ 88 ] [ 89 ] Current research shows that in many cancers there is an apparent hierarchy among cells. [ 87 ] [ 88 ] [ 89 ] In general, there is a small population of cells in the tumor – about 0.2%–1% [ 88 ] – that exhibits stem cell-like properties. These cells have the ability to give rise to a variety of cells in tumor tissue, to self-renew indefinitely, and, upon transfer, to form new tumors. According to the hypothesis, cancer stem cells are the only cells capable of tumorigenesis – the initiation of a new tumor. [ 87 ] The cancer stem cell hypothesis might explain such phenomena as metastasis and remission . The monoclonal model of cancer and the cancer stem-cell model are not mutually exclusive. [ 87 ] A cancer stem cell arises by clonal evolution as a result of selection for the cell with the highest fitness in the neoplasm . In this way, the heterogeneous nature of a neoplasm can be explained by two processes – clonal evolution, or the hierarchical differentiation of cells regulated by cancer stem cells. [ 87 ] All cancers arise as a result of somatic evolution, but only some of them fit the cancer stem cell hypothesis. [ 87 ] The evolutionary processes do not cease when a population of cancer stem cells arises in a tumor. Cancer treatment drugs exert a strong selective force on all types of cells in tumors, including cancer stem cells, which would be forced to evolve resistance to the treatment. Cancer stem cells do not always have to have the highest resistance among the cells in the tumor to survive chemotherapy and re-emerge afterwards. The surviving cells might be in a special microenvironment , which protects them from the adverse effects of treatment. [ 87 ] It is currently unclear whether cancer stem cells arise from adult stem cell transformation, a maturation arrest of progenitor cells , or dedifferentiation of mature cells. [ 88 ] Therapeutic resistance has been observed in virtually every form of therapy, since the beginning of cancer therapy. [ 90 ] In most cases, therapies appear to select for mutations in the genes or pathways targeted by the drug. Some of the first evidence for a genetic basis of acquired therapeutic resistance came from studies of methotrexate. Methotrexate inhibits the enzyme dihydrofolate reductase (DHFR). However, methotrexate therapy appears to select for cells with extra copies (amplification) of the DHFR gene, which are resistant to methotrexate. This was seen in both cell culture [ 91 ] and samples from tumors in patients that had been treated with methotrexate. [ 92 ] [ 93 ] [ 94 ] [ 95 ] A common cytotoxic chemotherapy used in a variety of cancers, 5-fluorouracil (5-FU), targets the TYMS pathway, and resistance can evolve through the evolution of extra copies of TYMS, thereby diluting the drug's effect. [ 96 ] In the case of Gleevec (Imatinib), which targets the BCR-ABL fusion gene in chronic myeloid leukemia , resistance often develops through a mutation that changes the shape of the binding site of the drug.
[ 97 ] [ 98 ] Sequential application of drugs can lead to the sequential evolution of resistance mutations to each drug in turn. [ 99 ] Gleevec is not as selective as was originally thought. It turns out that it targets other tyrosine kinases and can be used to control gastrointestinal stromal tumors (GISTs) that are driven by mutations in c-KIT. However, patients with GIST sometimes relapse with additional mutations in c-KIT that make the cancer cells resistant to Gleevec. [ 100 ] [ 101 ] Gefitinib (Iressa) and Erlotinib (Tarceva) are epidermal growth factor receptor (EGFR) tyrosine kinase inhibitors used for non-small cell lung cancer patients whose tumors have somatic mutations in EGFR. However, most patients' tumors eventually become resistant to these drugs. Two major mechanisms of acquired resistance have been discovered in patients who have developed clinical resistance to Gefitinib or Erlotinib: [ 102 ] point mutations in the EGFR gene targeted by the drugs, [ 103 ] and amplification of MET, another receptor tyrosine kinase, which can bypass EGFR to activate downstream signaling in the cell. In an initial study, 22% of tumors with acquired resistance to Gefitinib or Erlotinib had MET amplification. [ 104 ] To address these issues, clinical trials are currently assessing irreversible EGFR inhibitors (which inhibit growth even in cell lines with mutations in EGFR), the combination of EGFR and MET kinase inhibitors, and Hsp90 inhibitors (EGFR and MET both require Hsp90 proteins to fold properly). In addition, taking repeated tumor biopsies from patients as they develop resistance to these drugs would help in understanding the tumor dynamics. Selective estrogen receptor modulators (SERMs) are a commonly used adjuvant therapy in estrogen-receptor positive (ERα+) breast cancer and a preventive treatment for women at high risk of the disease. There are several possible mechanisms of SERM resistance, though the relative clinical importance of each is debated. These include: [ 105 ] [ 106 ] Most prostate cancers derive from cells that are stimulated to proliferate by androgens. Most prostate cancer therapies are therefore based on removing or blocking androgens. Mutations in the androgen receptor (AR) have been observed in anti-androgen resistant prostate cancer that make the AR hypersensitive to the low levels of androgens that remain after therapy. [ 111 ] Likewise, extra copies of the AR gene (amplification) have been observed in anti-androgen resistant prostate cancer. [ 112 ] These additional copies of the gene are thought to make the cells hypersensitive to low levels of androgens and so allow them to proliferate under anti-androgen therapy. Resistance to radiotherapy is also commonly observed. However, to date, comparisons of malignant tissue before and after radiotherapy have not been done to identify genetic and epigenetic changes selected by exposure to radiation. In gliomas , a form of brain cancer, radiation therapy appears to select for stem cells, [ 113 ] [ 114 ] though it is unclear if the tumor returns to the pre-therapy proportion of cancer stem cells after therapy or if radiotherapy selects for an alteration that keeps the glioma cells in the stem cell state. Cancer drugs and therapies commonly used today are evolutionarily inert and represent a strong selection force, which leads to drug resistance. [ 115 ] A possible way to avoid that is to use a treatment agent that would co-evolve alongside cancer cells.
Anoxic bacteria could be used as competitors or predators in hypoxic environments within tumors. [ 115 ] Scientists have been interested in the idea of using anoxic bacteria for over 150 years, but until recently there has been little progress in that field. According to Jain and Forbes, several requirements have to be met by a bacterial strain for it to qualify as an efficient anticancer bacterium: [ 116 ] In the process of the treatment, cancer cells are most likely to evolve some form of resistance to the bacterial treatment. However, being living organisms, bacteria would coevolve with tumor cells, potentially eliminating the possibility of resistance. [ 116 ] Since bacteria prefer an anoxic environment, they are not efficient at eliminating cells on the periphery of the tumor, where the oxygen supply is ample. A combination of bacterial treatment with chemical drugs will increase the chances of destroying the tumor. [ 116 ] Oncolytic viruses are engineered to infect cancerous cells. Limitations of that method include the immune response to the virus and the possibility of the virus evolving into a pathogen . [ 115 ] By manipulating the tumor environment, it is possible to create favorable conditions for the cells with the least resistance to chemotherapy drugs to become more fit and outcompete the rest of the population. The chemotherapy, administered directly afterwards, should then wipe out the predominant tumor cells. [ 115 ] Mapping between common terms from cancer biology and evolutionary biology:
https://en.wikipedia.org/wiki/Somatic_evolution_in_cancer
Somatic fusion , also called protoplast fusion , is a type of genetic modification in plants by which two distinct species of plants are fused together to form a new hybrid plant with the characteristics of both, a somatic hybrid . [ 1 ] Hybrids have been produced either between different varieties of the same species (e.g. between non-flowering potato plants and flowering potato plants) or between two different species (e.g. between wheat Triticum and rye Secale to produce Triticale ). Uses of somatic fusion include developing plants resistant to disease, such as making potato plants resistant to potato leaf roll disease . [ 2 ] Through somatic fusion, the crop potato plant Solanum tuberosum – the yield of which is severely reduced by a viral disease transmitted by the aphid vector – is fused with the wild, non-tuber-bearing potato Solanum brevidens , which is resistant to the disease. The resulting hybrid has the chromosomes of both plants and is thus similar to polyploid plants. Somatic hybridization was first introduced by Carlson et al. in Nicotiana glauca . [ 3 ] The somatic fusion process occurs in four steps: [ 4 ] removal of the cell wall to produce protoplasts, typically by enzymatic digestion; fusion of the protoplasts, for example by a brief electric shock or a chemical fusogen; selection or identification of the fused hybrid cells; and regeneration of whole plants from the hybrids, usually via a callus stage. In contrast to the procedure for seed plants described above, fusion of moss protoplasts can be initiated without an electric shock, by the use of polyethylene glycol (PEG). Further, moss protoplasts do not need phytohormones for regeneration , and they do not form a callus . [ 5 ] Instead, regenerating moss protoplasts behave like germinating moss spores . [ 6 ] Of further note, sodium nitrate and calcium ions at high pH can be used, although results are variable depending on the organism. [ 7 ] Somatic cells of different types can be fused to obtain hybrid cells. Hybrid cells are useful in a variety of ways; for example, chromosome mapping through somatic cell hybridization is essentially based on the fusion of human and mouse somatic cells. Generally, human fibrocytes or leucocytes are fused with mouse continuous cell lines . When human and mouse cells (or cells of any two mammalian species, or of the same species) are mixed, spontaneous cell fusion occurs at a very low rate (about 10 −6 ). Cell fusion is enhanced 100 to 1000 times by the addition of ultraviolet-inactivated Sendai (parainfluenza) virus or polyethylene glycol (PEG). These agents adhere to the plasma membranes of cells and alter their properties in such a way that facilitates their fusion. Fusion of two cells produces a heterokaryon, i.e., a single hybrid cell with two nuclei, one from each of the cells entering the fusion. Subsequently, the two nuclei also fuse to yield a hybrid cell with a single nucleus. A generalized scheme for somatic cell hybridization may be described as follows. Appropriate human and mouse cells are selected and mixed together in the presence of inactivated Sendai virus or PEG to promote cell fusion. After a period of time, the cells (a mixture of human, mouse and hybrid cells) are plated on a selective medium , e.g., HAT medium , which allows the multiplication of hybrid cells only. Several clones (each derived from a single hybrid cell) of the hybrid cells are thus isolated and subjected to both cytogenetic and appropriate biochemical analyses for the detection of the enzyme, protein or trait under investigation. An attempt is then made to correlate the presence and absence of the trait with the presence and absence of a particular human chromosome in the hybrid clones.
If there is a perfect correlation between the presence and absence of a human chromosome and that of a trait in the hybrid clones, the gene governing the trait is taken to be located on the chromosome concerned. The HAT medium is one of several selective media used for the selection of hybrid cells. This medium is supplemented with hypoxanthine , aminopterin and thymidine , hence the name HAT medium. The antimetabolite aminopterin blocks the cellular biosynthesis of purines and pyrimidines from simple sugars and amino acids . However, normal human and mouse cells can still multiply, as they can utilize the hypoxanthine and thymidine present in the medium through a salvage pathway , which ordinarily recycles the purines and pyrimidines produced from the degradation of nucleic acids . Hypoxanthine is converted into guanine by the enzyme hypoxanthine-guanine phosphoribosyltransferase (HGPRT), while thymidine is phosphorylated by thymidine kinase (TK); both HGPRT and TK are enzymes of the salvage pathway. On a HAT medium, only those cells that have active HGPRT (HGPRT+) and TK (TK+) enzymes can proliferate, while those deficient in these enzymes (HGPRT- and/or TK-) cannot divide (since they cannot produce purines and pyrimidines due to the aminopterin present in the HAT medium). For using HAT medium as a selective agent, the human cells used for fusion must be deficient for either the enzyme HGPRT or TK, while the mouse cells must be deficient for the other enzyme of this pair. Thus, one may fuse HGPRT-deficient human cells (designated TK+ HGPRT-) with TK-deficient mouse cells (denoted TK- HGPRT+). Their fusion products (hybrid cells) will be TK+ (due to the human gene ) and HGPRT+ (due to the mouse gene) and will multiply on the HAT medium, while the parental human and mouse cells will fail to do so. Experiments with other selective media can be planned in a similar fashion. Note: the table lists only a few examples; there are many more crosses. The possibilities of this technology are great; however, not all species are easily put into protoplast culture.
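The selection logic described above reduces to a simple rule: on HAT medium a cell proliferates only if it carries at least one functional copy of each salvage-pathway enzyme, HGPRT and TK. The sketch below is a minimal illustration of that rule and of why only the human-mouse hybrids survive; it is an explanatory toy, not a model of any particular experiment.

# Minimal sketch of HAT-medium selection: aminopterin blocks de novo
# synthesis, so a cell grows only if it has BOTH salvage enzymes,
# HGPRT (to use hypoxanthine) and TK (to use thymidine).
def grows_on_HAT(hgprt_positive, tk_positive):
    return hgprt_positive and tk_positive

# Parental cells chosen as in the example above:
human_parent = {"HGPRT": False, "TK": True}   # TK+ HGPRT-  -> dies on HAT
mouse_parent = {"HGPRT": True,  "TK": False}  # TK- HGPRT+  -> dies on HAT

# A hybrid cell complements the two deficiencies and therefore survives:
hybrid = {"HGPRT": human_parent["HGPRT"] or mouse_parent["HGPRT"],
          "TK":    human_parent["TK"]    or mouse_parent["TK"]}

for name, cell in [("human parent", human_parent),
                   ("mouse parent", mouse_parent),
                   ("hybrid", hybrid)]:
    print(name, "grows on HAT:", grows_on_HAT(cell["HGPRT"], cell["TK"]))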
https://en.wikipedia.org/wiki/Somatic_fusion
The somatic marker hypothesis , formulated by Antonio Damasio and associated researchers, proposes that emotional processes guide (or bias) behavior , particularly decision-making. [ 1 ] [ 2 ] "Somatic markers" are feelings in the body that are associated with emotions, such as the association of rapid heartbeat with anxiety or of nausea with disgust . According to the hypothesis, somatic markers strongly influence subsequent decision-making. Within the brain, somatic markers are thought to be processed in the ventromedial prefrontal cortex (vmPFC) and the amygdala . The hypothesis has been tested in experiments using the Iowa gambling task . In economic theory , human decision-making is often modeled as being devoid of emotions, involving only logical reasoning based on cost-benefit calculations . [ 3 ] In contrast, the somatic marker hypothesis proposes that emotions play a critical role in the ability to make fast, rational decisions in complex and uncertain situations. [ 1 ] Patients with frontal lobe damage, such as Phineas Gage , provided the first evidence that the frontal lobes were associated with decision-making. Frontal lobe damage, particularly to the vmPFC, results in impaired abilities to organize and plan behavior and to learn from previous mistakes, without affecting intellect in terms of working memory , attention , and language comprehension and expression . [ 4 ] [ 5 ] vmPFC patients also have difficulty expressing and experiencing appropriate emotions. This led Antonio Damasio to hypothesize that decision-making deficits following vmPFC damage result from an inability to use emotions to help guide future behavior based on past experiences. Consequently, vmPFC damage forces those affected to rely on slow and laborious cost-benefit analyses for every given choice situation. [ 6 ] When individuals make decisions , they must assess the incentive value of the choices available to them, using cognitive and emotional processes. When individuals face complex and conflicting choices, they may be unable to decide using only cognitive processes, which may become overloaded. Emotions, consequently, are hypothesized to guide decision-making. Emotions, as defined by Damasio, are changes in both body and brain states in response to stimuli. [ 1 ] Physiological changes (such as muscle tone , heart rate , endocrine activity , posture , facial expression , and so forth) occur in the body and are relayed to the brain, where they are transformed into an emotion that tells the individual something about the stimulus that they have encountered. Over time, emotions and their corresponding bodily changes, which are called "somatic markers", become associated with particular situations and their past outcomes. When making subsequent decisions, these somatic markers and their evoked emotions are consciously or unconsciously associated with their past outcomes, and influence decision-making in favor of some behaviors instead of others. [ 1 ] For instance, when a somatic marker associated with a positive outcome is perceived, the person may feel happy and thereby motivated to pursue that behavior. When a somatic marker associated with a negative outcome is perceived, the person may feel sad, which acts as an internal alarm to warn the individual to avoid that course of action. These situation-specific somatic states are based on, and reinforced by, past experiences; they help to guide behavior in favor of more advantageous choices, and are therefore adaptive.
According to the hypothesis, two distinct pathways reactivate somatic marker responses. In the first pathway, emotion can be evoked by changes in the body that are projected to the brain – called the "body loop". For instance, encountering a feared object like a snake may initiate the fight-or-flight response and cause fear. In the second pathway, cognitive representations of the emotions (imagining an unpleasant situation "as if" you were in that particular situation) can be activated in the brain without being directly elicited by a sensory stimulus – called the " as-if body loop ". Thus, the brain can anticipate expected bodily changes, which allows the individual to respond faster to external stimuli without waiting for an event to actually occur. [ 4 ] The amygdala and vmPFC (a subsection of the orbital and medial prefrontal cortex or OMPFC) are essential components of this hypothesized mechanism, and therefore damage to either structure will disrupt decision-making. [ 7 ] In an effort to produce a simple neuropsychological tool that would assess deficits in emotional processing, decision-making, and social skills of OMPFC- lesioned individuals, Bechara and collaborators created the Iowa gambling task . [ 2 ] [ 8 ] The task measures a form of emotion-based learning. Studies using the gambling task have found deficits in various neurological (such as amygdala and OMPFC lesions) and psychiatric populations (such as schizophrenia , mania , and drug abusers ). The Iowa gambling task is a computerized test in which participants are presented with four decks of cards from which they repeatedly choose. Each deck contains various amounts of rewards of either $50 or $100, and occasional losses that are greater in the decks with higher rewards. The participants do not know where the penalty cards are located, and are told to pick cards that will maximize their winnings. The most profitable strategy turns out to be to choose cards only from the small reward/small penalty decks, because although the reward is smaller, the penalty is proportionally much smaller than in the high reward/high penalty decks. Over the course of a session, most healthy participants come to adopt the profitable low-penalty deck strategy. Participants with brain damage, however, are unable to determine the better deck to choose from, and continue to choose from the high reward/high penalty decks. [ 9 ] Since the Iowa gambling task measures participants' quickness in "developing anticipatory emotional responses to guide advantageous choices", [ 10 ] it is helpful in testing the somatic marker hypothesis. According to the hypothesis, somatic markers give rise to anticipation of the emotional consequences of a decision being made. Consequently, persons who perform well on the task are thought to be aware of the penalty cards and of the negative emotions associated with drawing such cards, and to realize which deck is less likely to yield a penalty. [ 10 ] This experiment has been used to analyze the impairments of people with damage to the vmPFC, which has been known to affect neural signaling of prospective rewards or punishments. Such persons perform less well on the task. [ 1 ] Functional magnetic resonance imaging (fMRI) has been used to analyze the brain during the Iowa gambling task. The brain regions that were activated during the Iowa gambling task were also the ones hypothesized to be triggered by somatic markers during decision-making. 
[ 11 ] Damasio has posited that the ability of humans to perform abstract thinking quickly and efficiently coincides with both the development of the vmPFC and with the use of somatic markers to guide human behavior during evolution. [ 6 ] Patients with damage to the vmPFC are more likely to engage in behaviors that negatively impact personal relationships in the distant future, but they never engage in actions that would lead to immediate harm to themselves or others. [ 1 ] The evolution of the prefrontal cortex was associated with the ability to represent events that may occur in the future. [ 6 ] The somatic marker hypothesis has been applied to trying to understand risky behaviors, such as risky sexual behavior and drug addiction. According to the hypothesis, riskier sexual behaviors are more exhilarating and pleasurable, and therefore they are more likely to stimulate repetitive engagement in such behaviors. [ 12 ] When this idea was tested in individuals who were infected with HIV and were substance dependent , differences were found between persons who scored well in the Iowa gambling test, and those who scored poorly. The high scorers showed a correlation between the amount of distress they reported having over their HIV status, and their acceptance of risk during sexual behavior – the greater the distress, the greater the risk that these people would take. The low scorers, on the other hand, showed no such correlation. These results were interpreted as indicating that persons with intact decision-making abilities are better able to rely on past emotional experiences when weighing risks, than are persons who are deficient in such abilities, and that acceptance of risk serves to ameliorate emotional distress. [ 10 ] Drug abusers are thought to ignore the negative consequences of addiction while seeking drugs. According to the somatic marker hypothesis, such abusers are impaired in their ability to recall and consider past unpleasant experiences when weighing whether to consider drug seeking behaviors. [ 13 ] [ 14 ] Researchers analyzed the neuroendocrine responses of substance-dependent individuals and healthy individuals after being shown pleasant or unpleasant images. In response to unpleasant images, drug users showed decreased levels of several neuroendocrine markers, including norepinephrine , cortisol , and adrenocorticotropic hormone . Addicts showed lesser responses to both pleasant and unpleasant images, suggesting that they may have a diminished emotional response. [ 15 ] Neuroimaging studies utilizing fMRI indicate that drug-related stimuli have the ability to activate brain regions involved in emotional evaluation and reward processing. When shown a film of people smoking cocaine , cocaine users showed greater activation of the anterior cingulate cortex , the right inferior parietal lobe , and the caudate nucleus than did non-users. Conversely, the cocaine users showed lesser activation when viewing a sex film than did non-users. [ 16 ] Some researchers believe that the use of somatic markers (i.e., afferent feedback ) would be a very inefficient method of influencing behavior. Damasio's notion of the as-if experience dependent feedback route, [ 1 ] [ 17 ] whereby bodily responses are re-represented utilizing the somatosensory cortex ( postcentral gyrus ), also proposes an inefficient method of affecting explicit behavior. 
[ 18 ] Edmund Rolls (1999) stated that "it would be very inefficient and noisy to place in the execution route a peripheral response, and transducers to attempt to measure that peripheral response, itself a notoriously difficult procedure" (p. 73). [ 18 ] Reinforcement association located in the orbitofrontal cortex and amygdala, where the incentive value of stimuli is decoded, is sufficient to elicit emotion-based learning and to affect behavior via, for example, the orbitofrontal-striatal pathway . [ 18 ] [ 19 ] This can occur via implicit or explicit processes. [ 18 ] The somatic marker hypothesis represents a model of how feedback from the body may contribute to both advantageous and disadvantageous decision-making in situations of complexity and uncertainty. Much of its supporting evidence comes from the Iowa gambling task. [ 20 ] While the Iowa gambling task has proven to be an ecologically valid measure of decision-making impairment, there are three assumptions that need to hold true. First, the claim that the task assesses implicit learning through its reward/punishment design is inconsistent with data showing that participants have accurate knowledge of the task possibilities [ 21 ] and that mechanisms such as working memory appear to have a strong influence. Second, the claim that this knowledge occurs through preventive marker signals is not supported when set against competing explanations of the psychophysiological profile generated. [ 22 ] Lastly, the claim that the impairment is due to a 'myopia for the future' is undermined by more plausible psychological mechanisms explaining deficits on the task, such as reversal learning, risk-taking, and working-memory deficits. There may also be more variability in control performance than previously thought, thus complicating the interpretation of the findings. Furthermore, although the somatic marker hypothesis has accurately identified many of the brain regions involved in decision-making, emotion, and body-state representation, it has failed to clearly demonstrate how these processes interact at a psychological and evolutionary level. There are many experiments that could be implemented to test the somatic marker hypothesis further. One way would be to develop variants of the Iowa gambling task that control for some of the methodological issues and ambiguities of interpretation it has generated, for example by removing the reversal learning confound, which would make the task more difficult to comprehend consciously. Additionally, causal tests of the somatic marker hypothesis could be pursued more extensively in a greater range of populations with altered peripheral feedback, such as patients with facial paralysis. In conclusion, the somatic marker hypothesis needs to be tested in more experiments. Until a wider range of empirical approaches is employed to test it, the framework remains an intriguing idea in need of better supporting evidence. Despite these issues, the somatic marker hypothesis and the Iowa gambling task reestablish the notion that emotion has the potential to be a benefit as well as a problem during the decision-making process in humans. [ 23 ]
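As a concrete illustration of the payoff structure that makes the Iowa gambling task diagnostic, the following sketch simulates a simplified version of the four decks described earlier. The dollar values are chosen only to reproduce the qualitative contrast in the text (large rewards with larger long-run losses versus small rewards with smaller losses); they are not the exact reward and penalty schedule used by Bechara and colleagues.

import random

# Simplified Iowa-gambling-task decks. Values are illustrative only: the
# $100 decks lose money in the long run, the $50 decks gain.
DECKS = {
    "A (bad)":  {"reward": 100, "penalty": 1250, "penalty_prob": 0.10},
    "B (bad)":  {"reward": 100, "penalty": 1250, "penalty_prob": 0.10},
    "C (good)": {"reward": 50,  "penalty": 250,  "penalty_prob": 0.10},
    "D (good)": {"reward": 50,  "penalty": 250,  "penalty_prob": 0.10},
}

def draw(deck_name, rng=random):
    """Return the net payoff of one card drawn from the named deck."""
    deck = DECKS[deck_name]
    penalty = deck["penalty"] if rng.random() < deck["penalty_prob"] else 0
    return deck["reward"] - penalty

def expected_value(deck_name):
    """Long-run average payoff per draw from the named deck."""
    deck = DECKS[deck_name]
    return deck["reward"] - deck["penalty"] * deck["penalty_prob"]

random.seed(0)
for name in DECKS:
    sampled = sum(draw(name) for _ in range(1000)) / 1000
    print(f"{name}: expected {expected_value(name):+.0f} per draw, "
          f"sampled average {sampled:+.1f}")
# With these assumed values, the bad decks average about -25 per draw and
# the good decks about +25, so a participant who learns to prefer decks C
# and D finishes ahead, as most healthy participants do, whereas patients
# with vmPFC damage tend to keep choosing the high-reward decks A and B.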
https://en.wikipedia.org/wiki/Somatic_marker_hypothesis
A somatic mutation is a change in the DNA sequence of a somatic cell of a multicellular organism with dedicated reproductive cells ; that is, any mutation that occurs in a cell other than a gamete , germ cell , or gametocyte . Unlike germline mutations , which can be passed on to the descendants of an organism, somatic mutations are not usually transmitted to descendants. This distinction is blurred in plants, which lack a dedicated germline , and in those animals that can reproduce asexually through mechanisms such as budding , as in members of the cnidarian genus Hydra . While somatic mutations are not passed down to an organism's offspring, somatic mutations will be present in all descendants of a cell within the same organism. Many cancers are the result of accumulated somatic mutations. The term somatic generally refers to the cells of the body, in contrast to the reproductive ( germline ) cells, which give rise to the egg or sperm . For example, in mammals , somatic cells make up the internal organs, skin, bones, blood, and connective tissue. [ 1 ] In most animals, separation of germ cells from somatic cells ( germline development ) occurs during early stages of development . Once this segregation has occurred in the embryo, any mutation outside of the germline cells can not be passed down to an organism's offspring. However, somatic mutations are passed down to all the progeny of a mutated cell within the same organism. A major section of an organism therefore might carry the same mutation, especially if that mutation occurs at earlier stages of development. [ 2 ] Somatic mutations that occur later in an organism's life can be hard to detect, as they may affect only a single cell—for instance, a post- mitotic neuron; [ 3 ] [ 4 ] improvements in single cell sequencing are therefore an important tool for the study of somatic mutation. [ 5 ] Both the nuclear DNA and mitochondrial DNA of a cell can accumulate mutations; somatic mitochondrial mutations have been implicated in development of some neurodegenerative diseases. [ 6 ] There are many exceptions to the rule that somatic mutations cannot be inherited by offspring. Many organisms simply do not dedicate a separate germline during early development. Plants and basal animals such as sponges and corals instead generate gametes from pluripotent stem cells in adult somatic tissues. [ 7 ] [ 8 ] In flowering plants, for example, germ cells can arise from adult somatic cells in the floral meristem . Other animals without a designated germ line include tunicates and flatworms . [ 9 ] Somatic mutations can also be passed down to offspring in organisms that can reproduce asexually , without production of gametes. For instance, animals in the cnidarian genus Hydra can reproduce asexually through the mechanism of budding (they can also reproduce sexually). In hydra , a new bud develops directly from somatic cells of the parent hydra. [ 10 ] A mutation present in the tissue that gives rise to the daughter organism would be passed down to that offspring. Many plants naturally reproduce through vegetative reproduction —growth of a new plant from a fragment of the parent plant—propagating somatic mutations without the step of seed production. Humans artificially induce vegetative reproduction via grafting and stem cuttings. As with germline mutations, mutations in somatic cells may arise due to endogenous factors, including errors during DNA replication and repair, and exposure to reactive oxygen species produced by normal cellular processes. 
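The point made above, that a mutation arising early in development can be carried by a large fraction of the organism's cells, follows from simple lineage arithmetic. The sketch below is a minimal model under an idealized assumption of synchronous, symmetric cell divisions; it is not a description of real embryonic lineages.

```python
# Minimal model: in an idealized lineage of synchronous, symmetric cell
# divisions, a mutation arising in a single cell just after division k is
# inherited by all of that cell's descendants, i.e. by roughly 1/2**k of
# the final cell population.
def mutant_fraction(mutation_division):
    """Approximate fraction of final cells carrying a mutation that arose
    in one cell immediately after the given division (1 = first cleavage)."""
    if mutation_division < 1:
        raise ValueError("mutation_division must be at least 1")
    return 1 / 2 ** mutation_division

if __name__ == "__main__":
    for k in (1, 2, 5, 10, 20):
        print(f"mutation after division {k:>2}: ~{mutant_fraction(k):.1e} of cells affected")
```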
Mutations can also be induced by contact with mutagens , which can increase the rate of mutation. Most mutagens act by causing DNA damage—alterations in DNA structure such as pyrimidine dimers , or breakage of one or both DNA strands. DNA repair processes can remove DNA damage that would otherwise cause mutation upon DNA replication. Mutation results from damage when mistakes in the mechanism of DNA repair cause changes in the nucleotide sequence, or if replication occurs before repair is complete. Mutagens can be physical, such as radiation from UV rays and X-rays , or chemical—molecules that interact directly with DNA—such as metabolites of benzo[ a ]pyrene , a potent carcinogen found in tobacco smoke . [ 11 ] Mutagens associated with cancers are often studied to learn about cancer and its prevention. Research suggests that the frequency of mutations is generally higher in somatic cells than in cells of the germline; [ 12 ] furthermore, there are differences in the types of mutation seen in the germline and in the soma. [ 13 ] There is variation in mutation frequency between different somatic tissues within the same organism [ 13 ] and between species. [ 2 ] Milholland et al. (2017) examined the mutation rate of dermal fibroblasts (a type of somatic cell) and germline cells in humans and in mice. They measured the rate of single nucleotide variants (SNVs), most of which are a consequence of replication error. Both in terms of mutational load (total mutations present in a cell) and mutation rate per cell division (new mutations with each mitosis ), somatic mutation rates were more than ten times those of the germline in both humans and mice. In humans, mutation load in fibroblasts was over twenty times greater than in the germline (2.8 × 10⁻⁷ compared with 1.2 × 10⁻⁸ mutations per base pair). Adjusted for differences in the estimated number of cell divisions, the fibroblast mutation rate was about 80 times greater than that of the germline (2.66 × 10⁻⁹ vs. 3.3 × 10⁻¹¹ mutations per base pair per mitosis, respectively). [ 2 ] The disparity in mutation rate between the germline and somatic tissues likely reflects the greater importance of genetic integrity in the germline than in the soma. [ 12 ] Variation in mutation frequency may be due to differences in rates of DNA damage or to differences in the DNA repair process as a result of elevated levels of DNA repair enzymes. [ 13 ] In April 2022 it was reported that most mammals accumulate roughly the same total number of somatic mutations by the end of their lifespan; species with shorter lifespans therefore have higher somatic mutation rates, and longer-lived species have lower rates. [ 14 ] [ 15 ] Post-mitotic neurons accumulate somatic mutations at a constant rate throughout life, and this rate is roughly similar to the mutation rates of mitotically active tissues. [ 16 ] The mutations in neurons may arise as a consequence of endogenous DNA damage and the somewhat inaccurate repair of such damage that occurs all the time in cells. [ 16 ] As a part of the adaptive immune response , antibody-producing B cells experience a mutation rate many times higher than the normal rate of mutation. The mutation rate in antigen-binding coding sequences of the immunoglobulin genes is up to 1,000,000 times higher than in cell lines outside the lymphoid system. A major step in affinity maturation , somatic hypermutation helps B cells produce antibodies with greater antigen affinity.
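The comparisons quoted from Milholland et al. (2017) can be re-derived directly from the per-base-pair figures given above; the short calculation below simply reproduces the "over twenty times" and "about 80 times" ratios.

```python
# Re-deriving the somatic-vs-germline comparisons quoted from
# Milholland et al. (2017), using the per-base-pair figures in the text.
fibroblast_load = 2.8e-7   # mutation load, mutations per base pair (human fibroblasts)
germline_load = 1.2e-8     # mutation load, mutations per base pair (human germline)
fibroblast_rate = 2.66e-9  # mutations per base pair per mitosis (fibroblasts)
germline_rate = 3.3e-11    # mutations per base pair per mitosis (germline)

print(f"mutation load ratio:     {fibroblast_load / germline_load:.1f}x")  # ~23x ('over twenty times')
print(f"per-division rate ratio: {fibroblast_rate / germline_rate:.0f}x")  # ~81x ('about 80 times')
```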
[ 17 ] Somatic mutations accumulate within an organism's cells as it ages and with each round of cell division; the role of somatic mutations in the development of cancer is well established, and the accumulation of somatic mutations is implicated in the biology of aging. [ 4 ] Mutations in neuronal stem cells (especially during neurogenesis ) [ 18 ] and in post-mitotic neurons lead to genomic heterogeneity of neurons—referred to as "somatic brain mosaicism". [ 3 ] The accumulation of age-related mutations in neurons may be linked to neurodegenerative diseases , including Alzheimer's disease , but the association is unproven. The majority of central-nervous-system cells in the adult are post-mitotic, and adult mutations might affect only a single neuron. Unlike in cancer, where mutations result in clonal proliferation, detrimental somatic mutations might contribute to neurodegenerative disease by causing cell death. [ 19 ] Accurate assessment of the somatic mutation burden in neurons therefore remains difficult. If a mutation occurs in a cell of an organism, that mutation will be present in all the descendants of that cell within the same organism. The accumulation of certain mutations over generations of somatic cells is part of the process of malignant transformation , from normal cell to cancer cell. Cells with heterozygous loss-of-function mutations (one good copy of a gene and one mutated copy) may function normally with the unmutated copy until the good copy has itself been spontaneously mutated. This kind of mutation happens often in living organisms, but its rate is difficult to measure. Measuring this rate is important in predicting the rate at which people may develop cancer.
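A back-of-the-envelope calculation shows why the rate of these somatic "second hits" matters for cancer risk: when many cells each have a small chance per division of losing their remaining functional copy, the probability that at least one cell does so grows quickly with the number of divisions. All numeric inputs in the sketch below (per-division loss probability, number of at-risk cells, division counts) are illustrative assumptions, not measured values.

```python
# Back-of-the-envelope 'two-hit' calculation: the probability that at least
# one cell in a dividing population loses its remaining functional gene copy.
# All numeric inputs are illustrative assumptions, not measured values.
def p_at_least_one_second_hit(p_loss_per_division, n_cells, divisions):
    """Probability that >= 1 cell acquires the second hit, assuming
    independent loss events with the same small per-division probability."""
    p_lineage_untouched = (1 - p_loss_per_division) ** divisions
    return 1 - p_lineage_untouched ** n_cells

if __name__ == "__main__":
    p = 1e-9       # assumed chance of losing the good copy per cell division
    cells = 1e6    # assumed number of at-risk cells already carrying one hit
    for divisions in (10, 100, 1000):
        risk = p_at_least_one_second_hit(p, cells, divisions)
        print(f"{divisions:>4} divisions: P(at least one second hit) ~ {risk:.3f}")
```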
https://en.wikipedia.org/wiki/Somatic_mutation
Somatic recombination , as opposed to the genetic recombination that occurs in meiosis , is an alteration of the DNA of a somatic cell that is inherited by its daughter cells. The term is usually reserved for large-scale alterations of DNA such as chromosomal translocations and deletions and not applied to point mutations . Somatic recombination occurs physiologically in the assembly of the B cell receptor and T-cell receptor genes ( V(D)J recombination ), [ 1 ] as well as in the class switching of immunoglobulins . [ 2 ] Somatic recombination is also important in the process of carcinogenesis . [ 3 ] In neurons of the human brain , somatic recombination occurs in the gene that encodes the amyloid precursor protein APP. [ 4 ] Neurons from individuals with sporadic Alzheimer's disease show greater APP gene diversity due to somatic recombination than neurons from healthy individuals. [ 4 ] Intrachromosomal homologous recombination in Arabidopsis thaliana plants was found to occur in all organs examined from the seed stage to the flowering stage of somatic plant development. [ 5 ] Recombination frequencies were typically in the range of 10⁻⁶ to 10⁻⁷ events per genome . [ 5 ] A. thaliana mutants selected for hypersensitivity to X-irradiation also proved to be simultaneously hypersensitive to the DNA damaging agents mitomycin C and/or methyl methanesulfonate . [ 6 ] The mutants were also deficient in somatic homologous recombination. [ 6 ] These findings suggest that repair of some types of DNA damage requires a recombinational process that was defective in the mutants studied. In nature, plants are continuously exposed to UV-B (280–320 nm) radiation, a component of sunlight that damages the DNA of somatic cells. [ 7 ] Cyclobutane pyrimidine dimers (CPD) are a type of damage induced by UV-B. In A. thaliana , homologous recombination appears to be directly involved in repairing CPD damage. [ 7 ]
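To put the quoted recombination frequencies in perspective, they can be converted into expected event counts across a population of somatic cells; the cell number used below is an illustrative assumption, not a figure from the cited studies.

```python
# Converting the quoted somatic recombination frequencies (1e-7 to 1e-6
# events per genome) into expected event counts across a population of
# somatic cells. The cell number is an illustrative assumption.
def expected_events(freq_per_genome, n_cells):
    return freq_per_genome * n_cells

if __name__ == "__main__":
    n_cells = 1e8  # assumed number of somatic cells in a small plant
    for freq in (1e-7, 1e-6):
        print(f"frequency {freq:.0e}: ~{expected_events(freq, n_cells):.0f} events expected")
```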
https://en.wikipedia.org/wiki/Somatic_recombination
Somatic theory is a theory of human social behavior based on the somatic marker hypothesis of António Damásio , which proposes a mechanism by which emotional processes can guide (or bias) behavior, particularly decision-making, as well as on the attachment theory of John Bowlby and the self-psychology of Heinz Kohut (especially as consolidated by Allan Schore ). It draws on various philosophical models: Friedrich Nietzsche 's On the Genealogy of Morals , Martin Heidegger on das Man , Maurice Merleau-Ponty on the lived body as a center of experience, Ludwig Wittgenstein on social practices, and Michel Foucault on discipline, as well as theories of performativity emerging out of the speech act theory of J. L. Austin as developed by Judith Butler and Shoshana Felman . [ 1 ] Some somatic theorists have also applied somatic theory to performance in the actor-training traditions developed by Konstantin Stanislavski and Bertolt Brecht . Barbara Sellers-Young [ 2 ] applies Damasio's somatic-marker hypothesis to critical thinking as an embodied performance and provides a review of the theoretical literature in performance studies that supports something like Damasio's approach. Edward Slingerland [ 5 ] applies Damasio's somatic-marker hypothesis to the cognitive linguistics of Gilles Fauconnier and Mark Turner , [ 6 ] as well as of George Lakoff and Mark Johnson . [ 7 ] In particular, Slingerland combines Fauconnier and Turner's theory of conceptual blending with Lakoff and Johnson's embodied-mind theory of metaphor, with the aim of bringing somatic theory into cognitive linguistics. Douglas Robinson first began developing a somatic theory of language for a keynote presentation at the 9th American Imagery Conference in Los Angeles, in October 1985. It was based on Akhter Ahsen 's theory of somatic response to images as the basis for therapeutic transformations. In contradistinction to Ahsen's model, which rejected Freud's " talking cure " on the grounds that words do not awaken somatic responses, Robinson argued that there is a very powerful somatics of language. He later incorporated this notion into The Translator's Turn (1991), drawing on the (passing) somatic theories of William James , Ludwig Wittgenstein , and Kenneth Burke in order to argue that somatic response may be "idiosomatic" (somatically idiosyncratic), but is typically "ideosomatic" (somatically ideological, or shaped and guided by society). Furthermore, the ideosomatics of language explains how language remains stable enough for communication to be possible. This work preceded the Damasio group's first scientific publication on the somatic-marker hypothesis in 1991, [ 9 ] and Robinson did not begin to incorporate Damasio's somatic-marker hypothesis into his somatic theory until later in the 1990s. In Translation and Taboo (1996), Robinson drew on the proto-somatic theories of Sigmund Freud , Jacques Lacan , and Gregory Bateson to explore the ways in which the ideosomatics of taboo structure (and partly sanction and conceal) the translation of sacred texts . His first book to draw on Damasio's somatic-marker hypothesis is Performative Linguistics (2003); there he draws on J. L. Austin 's theory of speech acts , Jacques Derrida 's theory of iterability , and Mikhail Bakhtin 's theory of dialogism to argue that performativity as an activity of the speaking body is grounded in somatic theory.
He also draws on Daniel Simeoni's application of Pierre Bourdieu 's theory of habitus in order to argue that his somatics of translation as developed in The Translator's Turn actually explains translation norms more fully than Gideon Toury 's account in Descriptive Translation Studies and beyond (1995). [ 10 ] In 2005, Robinson began writing a series of books exploring somatic theory in different communicative contexts: modernist / formalist theories of estrangement (Robinson 2008), translation as ideological pressure (Robinson 2011), first-year writing (Robinson 2012), and the refugee experience, (de)colonization , and the intergenerational transmission of trauma (Robinson 2013). [ 11 ] In Robinson's articulation, the somatic theory has four main planks: In addition, he has tied additional concepts to somatic theory along the way: the proprioception of the body politic as a homeostatic balancing between too much familiarity and too much strangeness (Robinson 2008); tensions between loconormativity and xenonormativity , the exosomatization of places, objects, and skin color, and paleosomaticity (Robinson 2013); ecosis and icosis (unpublished work). Stephanie Fetta’s approach to somatic theory weaves together an extensive array of disciplinary discourses, ranging from cognitive science and neuroscience to sociology and Sophiology . As a literary and cultural critic, Fetta draws attention to and investigates the role of the soma in her study of US Latin@/x creative texts. [ 12 ] Her scholarly work broadens the scope of somatic theory and literary scholarship by drawing support from the natural and social sciences to position the soma as a “ psychobiological agent” and social actor, and thus an overlooked (albeit indispensable) lens in the study of social power (2018, 37). Building on both biblical and contemporary uses of the term, Fetta reconceptualizes the soma as ‘the emotional, intelligent and communicative body’ and explains that it refers to the gestures of the physical body in internal response to external social pressures. Hence, she is one of the first somatic theorists to employ the term soma along these lines—despite the current spate of studies in neurology , cognitive literary studies, behavioral science , body studies, affect theory , theories of mind (ToM) and philosophy of mind (PoM), which piece together the connections among cognitive processes, bodily feeling reactions, and evaluative perceptions. In 2018, she published Shaming into Brown: Somatic Transactions of Race in Latina/o Literature [ 13 ] —a detailed and analytic transdisciplinary study that renders the soma as “a pervasive yet unexpected site of subjectivity.” She employs this conception of soma as a primary tool to investigate intersectional racialization and the transactions of race in her case studies of Latin@/x literature (xiii). This book develops somatic analysis as a line of investigation, which reviewers maintain has applications in fields such as the humanities, critical race theory, neurology, behavioral studies, and so on. Somatic analysis has inspired, and been cited in, a growing number of academic, personal, [ 14 ] and artistic works. [ 15 ] Fetta’s key applications of somatic analysis are as follows:
https://en.wikipedia.org/wiki/Somatic_theory