Surface Science Reports is a peer-reviewed scientific journal published by North-Holland that covers the physics and chemistry of surfaces. It was established in 1981. It is the review journal corresponding to the journals Surface Science and Surface Science Letters. According to the Journal Citation Reports, Surface Science Reports has a 2020 impact factor of 12.267. [ 1 ]
https://en.wikipedia.org/wiki/Surface_Science_Reports
A surface acoustic wave (SAW) is an acoustic wave traveling along the surface of a material exhibiting elasticity, with an amplitude that typically decays exponentially with depth into the material, so that the wave is confined to a depth of about one wavelength. [ 2 ] [ 3 ] SAWs were first explained in 1885 by Lord Rayleigh, who described the surface acoustic mode of propagation and predicted its properties in his classic paper. [ 4 ] Named after their discoverer, Rayleigh waves have a longitudinal and a vertical shear component that can couple with media, such as additional layers, in contact with the surface. This coupling strongly affects the amplitude and velocity of the wave, allowing SAW sensors to directly sense mass and mechanical properties. The term 'Rayleigh waves' is often used synonymously with 'SAWs', although strictly speaking there are multiple types of surface acoustic waves, such as Love waves, which are polarised in the plane of the surface rather than having longitudinal and vertical components. SAWs such as Love and Rayleigh waves tend to propagate much farther than bulk waves, as they spread in only two dimensions rather than three. Furthermore, they generally have a lower velocity than their bulk counterparts. Surface acoustic wave devices serve a wide range of electronic applications, including delay lines, filters, correlators and DC-to-DC converters, and are potentially useful in radar and communication systems. This kind of wave is commonly exploited in electronic circuits by components known as SAW devices. SAW devices are used as filters, oscillators and transformers, devices that are based on the transduction of acoustic waves. The transduction from electric energy to mechanical energy (in the form of SAWs) is accomplished by the use of piezoelectric materials. Electronic devices employing SAWs normally use one or more interdigital transducers (IDTs) to convert acoustic waves to electrical signals and vice versa by exploiting the piezoelectric effect of certain materials, like quartz, lithium niobate, lithium tantalate, lanthanum gallium silicate, etc. [ 5 ] These devices are fabricated by substrate cleaning/treatment steps such as polishing, metallisation, photolithography, and passivation/protection (dielectric) layer manufacturing, which are typical process steps used in the manufacture of semiconductors such as silicon integrated circuits. All parts of the device (the substrate, its surface, the metallisation material and its thickness, the electrode edges formed by photolithography, and layers such as the passivation coating over the metallisation) affect the performance of a SAW device, because the propagation of Rayleigh waves is highly dependent on the substrate surface, its quality, and all layers in contact with the substrate. For example, in SAW filters the sampling frequency depends on the width of the IDT fingers, the power handling capability is related to the thickness and materials of the IDT fingers, and the temperature stability depends not only on the temperature behavior of the substrate but also on the metals selected for the IDT electrodes and the possible dielectric layers coating the substrate and the electrodes.
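The dependence of the operating frequency on the IDT finger width mentioned above can be illustrated with a short sketch. All numbers below are illustrative assumptions (approximate textbook SAW velocities and the common rule that, for a uniform IDT with a metallization ratio of 0.5, the acoustic wavelength is about four times the finger width); they are not taken from this article.

```python
# Illustrative sketch: how IDT finger width sets a SAW device's centre frequency.
# Assumptions (not from the article): lambda ~ 4 * finger width for a simple
# uniform IDT, and approximate free-surface SAW velocities for two substrates.

SAW_VELOCITY_M_PER_S = {
    "ST-quartz": 3158.0,
    "128deg YX LiNbO3": 3980.0,
}

def idt_centre_frequency(finger_width_m: float, substrate: str) -> float:
    """Estimate the SAW centre frequency implied by the IDT finger width."""
    wavelength = 4.0 * finger_width_m                # lambda ~ 4 * finger width
    return SAW_VELOCITY_M_PER_S[substrate] / wavelength   # f = v / lambda

# Example: 1 micrometre fingers on lithium niobate give a device near 1 GHz.
print(f"{idt_centre_frequency(1e-6, '128deg YX LiNbO3') / 1e9:.2f} GHz")
```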
SAW filters are now used in mobile telephones, and provide technical advantages in performance, cost, and size over other filter technologies such as quartz crystals (based on bulk waves), LC filters, and waveguide filters, specifically at frequencies below 1.5-2.5 GHz, depending on the RF power to be filtered. The complementary technology for frequencies above 1.5-2.5 GHz is based on thin-film bulk acoustic resonators (TFBAR, or FBAR). Much research has been done in the last 20 years in the area of surface acoustic wave sensors. [ 6 ] Sensor applications include all areas of sensing (such as chemical, optical, thermal, pressure, acceleration, torque and biological). SAW sensors have seen relatively modest commercial success to date, but are commonly commercially available for some applications such as touchscreen displays. They have been successfully applied to torque sensing in motorsport powertrains [ 7 ] and high-performance aerospace applications, [ 8 ] as well as temperature sensing in harsh environments such as high-voltage electrical power transmission, and the combined sensing of torque and temperature on the rotor of electric motors. [ 9 ] SAW resonators are used in many of the same applications in which quartz crystals are used, because they can operate at higher frequency. [ 10 ] They are often used in radio transmitters where tunability is not required, in applications such as garage door opener remote controls, short-range radio frequency links for computer peripherals, and other devices where channelization is not required. Where a radio link might use several channels, quartz crystal oscillators are more commonly used to drive a phase-locked loop. Since the resonant frequency of a SAW device is set by the mechanical properties of the crystal, it does not drift as much as a simple LC oscillator, where conditions such as capacitor performance and battery voltage vary substantially with temperature and age. SAW filters are also often used in radio receivers, as they can have precisely determined and narrow passbands. This is helpful in applications where a single antenna must be shared between a transmitter and a receiver operating at closely spaced frequencies. SAW filters are also frequently used in television receivers for extracting subcarriers from the signal; until the analog switchoff, the extraction of digital audio subcarriers from the intermediate frequency strip of a television receiver or video recorder was one of the main markets for SAW filters. Early pioneer Jeffery Collins incorporated surface acoustic wave devices in a Skynet receiver he developed in the 1970s; it synchronised signals faster than existing technology. [ 11 ] SAW filters are also often used in digital receivers and are well suited to superheterodyne applications, because the intermediate frequency signal is always at a fixed frequency after the local oscillator has been mixed with the received signal, so a fixed-frequency, high-Q filter provides excellent removal of unwanted or interfering signals. In these applications, SAW filters are almost always used with a phase-locked-loop synthesized local oscillator or a varicap-driven oscillator. In seismology, surface acoustic waves can be the most destructive type of seismic wave produced by earthquakes. [ 12 ] They propagate through complex media, such as the ocean floor and rock, and are therefore monitored to help protect people and the environment.
SAWs play a key role in the field of quantum acoustics (QA) where, in contrast to quantum optics (QO), which studies the interaction between matter and light, the interaction between quantum systems (phonons, (quasi-)particles and artificial qubits) and acoustic waves is analysed. The propagation speed of the waves used in QA is five orders of magnitude slower than that of QO. As a result, QA offers a different perspective on the quantum regime, in terms of wavelengths that QO has not covered. [ 13 ] One example of these additions is the quantum optical investigation of qubits and quantum dots fabricated in such a way as to emulate essential aspects of natural atoms, e.g. energy-level structures and coupling to an electromagnetic field. [ 14 ] [ 15 ] [ 16 ] [ 17 ] [ 18 ] These artificial atoms are arranged into circuits dubbed 'giant atoms', owing to their size of 10⁻⁴-10⁻³ m. [ 19 ] Quantum optical experiments generally made use of microwave fields for matter-light interaction, but because of the mismatch between the size of the giant atoms and the wavelength of microwave fields, which ranges between 10⁻² and 10⁻¹ m, SAWs were used instead for their more suitable wavelength (10⁻⁶ m). [ 20 ] Within the fields of magnonics and spintronics, a resonant coupling between spin waves and surface acoustic waves with equal wave vector and frequency allows for the transfer of energy from one form to the other, in either direction. [ 13 ] This can, for example, be useful in the construction of magnetic field sensors that are sensitive to both the intensity and the direction of external magnetic fields. These sensors, constructed using a structure of magnetostrictive and piezoelectric layers, have the benefit of operating without batteries and wires, as well as having a broad range of operating conditions, such as high temperatures or rotating systems. [ 21 ] Even at the smallest scales of current semiconductor technology, each operation is carried out by huge streams of electrons. [ 22 ] Reducing the number of electrons involved in these processes, with the ultimate goal of achieving single-electron control, is a serious challenge. This is due to the electrons being highly interactive with each other and their surroundings, making it difficult to separate just one from the rest. [ 23 ] The use of SAWs can help with achieving this goal. When SAWs are generated on a piezoelectric surface, the strain wave is accompanied by an electric potential. The potential minima can then trap single electrons, allowing them to be individually transported. Although this technique was first thought of as a way to accurately define a standard unit of current, [ 24 ] it turned out to be more useful in the field of quantum information. [ 25 ] Usually, qubits are stationary, making the transfer of information between them difficult. The single electrons carried by the SAWs can be used as so-called flying qubits, able to transport information from one place to another. To realise this, a single-electron source is needed, as well as a receiver between which the electron can be transported. Quantum dots (QDs) are typically used for these stationary electron confinements. The moving potential minimum is sometimes called a SAW QD. The transfer process typically proceeds as follows. First, SAWs are generated with an interdigital transducer whose electrode spacing is chosen to give the desired wavelength.
[ 22 ] Then from the stationary QD the electron quantum tunnels to the potential minimum, or SAW QD. The SAWs transfer some kinetic energy to the electron, driving it forward. It is then carried through a one dimensional channel on a surface of piezoelectric semiconductor material like GaAs . [ 23 ] [ 24 ] Finally, the electron tunnels out of the SAW QD and into the receiver QD, after which the transfer is complete. This process can also be repeated in both directions. [ 26 ] As acoustic vibrations can interact with the moving charges in a piezoelectric semiconductor through the strain-induced piezoelectric field in bulk materials, this acoustoelectric (AE) coupling is also important in 2D materials, such as graphene . In these 2D materials the two-dimensional electron gas has band gap energies generally much higher than the energy of the SAW phonons traveling through the material. Therefore the SAW phonons are typically absorbed via intra-band electronic transitions . In graphene these transitions are the only way, as the linear dispersion relation of its electrons prevents momentum/energy conservation when it would absorb a SAW for an inter-band transition. [ 27 ] Often the interaction between moving charges and SAWs results in the diminishing of the SAW intensity as it moves through the 2D electron gas, as well as re-normalizing the SAW velocity. The charges take over kinetic energy from the SAW and lose this energy again through carrier scattering . Aside from SAW intensity attenuation, there are specific situations in which the wave can be amplified as well. By applying a voltage over the material, the charge carriers may obtain a higher drift speed than the SAW. Then they pass on a part of their kinetic energy to the SAW, causing it to amplify its intensity and velocity. The converse works as well. If the SAW is moving faster than the carriers, it may transfer kinetic energy to them, and thereby losing some velocity and intensity. [ 28 ] In recent years, attention has been drawn to using SAWs to drive microfluidic actuation and a variety of other processes. Owing to the mismatch of sound velocities in the SAW substrate and fluid, SAWs can be efficiently transferred into the fluid, creating significant inertial forces and fluid velocities. This mechanism can be exploited to drive fluid actions such as pumping , mixing , and jetting . [8] To drive these processes, there is a change of mode of the wave at the liquid-substrate interface. In the substrate, the SAW wave is a transverse wave and upon entering the droplet the wave becomes a longitudinal wave . [9] It is this longitudinal wave that creates the flow of fluid within the microfluidic droplet, allowing mixing to take place. This technique can be used as an alternative to microchannels and microvalves for manipulation of substrates, allowing for an open system. [ 29 ] This mechanism has also been used in droplet-based microfluidics for droplet manipulation. Notably, using SAW as an actuation mechanism, droplets were pushed towards two [ 30 ] [ 31 ] or more [ 32 ] outlets for sorting. Moreover, SAWs were used for droplet size modulation, [ 33 ] [ 34 ] splitting, [ 35 ] [ 30 ] [ 36 ] trapping, [ 37 ] tweezing, [ 38 ] and nanofluidic pipetting. [ 36 ] Droplet impact on flat and inclined surfaces has been manipulated and controlled using SAW. [ 39 ] [ 40 ] PDMS ( polydimethylsiloxane ) is a material that can be used to create microchannels and microfluidic chips. 
It has many uses, including in experiments where living cells are to be tested or processed. If living organisms need to be kept alive, it is important to monitor and control their environment, such as temperature and pH; if these factors are not regulated, the cells may die or unwanted reactions may occur. [ 41 ] PDMS has been found to absorb acoustic energy, causing it to heat up quickly (at rates exceeding 2000 kelvin per second). [ 42 ] SAW heating of PDMS devices, and of the liquids inside their microchannels, can now be performed in a controlled manner, with the temperature regulated to within 0.1 °C. [ 42 ] [ 43 ] The development of flexible surface acoustic wave (SAW) devices has been a significant driver in the advancement of wearable technology and microfluidic systems. These devices are typically fabricated on polymer substrates, such as polyethylene naphthalate (PEN) and polyimide, and utilize sputter deposition of materials like AlN and ZnO. [ 44 ] This combination of flexibility and advanced materials has expanded their application potential across various fields. Surface acoustic waves can also be used for flow measurement. The method relies on the propagation of a wave front, similar to seismic activity: an electric pulse applied to an interdigital transducer generates SAWs that spread out along the surface of the measuring tube, much like the waves of an earthquake. Each interdigital transducer can act as both sender and receiver; when one is in sender mode, the two most distant transducers act as receivers. The SAWs travel along the surface of the measuring tube, but a portion couples out into the liquid. The angle at which the wave couples out depends on the propagation velocity of sound in the liquid, which is specific to that liquid. On the other side of the measuring tube, a portion of the wave couples back into the tube wall and continues along its surface to the next interdigital transducer. Another portion couples out again and travels back to the other side of the measuring tube, where the effect repeats itself and the transducer on that side detects the wave. Excitation of any one transducer therefore leads to a sequence of input signals at two other, more distant transducers. Two of the transducers send their signals in the direction of flow, and two in the opposite direction. [ 45 ]
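The article does not give the signal-processing details of this flow meter, but a generic transit-time estimate conveys the idea: a wave travelling with the flow arrives earlier than one travelling against it. The sketch below is an assumption-laden simplification (it treats the device like a basic ultrasonic transit-time meter with an effective acoustic path of length L aligned with the flow, ignoring the coupling angles described above), not the actual algorithm of the device.

```python
# Minimal, generic transit-time sketch (assumptions: effective path length L
# along the flow, coupling-angle geometry not modelled). Values are made up.

def flow_velocity(t_down_s: float, t_up_s: float, path_length_m: float) -> float:
    """Estimate mean flow velocity from downstream/upstream transit times.

    t_down = L / (c + v) and t_up = L / (c - v), so
    v = (L / 2) * (1 / t_down - 1 / t_up).
    """
    return 0.5 * path_length_m * (1.0 / t_down_s - 1.0 / t_up_s)

# Example: 100 mm effective path in water-like liquid, ~68 microsecond transits.
print(f"{flow_velocity(67.5e-6, 68.0e-6, 0.100):.2f} m/s")
```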
https://en.wikipedia.org/wiki/Surface_acoustic_wave
Surface acoustic wave sensors are a class of microelectromechanical systems (MEMS) which rely on the modulation of surface acoustic waves to sense a physical phenomenon. The sensor transduces an input electrical signal into a mechanical wave which, unlike an electrical signal, can be easily influenced by physical phenomena. The device then transduces this wave back into an electrical signal. Changes in amplitude, phase, frequency, or time delay between the input and output electrical signals can be used to measure the presence of the desired phenomenon. [ 1 ] [ 2 ] [ 3 ] The basic surface acoustic wave device consists of a piezoelectric substrate with an input interdigitated transducer (IDT) on one side of the surface of the substrate, and an output IDT on the other side. The space between the IDTs across which the surface acoustic wave propagates is known as the delay line; the signal produced by the input IDT, being a physical wave, moves much more slowly than its associated electromagnetic form, causing a measurable delay. Surface acoustic wave technology takes advantage of the piezoelectric effect in its operation. Most modern surface acoustic wave sensors use an input interdigitated transducer (IDT) to convert an electrical signal into an acoustic wave. The sinusoidal electrical input signal creates alternating polarity between the fingers of the interdigitated transducer: between two adjacent sets of fingers, the polarity is switched (e.g. + - +), so the direction of the electric field between two fingers alternates between adjacent sets. This creates alternating regions of tensile and compressive strain between the fingers of the electrode by the piezoelectric effect, producing a mechanical wave at the surface known as a surface acoustic wave. As fingers on the same side of the device will be at the same level of compression or tension, the space between them, known as the pitch, is the wavelength of the mechanical wave. We can express the synchronous frequency $f_{0}$ of the device with phase velocity $v_{p}$ and pitch $p$ as $f_{0} = v_{p}/p$. The synchronous frequency is the natural frequency at which mechanical waves should propagate. Ideally, the input electric signal should be at the synchronous frequency to minimize insertion loss. As the mechanical wave will propagate in both directions from the input IDT, half of the energy of the waveform will propagate across the delay line in the direction of the output IDT. In some devices, a mechanical absorber or reflector is added between the IDTs and the edges of the substrate to prevent interference patterns or to reduce insertion losses, respectively. The acoustic wave travels across the surface of the device substrate to the other interdigitated transducer, which converts the wave back into an electric signal by the piezoelectric effect. Any changes that were made to the mechanical wave will be reflected in the output electric signal. As the characteristics of the surface acoustic wave can be modified by changes in the surface properties of the device substrate, sensors can be designed to quantify any phenomenon which alters these properties. Typically, this is accomplished by the addition of mass to the surface or by changing the length of the substrate and the spacing between the fingers. The structure of the basic surface acoustic wave sensor allows the phenomena of pressure, strain, torque, temperature, and mass to be sensed.
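The synchronous-frequency relation f0 = vp / p given above, together with the delay introduced by the delay line, can be illustrated numerically. The values below (a quartz-like phase velocity, a 10 µm pitch, a 2 mm delay line) are assumed for illustration only and are not taken from the article.

```python
# Sketch of the synchronous frequency and delay-line delay, with assumed values.

v_p   = 3159.0      # assumed SAW phase velocity (roughly ST-quartz), m/s
pitch = 10e-6       # assumed IDT pitch (finger spacing), m
gap   = 2e-3        # assumed delay-line length between the IDTs, m

f0    = v_p / pitch          # synchronous frequency, f0 = v_p / p
delay = gap / v_p            # acoustic transit time across the delay line

print(f"f0 ~ {f0 / 1e6:.1f} MHz")          # ~ 315.9 MHz
print(f"delay ~ {delay * 1e6:.2f} us")     # ~ 0.63 us, versus ~7 ps for an
                                            # electromagnetic signal over 2 mm
```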
The mechanisms for this are discussed below. The phenomena of pressure, strain, torque, temperature, and mass can be sensed by the basic device, consisting of two IDTs separated by some distance on the surface of a piezoelectric substrate. These phenomena can all cause a change in length along the surface of the device. A change in length will affect both the spacing between the interdigitated electrodes (altering the pitch) and the spacing between the IDTs (altering the delay). This can be sensed as a phase shift, frequency shift, or time delay in the output electrical signal. The fundamental measurement of a surface acoustic wave sensor is typically strain. When a diaphragm is placed between an environment at a variable pressure and a reference cavity at a fixed pressure, the diaphragm will bend in response to a pressure differential. As the diaphragm bends, the distance along the surface in compression will increase. A surface acoustic wave pressure sensor either replaces the diaphragm with a piezoelectric substrate patterned with interdigitated electrodes or connects a larger diaphragm to the substrate in order to create a measurable strain in the surface acoustic wave device. When measuring torque, the principal surface strain of the shaft in the rotational direction is measured, as its application to the sensor causes a deformation of the piezoelectric substrate. A surface acoustic wave temperature sensor can be fashioned from a piezoelectric substrate with a relatively high coefficient of thermal expansion in the direction of the length of the device. Temperature sensing and strain sensing can be combined into a single device in order to deliver temperature compensation of the sensing system. Because surface acoustic wave sensors can operate in electromagnetically noisy environments and in close proximity to magnets, they can be embedded into electric motors to improve control by providing active torque and temperature measurement of the machine rotor shaft. They have also been applied to robotic control systems to provide dynamic torque feedback in robot movement, reducing jitter. The accumulation of mass on the surface of an acoustic wave sensor will affect the surface acoustic wave as it travels across the delay line. The velocity v of a wave traveling through a solid is proportional to the square root of the ratio of the Young's modulus E to the density ρ of the material. Therefore, the wave velocity will decrease with added mass. This change can be measured as a change in time delay or phase shift between the input and output signals. Signal attenuation can be measured as well, as coupling with the additional surface mass will reduce the wave energy. In the case of mass sensing, as the change in the signal will always be due to an increase in mass from a reference signal of zero additional mass, signal attenuation can be used effectively. The inherent functionality of a surface acoustic wave sensor can be extended by depositing across the delay line a thin film of material which is sensitive to the physical phenomenon of interest. If a physical phenomenon causes a change in length or mass of the deposited thin film, the surface acoustic wave will be affected by the mechanisms mentioned above.
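The mass-loading mechanism described above can be illustrated with a crude numerical sketch. The stiffness, density, delay-line length and operating frequency below are assumed for illustration, and treating adsorbed mass as a small increase in effective density is itself a simplifying assumption, not the article's model.

```python
# Crude mass-loading illustration: v = sqrt(E / rho), so added mass (modelled
# here, as an assumption, as a 0.1 % rise in effective density) lowers the wave
# speed and appears as an extra delay / phase shift across the delay line.
import math

E          = 7.2e10     # assumed effective stiffness, Pa (illustrative)
rho        = 2650.0     # assumed effective density, kg/m^3 (illustrative)
delay_line = 2e-3       # assumed delay-line length, m
f_signal   = 100e6      # assumed operating frequency, Hz

def wave_speed(stiffness: float, density: float) -> float:
    return math.sqrt(stiffness / density)

v0 = wave_speed(E, rho)
v1 = wave_speed(E, rho * 1.001)        # 0.1 % added effective mass

extra_delay = delay_line / v1 - delay_line / v0   # seconds
phase_shift = 360.0 * f_signal * extra_delay      # degrees

print(f"speed drop: {v0 - v1:.2f} m/s, phase shift: {phase_shift:.1f} deg")
```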
Some extended functionality examples are listed below: Chemical vapor sensors use the application of a thin film polymer across the delay line which selectively absorbs the gas or gases of interest. An array of such sensors with different polymeric coatings can be used to sense a large range of gases on a single sensor with resolution down to parts per trillion, allowing for the creation of a sensitive "lab on a chip." A biologically active layer can be placed between the interdigitated electrodes which contains immobilized antibodies. If the corresponding antigen is present in a sample, the antigen will bind to the antibodies, causing a mass-loading on the device. These sensors can be used to detect bacteria and viruses in samples, as well as to quantify the presence of certain mRNA and proteins. Surface acoustic wave humidity sensors require a thermoelectric cooler in addition to a surface acoustic wave device. The thermoelectric cooler is placed below the surface acoustic wave device. Both are housed in a cavity with an inlet and outlet for gases. By cooling the device, water vapor will tend to condense on the surface of the device, causing a mass-loading. Surface acoustic wave devices are made sensitive to optical wavelengths through the phenomenon known as acoustic charge transport (ACT), which involves the interaction between a surface acoustic wave and photogenerated charge carriers from a photoconducting layer. Ultraviolet radiation sensors use a thin layer of zinc oxide across the delay line. When exposed to ultraviolet radiation, zinc oxide generates charge carriers which interact with the electric fields produced in the piezoelectric substrate by the traveling surface acoustic wave. [ 4 ] This interaction produces measurable decreases in both the velocity and amplitude of the acoustic wave signal. Ferromagnetic materials (such as iron, nickel, and cobalt) change their physical dimensions in the presence of an applied magnetic field, a property called magnetostriction. The Young's modulus of the material is dependent on ambient magnetic field strength. If a film of magnetostrictive material is deposited in the delay line of a surface acoustic wave sensor, the change in length of the deposited film in response to a change in the magnetic field will stress the underlying substrate. The resulting strain (i.e., the deformation of the surface of the substrate) produces measurable changes in the phase velocity, phase-shift, and time-delay of the acoustic wave signal, providing information about the magnetic field. Surface acoustic wave devices can be used to measure changes in viscosity of a liquid placed upon it. As the liquid becomes more viscous the resonant frequency of the device will change in correspondence. A network analyser is needed to view the resonant frequency.
https://en.wikipedia.org/wiki/Surface_acoustic_wave_sensor
Surface activated bonding (SAB) is a wafer bonding technology that does not require high temperatures, relying instead on atomically clean and activated surfaces. Surface activation prior to bonding, typically by fast atom bombardment, is employed to clean the surfaces. High-strength bonding of semiconductors, metals, and dielectrics can be obtained even at room temperature. [ 1 ] [ 2 ] In the standard SAB method, wafer surfaces are activated by argon fast atom bombardment in an ultra-high vacuum (UHV) of 10⁻⁴-10⁻⁷ Pa. The bombardment removes adsorbed contaminants and native oxides from the surfaces. The activated surfaces are atomically clean and reactive enough to form direct bonds between wafers when they are brought into contact, even at room temperature. The SAB method has been studied for bonding a wide variety of materials. The standard SAB method, however, failed to bond some materials, such as SiO2 and polymer films. The modified SAB method was developed to solve this problem by using a sputter-deposited Si intermediate layer to improve the bond strength. The combined SAB method has been developed for SiO2-SiO2 and Cu/SiO2 hybrid bonding, without the use of any intermediate layer.
https://en.wikipedia.org/wiki/Surface_activated_bonding
Surface anatomy (also called superficial anatomy and visual anatomy ) is the study of the external features of the body of an animal. [ 1 ] In birds , this is termed topography . Surface anatomy deals with anatomical features that can be studied by sight, without dissection . As such, it is a branch of gross anatomy , along with endoscopic and radiological anatomy. [ 2 ] Surface anatomy is a descriptive science. [ 3 ] In particular, in the case of human surface anatomy , these are the form and proportions of the human body and the surface landmarks which correspond to deeper structures hidden from view, both in static pose and in motion. In addition, the science of surface anatomy includes the theories and systems of body proportions and related artistic canons. The study of surface anatomy is the basis for depicting the human body in classical art . Some pseudo-sciences such as physiognomy , phrenology and palmistry rely on surface anatomy. Knowledge of the surface anatomy of the thorax (chest) is particularly important because it is one of the areas most frequently subjected to physical examination , like auscultation and percussion . [ 4 ] In cardiology, Erb's point refers to the third intercostal space on the left sternal border where S2 heart sound is best auscultated. [ 5 ] [ 6 ] Some sources include the fourth left interspace. [ 7 ] Human female breasts are located on the chest wall, most frequently between the second and sixth rib . [ 4 ]
https://en.wikipedia.org/wiki/Surface_anatomy
Surface and bulk erosion are two different forms of erosion that describe how a degrading polymer erodes. In surface erosion, the polymer degrades from the exterior surface; the inside of the material does not degrade until all the surrounding material has been degraded. [ 1 ] In bulk erosion, degradation occurs throughout the whole material equally: both the surface and the inside of the material degrade. [ 1 ] Surface erosion and bulk erosion are not exclusive; many materials undergo a combination of the two. [ 2 ] Therefore, surface and bulk erosion can be thought of as a spectrum rather than two separate categories. In surface erosion, the erosion rate is directly proportional to the surface area of the material. For very thin materials, the surface area remains relatively constant as the material degrades, which allows surface erosion to be characterized as zero-order release, since the rate of degradation is constant. [ 2 ] [ 3 ] In bulk erosion, the erosion rate depends on the volume of the material. [ 3 ] Because of degradation, the volume of the material decreases during bulk erosion, causing the erosion rate to decrease over time. Bulk erosion rates are therefore difficult to control, since the process is not zero-order. [ 1 ] To determine whether a polymer will undergo surface or bulk erosion, the degradation rate of the polymer in water (how fast the polymer reacts with water) and the rate of diffusion of water penetrating through the material must be considered. If the degradation process is faster than the diffusion process, surface erosion will occur, since the material's surface will degrade before water has time to diffuse and penetrate through the material. If the diffusion process is faster than the degradation process, bulk erosion will occur, because water penetrates through the material before significant erosion occurs at the surface. [ 4 ] The kinetics of the erosion of a polymer can be modified by changing the diffusion process or the degradation process. For example, blending a polymer with another polymer that is very reactive to water will speed up the degradation process and cause surface erosion. On the other hand, decreasing the dimensions of a material will allow water to travel to the centre of the material more quickly, which speeds up the diffusion process and causes bulk erosion. [ 2 ] [ 4 ] By mathematically modeling the rate of diffusion of water in the material and the rate of degradation of the material, it is possible to predict whether a certain material will undergo surface or bulk erosion by looking at the ratio between the two rates. The characteristic time for diffusion of water is modeled by the equation $t_{\text{diffusion}} = \frac{\langle x \rangle^{2}\pi}{4D}$, [ 4 ] where ⟨x⟩ is the mean length of the material and D is the diffusion coefficient of water inside the polymer. The characteristic time for degradation is modeled by the equation $t_{\text{erosion}} = \frac{\ln\langle x \rangle - \ln\sqrt[3]{M/(N_{\mathrm{A}}(N-1)\rho)}}{k}$, [ 4 ] where M is the molecular weight of the polymer, N_A is the Avogadro constant, N is the degree of polymerization, ρ is the density of the polymer, and k is the degradation rate constant. The ratio of the diffusion time to the degradation time gives a dimensionless parameter ε called the erosion number:
[ 4 ] $\varepsilon = \frac{t_{\text{diffusion}}}{t_{\text{erosion}}} = \frac{\langle x \rangle^{2}\pi k}{4D\left(\ln\langle x \rangle - \ln\sqrt[3]{M/(N_{\mathrm{A}}(N-1)\rho)}\right)}$ If ε ≫ 1, surface erosion occurs; if ε ≪ 1, bulk erosion occurs. [ 4 ] From the model above, it is clear that changing certain parameters can determine what kind of erosion a polymer undergoes, by either increasing or decreasing the rate of the degradation process or of the diffusion process; individual parameters can thus be modified to favor surface erosion or bulk erosion. Since surface erosion is easier to control than bulk erosion, surface erosion is preferred in drug delivery, where the release of the drug must be constant or be controlled by changing the dimensions of the material. [ 3 ] [ 5 ] A zero-order release of a drug is possible with surface erosion if a very thin material is used or if the surface area is kept constant. Surface erosion is also useful for protecting water-soluble drugs until the time of desired drug release, because water will not penetrate through the polymer matrix and reach the drug until all the surrounding polymer has degraded. [ 2 ] However, bulk erosion can be useful in situations that do not require controlled release, such as plastic degradation. [ 3 ]
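The erosion-number calculation above lends itself to a short numerical sketch. All parameter values below are illustrative assumptions (a 1 mm slab with plausible but arbitrary diffusion and degradation constants), not data from the article.

```python
# Sketch of the erosion-number calculation; all parameter values are assumed.
import math

N_A = 6.022e23          # Avogadro constant, 1/mol

def erosion_number(x: float, D: float, k: float, M: float, N: int, rho: float) -> float:
    """epsilon = t_diffusion / t_erosion, using the expressions given above."""
    t_diffusion = (x ** 2) * math.pi / (4.0 * D)
    monomer_length = (M / (N_A * (N - 1) * rho)) ** (1.0 / 3.0)
    t_erosion = (math.log(x) - math.log(monomer_length)) / k
    return t_diffusion / t_erosion

eps = erosion_number(
    x=1e-3,       # mean device dimension, m (1 mm slab)
    D=1e-12,      # water diffusion coefficient in the polymer, m^2/s
    k=1e-7,       # degradation rate constant, 1/s
    M=0.1,        # molecular weight, kg/mol
    N=1000,       # degree of polymerization
    rho=1200.0,   # polymer density, kg/m^3
)

# epsilon >> 1 indicates surface erosion, epsilon << 1 bulk erosion;
# values near 1 indicate mixed behaviour.
print("surface erosion" if eps > 1 else "bulk erosion", f"(epsilon = {eps:.2g})")
```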
https://en.wikipedia.org/wiki/Surface_and_bulk_erosion
In cooking several factors, including materials, techniques, and temperature , can influence the surface chemistry of the chemical reactions and interactions that create food. All of these factors depend on the chemical properties of the surfaces of the materials used. The material properties of cookware, such as hydrophobicity, surface roughness, and conductivity can impact the taste of a dish dramatically. The technique of food preparation alters food in fundamentally different ways, which produce unique textures and flavors. The temperature of food preparation must be considered when choosing the correct ingredients. The interactions between food and pan are very dependent on the material that the pan is made of. Whether or not the pan is hydrophilic or hydrophobic, the heat conductivity and capacity, surface roughness, and more all determine how the food is cooked. Stainless steel is considered stainless because it has at least 11% chromium by mass. Chromium is a relatively inert metal and does not rust or react as easily as plain carbon steel. This is what makes it an exceptional material for cooking. It is also fairly inexpensive, but does not have a very high thermal conductivity. From a surface standpoint, this is because of the thin layer of chromium oxide that is formed on the surface. This thin layer protects the metal from rusting or corroding. While it is protective, the oxide layer is not very conductive, which makes cooking food less efficient than it could be. For most cooking applications, high thermal conductivity is desirable to create an evenly heated surface on which to cook. In this way, stainless steel is usually not considered high-grade cookware. In terms of surface interactions, chromium oxide is polar. The oxygen atoms on the surface have a permanent dipole moment, and are therefore hydrophilic. This means that water will wet it, but oils or other lipids will not. Cast-iron cookware is seasoned with oil. The surface of the cast iron is not very smooth; it has pits and peaks that are not conducive to cooking. Typically, the cookware is seasoned with oil. This process leaves a thin coat of oil in the pits and on top of the peaks on the surface of the pan. This thin coat actually polymerizes, making it durable and lasting. It also prevents the cast iron from rusting, which it is prone to do. The oil that is used in a seasoned pan combines with any liquid that is used in the cooking process and creates a good contact between pan and food. Even though the cast iron itself is a poor heat conductor, the oil makes the pan effective when it is at a high temperature. The other effect that the seasoning oil has is to make the surface of a cast-iron pan hydrophobic. This makes the pan non-stick during cooking, since the food will combine with the oil and not the pan. It also makes the pan easier to clean, but eventually the polymerized oil layer which seasons it comes off and it needs to be re-seasoned. [ 1 ] Ceramic cookware (as in pans, not baking dishes) is not made of a solid ceramic, but rather is a metal pan, typically aluminum, with a nano-particle ceramic coating. This makes the surface rough on a small-scale and causes solutions to bead up more and not stick to the surface. The downside is that the increased surface area means less surface contact with the food that is to be cooked and therefore has less heat transfer. Unfortunately, since the surface is fine, it can be scratched off over time, and the benefit from having it in the first place is lost. 
[ 2 ] Polytetrafluoroethylene (commonly called by its DuPont brand name, Teflon) is a polymer that is used as a coating for non-stick cookware. The polymer is a polyethylene chain with fluorine atoms replacing the hydrogen atoms. The strength of the carbon-fluorine bonds makes it nonreactive to most things. Furthermore fluorine bonded to carbon tends to not form hydrogen bonds, [ 3 ] and this along with the overall relatively weak London dispersion forces present result in Teflon poorly sticking to other substances. Teflon has the third lowest coefficient of friction of any known solid. [ citation needed ] It is also relatively cheap and very common. The downsides of Teflon include the fact that it can be scratched off and get into food during the cooking process. Another problem is that Teflon begins to break down at around 350 °C and can give off poisonous fluorocarbon gasses. The final problem is that the bonding of Teflon to the pan uses a surfactant called perfluorooctanoic acid (PFOA), which can also break down at high temperatures and poison food. Modern pans no longer use PFOA as it's banned in the European Union. Silicone is a heat-resistant rubber that is inert and non-toxic. They are polymers that typically have a silicon–oxygen backbone with methyl ligands. The fairly inert methyl groups are not very reactive, giving the silicone a fairly low coefficient of friction. Just like Teflon, this makes them non-stick and easy to clean. They are also resistant to very high temperatures due to the strong bonds between all of the atoms. This means that they can be baked or used around hot oils. Silicone has a very specific use as cookware, since it is not rigid. Most silicon used in cooking is in the form of spatulas or molds, and as such serve a different purpose than the previously discussed materials. Cooking techniques can be broken down into two major categories: Oil based and water based cooking techniques. Both oil and water based techniques rely on the vaporization of water to cook the food. Oil based cooking techniques have significant surface interactions that greatly affect the quality of the food they produce. These interactions stem from the polar oil molecules interacting with the surface of the food. Water based techniques have far less surface interactions that affect the quality of the food. Pan frying is an oil based cooking technique which is typically used to sear larger cuts of meat or to fully cook thinner cuts. This technique uses a thin layer of heated oil to coat the pan. The oil layer is the method of heat transfer between the burner and the food. Water vapor is a critical component of how pan frying works. Raw meat products contain up to 73% water. [ 4 ] The meat is cooked by the evaporation of this water. When the water is vaporized it leaves the meat through pores in the surface of the meat. Another source of water vapor is the Maillard Reaction . This reaction is responsible for why meat, and many other food products, turn brown when cooked. This reaction only occurs at high temperatures. Water vapor is a byproduct of the Maillard reaction. In pan frying, the water that exits the meat forms a barrier between the meat and the oil or the surface of the pan. This barrier is critical for the success of pan frying meat. When meat cooks, the proteins on the surface of the meat denature because of the heat. This means that many of the secondary bonds that give the proteins their shape are broken. 
The protein molecules tend to reform those interactions to return to their most thermodynamically stable state. Two opportune locations for the surface proteins to bind are the oil and the surface of the pan. Meat sticking to the bottom of the pan is caused by proteins on the surface of the meat binding with molecules on the surface of the pan. The denatured protein can also bind with the oil in the pan, which is undesirable for many health and flavor reasons. Since the proteins, the oil molecules, and in some cases the surface of the pan all have a significant polarity, the force of their interactions can be high. The force of the interaction between the protein and the oil or pan surface can be modeled by the Coulomb force equation, $F = \frac{Q_{1}Q_{2}}{4\pi\varepsilon_{0}\varepsilon_{\mathrm{r}}D^{2}}$, where Q₁ and Q₂ represent the charges in coulombs on the two objects, D represents the separation between the two objects in meters, ε₀ represents the vacuum permittivity constant, which is 8.85... × 10⁻¹² farads per meter, and εᵣ represents the (dimensionless) relative permittivity of the surrounding material. The value of each individual interaction may be small, but when there are millions of interactions the overall force can be noticeable. The presence of water reduces the force of these attractions in three ways. The water puts physical distance between the oil or pan and the proteins on the surface of the meat, increasing the D² value in the equation. Water also has a high relative permittivity (εᵣ). Both of these increase the value of the denominator and reduce the force that is possible. Water is also a polar molecule, which means that in certain cases it can bind to the denatured proteins; water binding to the proteins on the surface of the meat has no effect on how the meat cooks. Deep frying is another oil-based cooking technique that is similar to pan frying. However, in deep frying, the entire piece of food is submerged in the oil, so there should be no interaction between the food and the container holding the oil; all the interactions are between the food and the oil. Oftentimes the food is covered in a liquid batter before it is deep fried. This eliminates the interactions between the denatured proteins on the meat and the oil. In deep frying, the interactions are primarily at the interface of the batter and the oil. For proper deep frying the oil temperature should exceed 163 °C. [ 6 ] When the batter, which is typically water based, comes into contact with the high-temperature oil, the water in it is instantly vaporized. This vaporization dehydrates the batter and causes the crispiness associated with deep-fried foods. Similar to pan frying, the water vapor leaving the batter creates a boundary layer between the oil and the food. Because of the large surface area of the food that is in contact with the oil and the limited water stored in the batter, this boundary layer does not last nearly as long as in pan frying. The boundary layer of water vapor again serves the purpose of preventing interactions between the oil and the surface of the food. Even as the boundary layer of water breaks down, the initial interactions between the oil and the batter will be minimal. The oil will move into the vacancies left in the batter by the vaporization of the water.
At this point there is very little bonding between the fatty acids in the oil and the non-polar hydrocarbons that make up the majority of the batter. However, the polar portion of the triglyceride molecule does begin to induce dipoles in the hydrocarbon chains that make up the batter. Under prolonged heat, the triglycerides in the oil begin to break down: the glycerol molecules and the fatty acid chains begin to separate, and the oil becomes more polar. As the oil becomes more polar, the Van der Waals interactions between the glycerol and the hydrocarbons increase; the dipole on the glycerol induces a dipole in the hydrocarbon chain. The strength of the dipole and induced-dipole interactions can be modeled by adding the Debye, Keesom and London contributions. Debye: $E_{\text{Debye}} = -\frac{M_{1}^{2}\alpha_{2} + M_{2}^{2}\alpha_{1}}{(4\pi\varepsilon_{0}\varepsilon_{\mathrm{r}})^{2}r^{6}}$ [ 5 ] Keesom: $E_{\text{Keesom}} = -\frac{M_{1}^{2}M_{2}^{2}}{3(4\pi\varepsilon_{0}\varepsilon_{\mathrm{r}})^{2}k_{\mathrm{b}}Tr^{6}}$ [ 5 ] London: $E_{\text{London}} = -\frac{3}{2}\,\frac{\alpha_{1}\alpha_{2}}{(4\pi\varepsilon_{0}\varepsilon_{\mathrm{r}})^{2}r^{6}}\,\frac{h\nu_{1}h\nu_{2}}{h\nu_{1}+h\nu_{2}}$ [ 5 ] where M₁ and M₂ are the dipole moments in coulomb-meters, α₁ and α₂ are the polarizabilities in units of C·m²·V⁻¹, ε₀ represents the vacuum permittivity constant, which is 8.85... × 10⁻¹² farads per meter, εᵣ represents the (dimensionless) relative permittivity of the surrounding material, kᵦ is the Boltzmann constant, T is the temperature in kelvin, r is the distance between the molecules, and the hν terms refer to the ionization energies of the molecules. When the water boundary layer is present, the value of εᵣ is very large. The presence of water also increases the value of r; since r is raised to the sixth power, any increase is significantly magnified. Both of these effects significantly reduce the interactions between the oil and the batter. As the oil breaks down, its polarizability increases. This significantly increases the strength of the London and Debye interactions, and thus of their sum. As the strength of the interactions increases, the amount of oil that cannot be removed from the food increases. This leads to greasy, oily and unhealthy food. There are many cooking techniques that do not use oil as part of the process, such as steaming or boiling. Water-based techniques are typically used to cook vegetables or other plants which can be consumed as food. When no oil is present, the method of heat transfer to the food is typically water vapor. Water vapor molecules do not have any significant surface interactions with the food surface. Since food, including vegetables, is cooked by the vaporization of water within the food, the use of water vapor as the mode of heat transfer has no effect on the chemical interactions at the surface of the food. Understanding the role of temperature in cooking is an essential part of creating fine cuisine. Temperature plays a vital role in nearly every meal's preparation. Many aspects of cooking rely on the proper treatment of colloids. Things such as sauces, soups, custards, and butters are all created by either creating or destroying a colloid. Heat plays a vital role in the life of a colloid, as the balance between thermal excitation and molecular interaction can tip the scale in favor of suspension or of coagulation and, eventually, coalescence.
In some cases, like sauces that contain cheeses, heating the sauce to too high a temperature will cause clumping, ruining the sauce. The smoke point of an oil is defined as the temperature at which light blue smoke rises from its surface. The smoke, which contains acrolein, is an eye irritant and asphyxiant. The smoke point of oils varies widely. Depending on origin, refinement, age, and the growing conditions of the source, the smoke point of any given type of oil can drop by nearly 20 °C. For example, the smoke point of olive oil can vary from being suitable for high-temperature frying to being only safely used for stir frying. As a cooking oil is refined, its smoke point increases, because many of the impurities found in natural oils aid in their breakdown. In general, the lighter the oil, the higher its smoke point. It is important to choose the appropriate oil for each cooking technique and temperature, as cooking oils degrade rapidly when heated above their smoke point. It is recommended that oils heated beyond their smoke point not be consumed, as the chemicals created are suspected carcinogens.
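The electrostatic screening described in the pan-frying section above can be illustrated with a rough calculation. The charges, separations and permittivities below are assumptions chosen only to show the scaling of the Coulomb expression; they are not measurements from the article.

```python
# Illustration of how a water layer weakens protein-pan/oil attraction:
# it increases the separation D and raises the relative permittivity,
# both of which enlarge the denominator of F = Q1*Q2 / (4*pi*eps0*eps_r*D^2).
# All numbers are illustrative assumptions.
import math

EPS0 = 8.854e-12                       # vacuum permittivity, F/m

def coulomb_force(q1: float, q2: float, distance_m: float, eps_r: float) -> float:
    return q1 * q2 / (4.0 * math.pi * EPS0 * eps_r * distance_m ** 2)

q = 1.6e-19                            # one elementary charge per binding site (assumed)

dry = coulomb_force(q, q, 1e-9, eps_r=1.0)     # near contact, ~1 nm, no screening
wet = coulomb_force(q, q, 5e-9, eps_r=80.0)    # water layer: farther apart, screened

print(f"attraction reduced by a factor of ~{dry / wet:.0f}")   # ~2000x
```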
https://en.wikipedia.org/wiki/Surface_chemistry_of_cooking
As with any material implanted in the body, it is important to minimize or eliminate the foreign body response and maximize effective integration. Neural implants have the potential to increase the quality of life for patients with disabilities such as Alzheimer's disease, Parkinson's disease, epilepsy, depression, and migraines. Because of the complexity of the interface between a neural implant and brain tissue, adverse reactions such as fibrous tissue encapsulation occur and hinder the implant's functionality. Surface modifications to these implants can help improve the tissue-implant interface, increasing the lifetime and effectiveness of the implant. Intracranial electrodes consist of conductive electrode arrays fabricated on a polymer or silicon substrate, or of a wire electrode with an exposed tip and insulation everywhere that stimulation or recording is not desired. Biocompatibility is essential for the entire implant, but special attention is paid to the actual electrodes, since they are the site producing the desired function. One main physiological issue that current long-term implanted electrodes suffer from is fibrous glial encapsulation after implantation. This encapsulation is due to the poor biocompatibility and biostability (integration at the hard electrode and soft tissue interface) of many neural electrodes being used today. The encapsulation reduces signal intensity because of the increased electrical impedance and decreased charge transfer between the electrode and the tissue, and thus decreases efficiency, performance, and durability. Electrical impedance is the opposition to current flow with an applied voltage, usually represented as Z in units of ohms (Ω). The impedance of an electrode is especially important, as it is directly related to its effectiveness: a high impedance causes poor charge transfer and thus poor electrode performance for either stimulating or recording the neural tissue. Electrode impedance is related to the surface area at the interface between the electrode and the tissue. At electrode sites, the total impedance is controlled by the double-layer capacitance. [ 1 ] The capacitance value is directly related to the surface area; increasing the surface area at the electrode-tissue interface increases the capacitance and thus decreases the impedance. The equation $Z = R + \frac{1}{i\omega C}$ describes this inverse relationship between capacitance and impedance, where i is the imaginary unit, ω is the angular frequency of the current, C is the capacitance, and R is the resistance. A desirable electrode has a low impedance, meaning a higher surface area. One method to increase this area is to coat the electrode surface with a variety of materials. Many new materials and techniques are being researched to improve the behavior of neural electrodes. [ 2 ] [ 3 ] Currently, research is being done to increase the biocompatibility and integration of electrodes in neural tissue; this research is discussed in more detail below. The surface chemistry of implantable electrodes is more of a design concern for chronically implanted electrodes than for those with only acute implantation times. For acute implantations, the main concerns are laceration damage and degradation of particles left behind after electrode removal. For chronically implanted electrodes, the cellular response and tissue encapsulation of the foreign body, regardless of degradation, even for materials that are highly biocompatible, are the primary concerns.
Degradation, however, is still undesirable, because particles can be toxic to tissue, can spread throughout the body, and can even trigger an allergic response. Surface chemistry is an area of science applicable to biological implants. Bulk material properties are important when considering applications; however, it is the material's surface (the top several layers of molecules) that determines the biological response and is therefore the key to implant success. [ 4 ] Implants within the central nervous system are unique in the manner of their cellular response; there is little room for error. Prostheses in these areas are typically electrodes or electrode arrays. Electrodes, especially stimulating electrodes with the high current density they discharge, can raise electrochemical issues. Electrodes will be surrounded by tissue and electrolytes; stimulation, the resultant electric fields, and induced polarizations will change local ion concentrations and local pH, which can then cause problems such as material corrosion and electrode fouling. [ 5 ] Pourbaix diagrams show the phases that a material will take in an aqueous environment, based on electrical potential and pH. The brain maintains a pH of around 7.2 to 7.4, and from the Pourbaix diagram of platinum [ 5 ] it can be seen that at around 0.8 volts, Pt at the surface will oxidize to PtO2, and at around 1.6 volts, PtO2 will oxidize to PtO3. These voltages are not outside the reasonable range for neural stimulation. The voltage required for stimulation may change significantly over the life of a single electrode. This change is required to maintain a consistent current output through variations in the resistance of the surrounding environment. The changes in resistance may be due to adsorption of material onto the electrode, corrosion of the electrode, encapsulation of the electrode in fibrous tissue (known as a glial scar), or changes in the chemical environment around the electrode. Ohm's law, V = I · R, shows the interdependence of voltage, current and resistance. When a voltage change causes a crossing of equilibrium lines, as seen in a Pourbaix diagram, during a stimulation, the changing polarization of the electrode is no longer linear. [ 5 ] Undesirable polarization can lead to adverse effects such as corrosion, fouling, and toxicity. Because of this, the equilibrium potential, pH, and required current density should be considered when making material choices, since these can affect the surface chemistry and biocompatibility of the implant. [ 5 ] Corrosion is a major issue with neural electrodes. Corrosion can occur because electrode metals are placed in an electrolytic solution, where the presence of current can either increase the rate of corrosion mechanisms or overcome limiting activation energies. Redox reactions are a mechanism of corrosion that can lead to dissolution of ions from the electrode surface. There is a base level of metal ions in tissue; however, when these levels increase beyond threshold values, the ions become toxic and can cause severe health problems. [ 6 ] In addition, the fidelity of the electrode system can be compromised. Knowing the impedance of an electrode is important whether the electrode is used for stimulation or recording. When degradation of the electrode surface occurs because of corrosion, the surface area increases along with its roughness. Calculating a new electrode impedance to compensate for the change in surface area once the electrode is implanted is not easy.
This computational flaw can skew data from recording or pose a dangerous obstacle to safe stimulation. Electrode fouling is a major hindrance to the performance of electrodes. Few materials are completely bioinert, that is, trigger no bodily reaction, and some materials that may be bioinert in theory fail to be ideal in practice because of defects in their formation, processing, manufacturing or sterilization. Fouling can be caused by adsorption of proteins, fibrous tissue, trapped cells or dead cell fragments, bacteria, or any other reactive particles. Protein adsorption is influenced by the nature and geometry of surface domains, including hydrophobicity, polar and ionic interactions of the material and surrounding particles, charge distribution, kinetic movement, and pH. [ 5 ] Phagocytosis of bacteria and other particles is mainly affected by surface charge, hydrophobicity, and the chemical composition of the implant. It is important to note that the initial environment the implant is subjected to after implantation is different from the environment after some time has passed, since the area will undergo wound repair; the body's natural healing of the trauma will cause changes in local pH, electrolyte concentrations, and the presence and activity of biological compounds. For many reasons, known and unknown, protein adsorption varies from material to material. Two of the biggest determining factors that have been observed are surface roughness and surface free energy. [ 7 ] In the case of exposed electrodes, it is desirable to have the adsorbed protein layer as thin as possible to increase sensitivity and performance. Noble metals are an obvious choice for achieving biocompatibility; however, when acting as electrodes, some of these noble metals will actually participate in the reaction, deteriorate, and trigger adverse effects via lost particles. The noble metals most commonly used are gold (Au), platinum (Pt), and iridium (Ir). The properties of titanium were also investigated in the study that produced these data, [ 5 ] but they are not listed here, since titanium's poor conductive properties make it unsuitable for neural implants. Insight into titanium's surface chemistry may nonetheless give direction to future research. Titanium has the roughest and most hydrophilic surface of any metal described thus far. Titanium adsorbed the thickest protein layer after the first day and still after the seventh day, but actually had its thickness reduced by the 28th day. The protein layers on gold, platinum and iridium all continued to grow up until the 28th day, but the rates slowed over time. [ 7 ] Two more conductive materials that have notable characteristics are tungsten and indium tin oxide. Tungsten is electrically conductive and can be manufactured down to a very fine point, and for this reason has been used in intraspinal microstimulation (ISMS) for mapping spinal cords during terminal surgeries. Tungsten electrodes, however, can corrode and form tungstic ions in the presence of H2O2 and/or O2. Tungstic acid has been seen to be highly toxic to cat motor neurons, [ 9 ] and for this reason tungsten is not currently a suitable material for chronic implants. Indium tin oxide (ITO) has been used as an electrode material for in vitro studies.
ITO electrodes are very precise when stimulating and recording and, when placed amongst plasma proteins, develop and maintain the thinnest protein layer compared to the other materials mentioned so far. [ 7 ] It may have potential for acute in vivo usage, but over time it has been observed to shed particles, producing highly toxic effects. [ 10 ] A variety of mechanical adaptations, such as tip geometry and surface roughness, to aid in neural implant design have been investigated and implemented in recent years. The geometry of an electrode affects the shape of the electric field emitted. The electric field shape, in turn, affects the current density produced by the electrode. Determining optimum surface roughness for neural implants proves to be a challenging topic. Smooth surfaces may be preferable to rougher ones as they may decrease the likelihood of bacterial adsorption and infection. Smooth surfaces would also make it more difficult for a corrosion cell to initiate. However, creating a rougher, porous surface may prove beneficial for at least two reasons: decreased potential polarization at the electrode surface as a result of increased surface area and decreased current density, and a decrease in fibrous tissue encapsulation thickness due to the opportunity for tissue ingrowth. It has been determined that if interconnected pores with sizes between 25 and 150 micrometers are present, ingrowth of tissue can occur and can decrease the exterior tissue encapsulation thickness by a factor of approximately 10 (compared to a smooth electrode such as polished platinum-iridium). [ 5 ] Different material coatings for neural electrode surfaces are being researched to improve the long-term integration of electrodes in the neural tissue by improving biocompatibility, mechanical properties, and the charge transport between the electrode and the living tissue. The electrode functionality can be increased by adding a surface modification on the electrode of a conducting porous polymer with the incorporation of cell adhesion peptides, proteins, and anti-inflammatory drugs. [ 11 ] [ 2 ] [ 12 ] Polymers, especially conductive ones, have been widely researched to coat electrode surfaces. Conductive polymers are organic materials that have properties similar to metals and semiconductors in their ability to conduct electricity, along with attractive optical properties. [ 11 ] These materials have rough surfaces, resulting in large surface area and charge density. Conducting polymer coatings have been shown to improve the performance and stability of neural electrodes. Conductive polymers have been shown to lower the impedance of electrodes (an important property as mentioned above), increase the charge density, and improve the mechanical interface between the soft tissue and hard electrode. The porous (rough) structure of many conductive polymer coatings on the electrode increases the surface area. [ 11 ] The high surface area of conductive polymers is directly related to decreased impedance and improved charge transfer at the tissue-electrode interface. This improved charge transfer allows for better recording and stimulating in neural applications. Table 2 below shows some common impedance and charge density values of different electrodes at a frequency of 1 kHz, which is characteristic of neural biological activity.
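A minimal back-of-the-envelope sketch of the surface-area argument above: for a fixed stimulation current, the average current density falls in proportion to the increase in effective (porous or roughened) surface area. The electrode dimensions and roughness factor below are hypothetical and only illustrate the scaling.

```python
# Minimal sketch: average current density j = I / A for a smooth vs. porous electrode.
# Electrode dimensions and roughness factor are illustrative assumptions.
import math

stim_current_a = 50e-6               # hypothetical 50 uA pulse amplitude
site_diameter_m = 100e-6             # hypothetical 100 um diameter electrode site
geometric_area_m2 = math.pi * (site_diameter_m / 2) ** 2

for label, roughness_factor in [("smooth (polished)", 1.0), ("porous coating", 10.0)]:
    effective_area_m2 = geometric_area_m2 * roughness_factor
    j = stim_current_a / effective_area_m2            # A/m^2
    print(f"{label:18s}: effective area = {effective_area_m2*1e12:8.0f} um^2, "
          f"current density = {j/1e4:.2f} A/cm^2")
```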
The porous, high surface area of the conductive polymer coatings allows for target cell adhesion (increased cell and tissue integration), which increases the biocompatibility and stability of the device. Conducting polymer coatings, as mentioned above, can greatly improve the interface between the soft tissue in the body and the hard electrode surface. Polymers are softer, which reduces the inflammation from strain mismatch between tissue and electrode surface. The reduced inflammatory reaction decreases the thickness of the glial encapsulation that causes signal degeneration. The elastic modulus of silicon (a common material that electrodes are made from) is around 100 GPa, while that of the tissue in the brain is about 100 kPa. [ 17 ] The electrode modulus (stiffness) is therefore roughly six orders of magnitude greater than that of the tissue in the brain. For the best device integration in the body, it is important to get the stiffness of the two to be as similar as possible. To improve this interface, a conductive polymer coating (with a smaller modulus than the electrode) can be applied to the electrode surface, which creates a gradient of mechanical properties acting as a mediator between the hard and soft surfaces. The added polymer coating reduces the stiffness of the electrode surface and allows for better integration of the electrode. The figure to the right shows a graph of how the modulus changes when integrating the polymer coating onto the electrode. Another benefit of using conductive polymers as a coating for neural devices is the ease of synthesis and flexibility in processing. [ 11 ] Conducting polymers can be directly "deposited onto electrode surfaces with precisely controlled morphologies". [ 17 ] There are currently two ways conducting polymers can be deposited onto electrode surfaces: chemical polymerization and electrochemical polymerization. In the application for neural implants, electrochemical polymerization is used because of its ability to create thin films and the ease of synthesis. Films can be formed on the order of 20 nm. [ 17 ] Electrochemical polymerization (electrochemical deposition) is performed using a three-electrode configuration in a solution of the monomer of the desired polymer, a solvent, and an electrolyte (dopant). In the case of depositing a polymer coating onto an electrode, a common dopant is poly(styrene sulfonate), or PSS, because of its stability and biocompatibility. [ 17 ] Two common conductive polymers being investigated for coatings use PSS as a dopant to be electrochemically deposited onto the electrode surface (see sections below). One conducting polymer coating that has shown promising results for improving the performance of neural electrodes is polypyrrole (PPy). Polypyrrole has great biocompatibility and conductive properties, which makes it a good option for use in neural electrodes. PPy has been shown to have a good interaction with biological tissues. [ 18 ] This is due to the boundary it creates between the hard electrode and the soft tissue. PPy has been shown to support cell adhesion and growth of a number of different cell types, including primary neurons, which is important in neural implants. [ 15 ] PPy also decreases the impedance of the electrode system by increasing the roughness of the surface. The roughness of the electrode surface is directly related to an increased surface area (an increased neuron interface with the electrode), which increases the signal conduction.
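One common way to picture why a lower electrode impedance helps recording (an illustration added here, not a model taken from this article) is as a voltage divider between the electrode impedance and the amplifier's input impedance: the larger the electrode impedance at 1 kHz, the more of the neural signal is lost before the amplifier. The impedance magnitudes below are hypothetical but of the same order as those reported for bare and polymer-coated electrodes in the studies discussed in this article.

```python
# Minimal sketch: fraction of neural signal surviving the electrode/amplifier voltage divider.
# All impedance magnitudes are illustrative assumptions (order-of-magnitude only).

def recorded_fraction(z_electrode_ohm: float, z_amplifier_ohm: float) -> float:
    """Simple resistive voltage-divider approximation at a single frequency (e.g. 1 kHz)."""
    return z_amplifier_ohm / (z_amplifier_ohm + z_electrode_ohm)

z_amp = 10e6   # hypothetical 10 MOhm amplifier input impedance
for label, z_el in [("bare metal electrode", 500e3), ("polymer-coated electrode", 10e3)]:
    frac = recorded_fraction(z_el, z_amp)
    print(f"{label:26s}: {frac*100:.1f}% of the signal reaches the amplifier")
```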
In one paper, polypyrrole (PPy) was doped with polystyrene sulfonate (PSS) to electrochemically deposit a polypyrrole coating on the electrode surface. The film was coated onto the electrode at different thicknesses, increasing the roughness. The increased roughness (increased effective surface area) leads to a decreased overall electrode impedance, from about 400 kΩ (bare stent) to less than 10 kΩ (PPy/PSS coating) at 1 kHz. [ 15 ] This decrease in impedance leads to improved charge transfer from the electrode to the tissue and an overall more effective electrode for recording and stimulating applications. Poly(3,4-ethylenedioxythiophene) (PEDOT) is another conducting polymer that is being investigated for coating an electrode surface. [ 19 ] Some benefits of PEDOT over PPy are that it is more stable against oxidation and more conductive; however, PPy is much cheaper. As with PPy, PEDOT has been shown to decrease the electrical impedance. In one article, a PEDOT coating was electrochemically deposited onto gold recording electrodes. [ 20 ] The results showed that the impedance of the electrode decreased significantly when the PEDOT coating was added. The unmodified gold electrodes had an impedance of 500–1000 kΩ, while the modified gold electrode with the PEDOT coating had an impedance of 3–6 kΩ. [ 16 ] The paper also showed that the interaction between the polymer and neurons improved the stability and durability of the electrode. The study concluded that by adding a conductive polymer the impedance of the electrode system decreased, which increased the charge transfer, making a more effective electrode. The ease and control of electrochemically depositing conducting coatings onto electrode surfaces makes this a very attractive surface modification for neural electrodes. Seeding implants with cells such as neural progenitor cells (NPCs) improves the brain-implant interface. NPCs are progenitor cells that have the ability to differentiate into neurons or other cells found in the brain. Coating the implant with NPCs can reduce the foreign body reaction and improve biocompatibility. To attach the NPCs, prior surface modification of the implant is required; these modifications can be done via the immobilization of laminin (an extracellular matrix derived protein) on an implant material such as silicon. To verify the success of surface immobilization, Fourier transform infrared spectroscopy (FTIR) and an analysis of hydrophobicity can be used. Fourier transform infrared spectroscopy can be used to characterize the chemical composition of the surface, or a contact angle goniometer can be used to determine the contact angle of water and hence the hydrophobicity. A higher contact angle indicates higher hydrophobicity, showing successful modification of the surface via the laminin protein. The laminin-immobilized surface promotes the attachment and growth of the NPCs and also allows for their differentiation, thereby reducing the glial response and foreign body response to the implant. [ 21 ] Using nerve growth factor (NGF) as a neurotrophic co-dopant could induce ideal cell responses in vivo . NGF is a water-soluble protein that promotes the survival and differentiation of neurons. The addition of NGF into polymeric films can induce biological interactions without compromising the conductive properties or the morphology of the polymeric film. Various polymers such as PPy and PEDOT, as well as collagen, can be used as electrode coatings.
Extended neurites for both the PPy and PEDOT films show that the NGF is biologically active. [ 21 ] Dexamethasone (DEX) is a glucocorticoid that is used as an anti-inflammatory and immunosuppressive agent. PLGA nanoparticles loaded with DEX via an oil-in-water emulsion/solvent evaporation method can be embedded in alginate hydrogel matrices. To quantify the amount of DEX that was successfully loaded into the nanoparticles, UV spectrophotometry can be used. It has been shown that the amount of DEX that can be successfully loaded into the nanoparticles is ≈13 wt% and the typical particle size ranges from 400 to 600 nm. In vitro tests have revealed that nanoparticle-loaded hydrogel-coated electrodes have an impedance similar to that of the non-coated electrode (bare gold). This shows that the nanoparticle-loaded hydrogel coating does not significantly hinder the electrical transport. In vivo tests have shown that the impedance amplitude of the DEX-loaded electrodes was maintained at the same level it was at initially, whereas non-coated electrodes had, after 2 weeks, an impedance about 3 times greater than their original impedance. This addition of anti-inflammatory drugs via nanoparticles indicates that this form of surface modification does not have a negative effect on the electrode's performance. [ 17 ] Hydrogel modifications, as with other coatings, are designed to improve the body's response to the implant and thereby improve their consistency and long-term performance. Hydrogel surface modifications achieve this by significantly altering the hydrophilicity of the neural implant surface to one that is less favorable for protein adsorption . [ 22 ] In general, protein adsorption increases with increasing hydrophobicity as a result of the decreased Gibbs energy of the energetically favorable adsorption reaction, ΔG ads = ΔH ads − TΔS ads . [ 4 ] Water molecules are bonded to both the proteins and to the surface of the implant; as the protein binds to the implant, water molecules are liberated, resulting in an entropy gain and decreasing the overall energy of the system. [ 23 ] For hydrophilic surfaces, this reaction is energetically unfavorable due to the strong attachment of water to the surface, hence the decreased protein adsorption. The decrease in protein adsorption is beneficial for the implant as it limits the body's ability both to recognize the implant as a foreign material and to attach potentially deleterious cells such as astrocytes and fibroblasts that can create fibrous glial scars around the implant and hinder stimulating and recording processes. Increasing the hydrophilicity can also enhance the electrical signal transfer by creating a stable ionic conductance layer. However, increasing the water content of the hydrogel too much can cause swelling and eventually mechanical instability. [ 22 ] An appropriate water balance must be found to optimize the efficacy of the implant coating. Significant nonspecific protein adsorption during implantation can cause adverse effects. However, some proteins can be beneficial in stabilizing the implant by reducing micro-motion and implant migration, as well as improving the signal quality through increased neuron connection, thereby improving the long-term performance. Instead of relying on the native cells to secrete these proteins, they can be added to the surface of the material prior to implantation. The surface modification of biomaterials with proteins has been done with great success in various regions of the body.
However, since the anatomy of the brain is different from that of the rest of the body, the types of proteins that must be used in these applications vary from those used elsewhere. Proteins like laminin, which promotes neuronal outgrowth, and L1, which promotes axonal outgrowth, have shown great promise in surface modification applications; [ 24 ] L1 more so than laminin because of its decreased attachment of astrocytes – the cells responsible for glial scar formation. [ 25 ] Proteins are typically added to the material surface via self-assembled monolayer (SAM) formation.
https://en.wikipedia.org/wiki/Surface_chemistry_of_neural_implants
The surface chemistry of paper is responsible for many important paper properties, such as gloss, waterproofing, and printability. Many components used in the paper-making process affect the surface. Coating components are subject to particle-particle, particle-solvent, and particle-polymer interactions. [ 1 ] Van der Waals forces, electrostatic repulsions, and steric stabilization are the reasons for these interactions. [ 2 ] Importantly, the characteristics of adhesion and cohesion between the components form the base coating structure. Calcium carbonate and kaolin are commonly used pigments . [ 1 ] [ 2 ] Pigments support a structure of fine porosity and form a light scattering surface. The surface charge of the pigment plays an important role in dispersion consistency. The surface charge of calcium carbonate is negative and not dependent on pH ; however, it can decompose under acidic conditions. [ 3 ] Kaolin has negatively charged faces, while the charge of its lateral edges depends on pH, being positive in acidic conditions and negative in basic conditions, with an isoelectric point at 7.5. [ 1 ] The equation for determining the isoelectric point is as follows: In the papermaking process, the pigment dispersions are generally kept at a pH above 8.0. [ 1 ] Binders promote the binding of pigment particles between themselves and to the coating layer of the paper. [ 2 ] Binders are spherical particles less than 1 µm in diameter. Common binders are styrene maleic anhydride copolymer or styrene-acrylate copolymer. [ 1 ] The surface chemical composition is differentiated by the adsorption of acrylic acid or an anionic surfactant , both of which are used for stabilization of the dispersion in water. [ 4 ] Co-binders , or thickeners, are generally water-soluble polymers that influence the paper's color viscosity, water retention, sizing , and gloss. Some common examples are carboxymethyl cellulose (CMC), cationic and anionic hydroxyethyl cellulose (EHEC), modified starch , and dextrin . In sizing , the strength and printability of paper are increased. Sizing also improves the hydrophilic character, liquid spreading, and affinity for ink. Starch is the most common sizing agent. Cationic starch and hydrophobic agents are also applied, including alkenyl succinic anhydride (ASA) and alkyl ketene dimers (AKD). [ 5 ] Cationic starch increases strength because it binds to the anionic paper fibers. [ 6 ] The amount added is usually between ten and thirty pounds per ton. When starch exceeds the amount the fibers can bind, it causes foaming in the production process as well as decreased retention and drainage. [ 6 ] Surface modification makes paper hydrophobic and oleophilic. [ 7 ] This combination allows ink oil to penetrate the paper but prevents dampening water absorption , which increases the paper's printability. Three different plasma-solid interactions are used: etching/ablation, plasma activation , and plasma coating . [ 7 ] Etching or ablation is when material is removed from the surface of the solid. Plasma activation is where species in the plasma, such as ions, electrons, or radicals, are used to chemically or physically modify the surface. Lastly, plasma coating is where material is coated onto the surface in the form of a thin film. Plasma coating can be used to add hydrocarbons to surfaces, which can make a surface non-polar or hydrophobic. The specific type of plasma coating used to add hydrocarbons is called the plasma enhanced chemical vapor deposition process, or PCVD.
[ 7 ] An ideal hydrophobic surface would have a contact angle of 180 degrees to water. This would mean that the hydrocarbons lie flat against the surface, creating a thin layer and preventing dampening water absorption. However, in practice a low level of dampening water absorption is acceptable or even preferred, because of a phenomenon that occurs when water settles at the surface of paper: ink is unable to transfer to the paper because of the water layer at the surface. [ 7 ] The contact angle for hydrocarbons on a rough pigment-coated paper is found to be approximately 110° with a contact angle meter. The Young equation can be used to relate the contact angle to the surface energies of the liquid and the paper surface. Young's equation is γ SG = γ SL + γ LG cos θ, where γ SL is the interfacial tension between the solid and the liquid, γ LG is the interfacial tension between the liquid and the vapor, γ SG is the interfacial tension between the solid and the vapor, and θ is the contact angle. An ideal oleophilic surface would have a contact angle of 0° with oil, therefore allowing the ink to transfer to the paper and be absorbed. The hydrocarbon plasma coating provides an oleophilic surface to the paper by lowering the contact angle of the paper with the oil in the ink. The hydrocarbon plasma coating increases the non-polar interactions while decreasing polar interactions, which allows paper to absorb ink while preventing dampening water absorption. [ 7 ] Printing quality is highly influenced by the various treatments and methods used in creating paper and enhancing the paper surface. Consumers are most concerned with the paper-ink interactions, which vary for certain types of paper due to different chemical properties of the surface. [ 8 ] Inkjet paper is the most commercially used type of paper. Filter paper is another key type of paper whose surface chemistry affects its various forms and uses. The ability of adhesives to bond to a paper surface is also affected by the surface chemistry. Co-styrene-maleic anhydride and co-styrene acrylate are common binders associated with a cationic starch pigment in inkjet printing paper. [ 8 ] Table 1 shows their surface tension under given conditions. Several studies have focused on how the paper printing quality depends on the concentration of these binders and of the ink pigment. Data from the experiments are congruent and stated in Table 2 as the corrected contact angle of water, [ 9 ] the corrected contact angle of black ink, [ 8 ] and the total surface energy. [ 10 ] The contact angle measurement has proven to be a very useful tool to evaluate the influence of the sizing formulation on the printing properties. Surface free energy has also proven very valuable in explaining the differences in sample behavior. [ 8 ] Various composite coatings on filter paper were analyzed in an experiment by Wang et al. [ 11 ] The ability to separate homogeneous liquid solutions based on varying surface tensions has great practical use. Superhydrophobic and superoleophilic filter paper was created by treating the surface of commercially available filter paper with hydrophobic silica nanoparticles and a polystyrene solution in toluene. [ 11 ] Oil and water were successfully separated through the use of the filter paper created, with an efficiency greater than 96%.
In a homogeneous solution, the filter paper was also successful in separating the liquids by differentiating between their surface tensions. Although with a lower efficiency, aqueous ethanol was also extracted from the solution when tested on the filter paper. [ 11 ]
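As a small worked example of the Young-equation reasoning used in this article, the sketch below takes the ~110° water contact angle quoted for a hydrocarbon-coated paper and a textbook value for the surface tension of water, and computes the difference γ SG − γ SL; it also shows how the same relation behaves for a near-zero oil contact angle. The oil surface tension is a hypothetical illustrative value.

```python
# Minimal sketch of Young's equation: gamma_SG = gamma_SL + gamma_LG * cos(theta)
# Rearranged: gamma_SG - gamma_SL = gamma_LG * cos(theta)
import math

def adhesion_term_mN_per_m(gamma_lg_mN_per_m: float, contact_angle_deg: float) -> float:
    """Return gamma_SG - gamma_SL from the liquid surface tension and contact angle."""
    return gamma_lg_mN_per_m * math.cos(math.radians(contact_angle_deg))

# Water on hydrocarbon-coated paper (contact angle ~110 deg, water gamma_LG ~72.8 mN/m)
print("water:", round(adhesion_term_mN_per_m(72.8, 110.0), 1), "mN/m")   # negative -> non-wetting

# A hypothetical ink oil with gamma_LG ~30 mN/m spreading almost completely (theta ~5 deg)
print("oil:  ", round(adhesion_term_mN_per_m(30.0, 5.0), 1), "mN/m")     # positive -> wetting
```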
https://en.wikipedia.org/wiki/Surface_chemistry_of_paper
A surface condenser is a water-cooled shell and tube heat exchanger installed to condense exhaust steam from a steam turbine in thermal power stations . [ 1 ] [ 2 ] [ 3 ] These condensers are heat exchangers which convert steam from its gaseous to its liquid state at a pressure below atmospheric pressure . Where cooling water is in short supply, an air-cooled condenser is often used. An air-cooled condenser is, however, significantly more expensive and cannot achieve as low a steam turbine exhaust pressure (and temperature) as a water-cooled surface condenser. Surface condensers are also used in applications and industries other than the condensing of steam turbine exhaust in power plants. In thermal power plants, the purpose of a surface condenser is to condense the exhaust steam from a steam turbine to obtain maximum efficiency , and also to convert the turbine exhaust steam into pure water (referred to as steam condensate) so that it may be reused in the steam generator or boiler as boiler feed water. The steam turbine itself is a device to convert the heat in steam to mechanical power . The difference between the heat of steam per unit mass at the inlet to the turbine and the heat of steam per unit mass at the outlet from the turbine represents the heat which is converted to mechanical power. Therefore, the greater the conversion of heat per pound or kilogram of steam to mechanical power in the turbine, the better is its efficiency. By condensing the exhaust steam of a turbine at a pressure below atmospheric pressure, the steam pressure drop between the inlet and exhaust of the turbine is increased, which increases the amount of heat available for conversion to mechanical power. Most of the heat liberated due to condensation of the exhaust steam is carried away by the cooling medium (water or air) used by the surface condenser. The adjacent diagram depicts a typical water-cooled surface condenser as used in power stations to condense the exhaust steam from a steam turbine driving an electrical generator , as well as in other applications. [ 2 ] [ 3 ] [ 4 ] [ 5 ] There are many fabrication design variations depending on the manufacturer, the size of the steam turbine, and other site-specific conditions. The shell is the condenser's outermost body and contains the heat exchanger tubes. The shell is fabricated from carbon steel plates and is stiffened as needed to provide rigidity. When required by the selected design, intermediate plates are installed to serve as baffle plates that provide the desired flow path of the condensing steam. The plates also provide support that helps prevent sagging of long tube lengths. At the bottom of the shell, where the condensate collects, an outlet is installed. In some designs, a sump (often referred to as the hotwell) is provided. Condensate is pumped from the outlet or the hotwell for reuse as boiler feedwater . For most water-cooled surface condensers, the shell is under [partial] vacuum during normal operating conditions. For water-cooled surface condensers, the shell's internal vacuum is most commonly supplied and maintained by an external steam jet ejector system. Such an ejector system uses steam as the motive fluid to remove any non-condensable gases that may be present in the surface condenser. The Venturi effect , which is a particular case of Bernoulli's principle , applies to the operation of steam jet ejectors. Motor-driven mechanical vacuum pumps , such as the liquid ring type, are also popular for this service.
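A minimal numerical sketch of the efficiency argument above: the specific work extracted by the turbine is roughly the difference between the specific enthalpy of the steam at the turbine inlet and at the exhaust, so lowering the exhaust (condenser) pressure increases the available enthalpy drop. The enthalpy values below are rough, hypothetical figures chosen only to show the comparison, not steam-table data for a particular plant.

```python
# Minimal sketch: specific turbine work = h_inlet - h_exhaust (per kg of steam).
# Enthalpy values are rough illustrative assumptions, not steam-table lookups.

h_inlet_kj_per_kg = 3400.0   # hypothetical superheated steam at the turbine inlet

cases = {
    "exhaust to atmosphere (~1.0 bar)":        2650.0,  # hypothetical exhaust enthalpy
    "exhaust to condenser vacuum (~0.07 bar)": 2300.0,  # hypothetical exhaust enthalpy
}

for label, h_exhaust in cases.items():
    work = h_inlet_kj_per_kg - h_exhaust
    print(f"{label:42s}: specific work ~ {work:.0f} kJ/kg")
```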
At each end of the shell, a tube sheet of sufficient thickness, usually made of stainless steel , is provided, with holes for the tubes to be inserted and rolled. The inlet end of each tube is also bellmouthed for streamlined entry of water. This is to avoid eddies at the inlet of each tube giving rise to erosion, and to reduce flow friction. Some makers also recommend plastic inserts at the entry of tubes to avoid eddies eroding the inlet end. In smaller units some manufacturers use ferrules to seal the tube ends instead of rolling. To accommodate lengthwise expansion of the tubes, some designs have an expansion joint between the shell and the tube sheet, allowing the latter to move longitudinally. In smaller units some sag is given to the tubes to accommodate tube expansion, with both end waterboxes fixed rigidly to the shell. Generally the tubes are made of stainless steel , copper alloys such as brass or bronze, cupronickel , or titanium , depending on several selection criteria. The use of copper-bearing alloys such as brass or cupronickel is rare in new plants, due to environmental concerns about toxic copper alloys. Also, depending on the steam cycle water treatment for the boiler, it may be desirable to avoid tube materials containing copper. Titanium condenser tubes are usually the best technical choice; however, the use of titanium condenser tubes has been virtually eliminated by the sharp increases in the cost of this material. The tube lengths range up to about 85 ft (26 m) for modern power plants, depending on the size of the condenser. The size chosen is based on transportability from the manufacturer's site and ease of erection at the installation site. The outer diameter of condenser tubes typically ranges from 3/4 inch to 1-1/4 inch, based on condenser cooling water friction considerations and overall condenser size. The tube sheet at each end, with the tube ends rolled in, is closed by a fabricated box cover known as a waterbox, with a flanged connection to the tube sheet or condenser shell. The waterbox is usually provided with manholes on hinged covers to allow inspection and cleaning. The waterboxes on the inlet side will also have flanged connections for cooling water inlet butterfly valves , a small vent pipe with a hand valve for air venting at a higher level, and a hand-operated drain valve at the bottom to drain the waterbox for maintenance. Similarly, on the outlet waterbox the cooling water connection will have large flanges, butterfly valves , a vent connection also at a higher level and drain connections at a lower level. Similarly, thermometer pockets are located at the inlet and outlet pipes for local measurements of cooling water temperature. In smaller units, some manufacturers make the condenser shell as well as the waterboxes of cast iron . On the cooling water side of the condenser: the tubes, the tube sheets and the waterboxes may be made of materials having different compositions and are always in contact with circulating water. This water, depending on its chemical composition, will act as an electrolyte between the metallic components of the tubes and waterboxes. This will give rise to electrolytic corrosion, which will start from the more anodic materials first. Seawater-based condensers , in particular when the sea water has added chemical pollutants , have the worst corrosion characteristics. River water with pollutants is also undesirable for condenser cooling water. The corrosive effect of sea or river water has to be tolerated and remedial methods have to be adopted.
One method is the use of sodium hypochlorite , or chlorine , to ensure there is no marine growth on the pipes or the tubes. This practice must be strictly regulated to make sure the circulating water returning to the sea or river source is not affected. On the steam (shell) side of the condenser: the concentration of undissolved gases is high over the air zone tubes. These tubes are therefore exposed to higher corrosion rates. Sometimes these tubes are affected by stress corrosion cracking if the original stress is not fully relieved during manufacture. To overcome these effects of corrosion, some manufacturers provide more corrosion-resistant tubes in this area. As the tube ends get corroded there is the possibility of cooling water leakage to the steam side, contaminating the condensed steam or condensate, which is harmful to steam generators . The other parts of the waterboxes may also be affected in the long run, requiring repairs or replacements involving long shut-downs. Cathodic protection is typically employed to overcome this problem. Sacrificial anodes of zinc (the cheapest option) are mounted as plates at suitable places inside the waterboxes. These zinc plates corrode first, being the most anodic of the materials present. Hence these zinc anodes require periodic inspection and replacement, which involves comparatively little downtime. The waterboxes made of steel plates are also protected inside by epoxy paint. As one might expect, with millions of gallons of circulating water flowing through the condenser tubing from seawater or fresh water, anything that is contained within the water flowing through the tubes can ultimately end up on either the condenser tubesheet (discussed previously) or within the tubing itself. Tube-side fouling for surface condensers falls into five main categories: particulate fouling such as silt and sediment; biofouling such as slime and biofilms ; scaling and crystallization, such as calcium carbonate; macrofouling, which can include anything from zebra mussels that can grow on the tubesheet to wood or other debris that blocks the tubing; and finally, corrosion products (discussed previously). Depending on the extent of the fouling, the impact can be quite severe on the condenser's ability to condense the exhaust steam coming from the turbine. As fouling builds up within the tubing, an insulating effect is created and the heat-transfer characteristics of the tubes are diminished, often requiring the turbine to be slowed to a point where the condenser can handle the exhaust steam produced. Typically, this can be quite costly to power plants in the form of reduced output, increased fuel consumption and increased CO 2 emissions. This "derating" of the turbine to accommodate the condenser's fouled or blocked tubing is an indication that the plant needs to clean the tubing in order to return to the turbine's nameplate capacity . A variety of methods for cleaning are available, including online and offline options, depending on the plant's site-specific conditions. National and international test codes are used to standardize the procedures and definitions used in testing large condensers. In the U.S., ASME publishes several performance test codes on condensers and heat exchangers. These include ASME PTC 12.2-2010, Steam Surface Condensers, and PTC 30.1-2007, Air Cooled Steam Condensers.
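The insulating effect of fouling described above is commonly expressed as an added fouling resistance in series with the clean overall heat-transfer coefficient, 1/U_fouled = 1/U_clean + R_f. The sketch below shows how even a thin fouling layer cuts the effective heat-transfer coefficient; the clean coefficient and fouling-resistance values are generic illustrative assumptions, not figures from any standard.

```python
# Minimal sketch: effect of a fouling resistance R_f on the overall heat-transfer coefficient.
# 1/U_fouled = 1/U_clean + R_f   (series thermal resistances per unit area)
# Numbers are generic illustrative assumptions.

u_clean_w_per_m2k = 3500.0                            # hypothetical clean overall coefficient
fouling_resistances_m2k_per_w = [0.0, 1e-4, 3e-4]     # hypothetical fouling resistances

for r_f in fouling_resistances_m2k_per_w:
    u_fouled = 1.0 / (1.0 / u_clean_w_per_m2k + r_f)
    loss_pct = 100.0 * (1.0 - u_fouled / u_clean_w_per_m2k)
    print(f"R_f = {r_f:.1e} m^2.K/W -> U = {u_fouled:6.0f} W/m^2.K ({loss_pct:4.1f}% loss)")
```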
https://en.wikipedia.org/wiki/Surface_condenser
Surface conductivity is the additional conductivity of an electrolyte in the vicinity of charged interfaces. [ 1 ] Surface and volume conductivity of liquids correspond to the electrically driven motion of ions in an electric field . A layer of counter-ions of the opposite polarity to the surface charge exists close to the interface. It is formed due to the attraction of counter-ions by the surface charges . This layer of higher ionic concentration is a part of the interfacial double layer . The concentration of the ions in this layer is higher compared to the ionic strength of the bulk liquid, which leads to a higher electric conductivity of this layer. Smoluchowski was the first to recognize the importance of surface conductivity, at the beginning of the 20th century. [ 2 ] There is a detailed description of surface conductivity by Lyklema in "Fundamentals of Interface and Colloid Science". [ 3 ] The double layer (DL) has two regions, according to the well-established Gouy-Chapman-Stern model. [ 1 ] The upper level, which is in contact with the bulk liquid, is the diffuse layer . The inner layer that is in contact with the interface is the Stern layer . It is possible that the lateral motion of ions in both parts of the DL contributes to the surface conductivity. The contribution of the Stern layer is less well described. It is often called "additional surface conductivity". [ 4 ] The theory of the surface conductivity of the diffuse part of the DL was developed by Bikerman. [ 5 ] He derived a simple equation that links the surface conductivity κ σ with the behaviour of ions at the interface. For a symmetrical electrolyte and assuming identical ion diffusion coefficients D + = D − = D, it is given in the reference: [ 1 ] where the parameter m characterizes the contribution of electro-osmosis to the motion of ions within the DL: The Dukhin number is a dimensionless parameter that characterizes the contribution of the surface conductivity to a variety of electrokinetic phenomena , such as electrophoresis and electroacoustic phenomena . [ 6 ] This parameter, and consequently the surface conductivity, can be calculated from the electrophoretic mobility using an appropriate theory. Electrophoretic instruments by Malvern and electroacoustic instruments by Dispersion Technology contain software for conducting such calculations. Surface conductivity may also refer to the electrical conduction across a solid surface measured by surface probes. Experiments may be done to test this material property as in the n-type surface conductivity of p-type . [ 7 ] Additionally, surface conductivity is measured in coupled phenomena such as photoconductivity , for example for the metal oxide semiconductor ZnO . [ 8 ] Surface conductivity differs from bulk conductivity for analogous reasons to the electrolyte solution case, where the charge carriers, holes (+1) and electrons (−1), play the role of ions in solution.
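The Dukhin number mentioned above is usually defined as the ratio of the surface conductivity to the product of the bulk conductivity and a characteristic length such as the particle radius, Du = κσ / (K·a). The sketch below evaluates it for a few particle sizes; the numbers are hypothetical and only indicate roughly when surface conduction starts to matter (Du no longer small compared with 1).

```python
# Minimal sketch: Dukhin number Du = kappa_sigma / (K_bulk * a)
# All numeric values are hypothetical, for illustration only.

def dukhin_number(surface_conductivity_S: float, bulk_conductivity_S_per_m: float,
                  particle_radius_m: float) -> float:
    return surface_conductivity_S / (bulk_conductivity_S_per_m * particle_radius_m)

kappa_sigma = 1e-9        # hypothetical surface conductivity, S (siemens)
bulk_k = 0.01             # hypothetical bulk conductivity, S/m (dilute electrolyte)

for a in (10e-9, 100e-9, 1e-6):   # particle radii: 10 nm, 100 nm, 1 um
    du = dukhin_number(kappa_sigma, bulk_k, a)
    print(f"a = {a*1e9:7.0f} nm -> Du = {du:.2g}")
```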
https://en.wikipedia.org/wiki/Surface_conductivity
A surface core level shift (SCS) is a kind of core-level shift that often emerges in X-ray photoelectron spectroscopy spectra of surface atoms . Because surface atoms have different chemical environments from bulk atoms, small shifts of their binding energies are observed by X-ray photoelectron spectroscopy. The SCS is ascribed mainly to the lower coordination numbers of surface atoms compared with bulk atoms. Reduced coordination leads to a narrower valence bandwidth. Such narrowing of the bandwidth increases the density of states, and if more than half of the valence band is filled, the band center is lower than in the bulk and the binding energy increases. In contrast, if less than half of the valence band is filled, the band center is higher than in the bulk, and the binding energy decreases. Because the binding energy in X-ray photoelectron spectroscopy is affected by the final state and by other aspects of the chemical environment, this simple explanation cannot always be applied to the interpretation of X-ray photoelectron spectra. In spite of such complexity, the SCS gives important information about the chemical nature of surface atoms. This spectroscopy -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Surface_core_level_shift
Surface differential reflectivity (SDR) or differential reflectance spectroscopy (DRS) is a spectroscopic technique that measures and compares the reflectivity of a sample in two different physical conditions (modulation spectroscopy). The result is presented in terms of ΔR/R, which is defined as follows: ΔR/R = (R 1 − R 2 )/R 2 , where R 1 and R 2 represent the reflectivity due to a particular state or condition of the sample. The differential reflectivity is used to enhance just the contributions to the reflected signal coming from the sample surface. In fact, the light penetration depth ( α −1 ) inside a solid is related to the absorption coefficient ( α ) of the material. The contribution of the sample surface (e.g., surface states, ultra-thin and thin deposited films, etc.) to the reflected signal is generally evaluated to be in the 10 −2 range. [ 1 ] The difference between two sample states (1 and 2) is intended to reveal small changes occurring at the sample surface. If R 1 represents a clean, freshly prepared surface (e.g., after a cleavage in vacuum) and R 2 the same sample after exposure to hydrogen or oxygen contaminants, the ΔR/R spectrum can be related to features of the clean surface (e.g., surface states); [ 2 ] if R 1 is the reflectivity spectrum of a sample covered by an organic film (even if the substrate is only partially covered) and R 2 represents the optical spectrum of the pristine substrate, the ΔR/R spectrum can be related to the optical properties of the deposited molecules; [ 3 ] etc. The experimental SDR definition reported above has been interpreted in terms of the surface (or film) thickness ( d ) and its dielectric function (ε 2 = ε′ 2 − iε″ 2 ). This model, which treats the surface as a well-defined phase above a bulk, is known as the "three-layer model" and states that: [ 4 ] ΔR/R = (8πd/λ) Im[(ε 1 − ε 2 )/(ε 1 − ε 3 )], where ε 1 = 1 is the vacuum dielectric constant and ε 3 = ε′ 3 − iε″ 3 is the bulk dielectric function. SDR measurements are generally realized by exploiting an optical multichannel system coupled with a double optical path in the so-called Michelson-cross configuration. In this configuration, the ΔR/R signal is obtained by a direct comparison between the reflectivity signal R 1 arising from the sample (e.g., a silicon substrate covered by a small amount of molecules) placed inside the UHV chamber (first optical path) and the R 2 signal acquired from a reference sample (dummy sample; e.g., a silicon wafer) placed along the second optical path. The difference between R 1 and R 2 is due to the deposited molecules, which can affect the reflectivity signal in the 10 −3 –10 −2 range of the overall reflected signal of the real sample. Consequently, a high signal stability is required and the two optical paths must be as comparable as possible. The SDR apparatus was first described and used by G. Chiarotti for the investigation of the surface states contribution to the Ge(111) reflectivity properties. [ 5 ] This work also represents the first direct evidence of the existence of surface states in semiconductors. An evolution of the SDR set-up using linearly polarized light was first described by P. Chiaradia and co-workers for testing the structure of the Si(111) 2 × 1 surface.
[ 6 ] Other equivalent SDR set-ups have been exploited for studying: the evolution of surface roughening, [ 7 ] the reactivity of halogens with semiconductor surfaces, [ 8 ] the adhesion of nanoparticles during their growth, [ 9 ] the growth of heavy metals on semiconductors, [ 10 ] and the characterization of nano-antennas, [ 11 ] to mention just some of the work related to this surface optical technique.
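A minimal numerical sketch of the three-layer model quoted above: given a film thickness, a wavelength, and complex dielectric functions for the film and the substrate, it evaluates ΔR/R = (8πd/λ)·Im[(ε1 − ε2)/(ε1 − ε3)]. The dielectric values are hypothetical placeholders, not measured data for any particular system, and the imaginary parts are written with the ε′ + iε″ sign convention.

```python
# Minimal sketch of the three-layer model for normal-incidence SDR:
#   dR/R = (8*pi*d / wavelength) * Im[(eps1 - eps2) / (eps1 - eps3)]
# eps1 = vacuum, eps2 = surface film, eps3 = bulk substrate.
# Dielectric functions below are hypothetical placeholder values (eps' + i*eps'' convention).
import math

def sdr_three_layer(d_m: float, wavelength_m: float, eps2: complex, eps3: complex,
                    eps1: complex = 1.0 + 0j) -> float:
    ratio = (eps1 - eps2) / (eps1 - eps3)
    return (8.0 * math.pi * d_m / wavelength_m) * ratio.imag

d = 1e-9                       # 1 nm thick molecular film (assumed)
wavelength = 500e-9            # 500 nm probe light
eps_film = 2.5 + 0.8j          # hypothetical film dielectric function
eps_bulk = 15.0 + 0.2j         # hypothetical substrate dielectric function

print(f"dR/R = {sdr_three_layer(d, wavelength, eps_film, eps_bulk):.3e}")
```

With these placeholder values the result comes out in the 10^-3 range, consistent with the order of magnitude quoted in the article for molecular overlayers.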
https://en.wikipedia.org/wiki/Surface_differential_reflectivity
In surface science , surface energy (also interfacial free energy or surface free energy ) quantifies the disruption of intermolecular bonds that occurs when a surface is created. In solid-state physics , surfaces must be intrinsically less energetically favorable than the bulk of the material (that is, the atoms on the surface must have more energy than the atoms in the bulk), otherwise there would be a driving force for surfaces to be created, removing the bulk of the material by sublimation . The surface energy may therefore be defined as the excess energy at the surface of a material compared to the bulk, or as the work required to build an area of a particular surface. Another way to view the surface energy is to relate it to the work required to cut a bulk sample, creating two surfaces. There is "excess energy" as a result of the now-incomplete, unrealized bonding between the two created surfaces. Cutting a solid body into pieces disrupts its bonds and increases the surface area, and therefore increases the surface energy. If the cutting is done reversibly , then conservation of energy means that the energy consumed by the cutting process will be equal to the energy inherent in the two new surfaces created. The unit surface energy of a material would therefore be half of its energy of cohesion , all other things being equal; in practice, this is true only for a surface freshly prepared in vacuum . Surfaces often change their form away from the simple " cleaved bond " model just implied above. They are found to be highly dynamic regions, which readily rearrange or react , so that energy is often reduced by such processes as passivation or adsorption . The most common way to measure surface energy is through contact angle experiments. [ 1 ] In this method, the contact angle of the surface is measured with several liquids, usually water and diiodomethane . Based on the contact angle results and knowing the surface tension of the liquids, the surface energy can be calculated. In practice, this analysis is done automatically by a contact angle meter. [ 2 ] There are several different models for calculating the surface energy based on the contact angle readings. [ 3 ] The most commonly used method is OWRK, which requires the use of two probe liquids and gives as a result the total surface energy, divided into polar and dispersive components. The contact angle method is the standard surface energy measurement method due to its simplicity, applicability to a wide range of surfaces and quickness. The measurement can be fully automated and is standardized. [ 4 ] In general, as surface energy increases, the contact angle decreases because more of the liquid is "grabbed" by the surface. Conversely, as surface energy decreases, the contact angle increases, because the surface interacts less strongly with the liquid. The surface energy of a liquid may be measured by stretching a liquid membrane (which increases the surface area and hence the surface energy). In that case, in order to increase the surface area of a mass of liquid by an amount δA , a quantity of work γ δA is needed (where γ is the surface energy density of the liquid). However, such a method cannot be used to measure the surface energy of a solid because stretching of a solid membrane induces elastic energy in the bulk in addition to increasing the surface energy. The surface energy of a solid is usually measured at high temperatures.
At such temperatures the solid creeps and, even though the surface area changes, the volume remains approximately constant. If γ is the surface energy density of a cylindrical rod of radius r and length l at high temperature and a constant uniaxial tension P , then at equilibrium the variation of the total Helmholtz free energy vanishes and we have δF = −P δl + γ δA = 0, where F is the Helmholtz free energy and A is the surface area of the rod, A = 2πrl + 2πr 2 . Also, since the volume ( V = πr 2 l ) of the rod remains constant, the variation ( δV ) of the volume is zero, that is, πr 2 δl + 2πrl δr = 0. Therefore, the surface energy density can be expressed as γ = P / [πr(1 − 2r/l)]. The surface energy density of the solid can be computed by measuring P , r , and l at equilibrium. This method is valid only if the solid is isotropic , meaning the surface energy is the same for all crystallographic orientations. While this is only strictly true for amorphous solids ( glass ) and liquids, isotropy is a good approximation for many other materials. In particular, if the sample is polygranular (most metals) or made by powder sintering (most ceramics) this is a good approximation. In the case of single-crystal materials, such as natural gemstones , anisotropy in the surface energy leads to faceting . The shape of the crystal (assuming equilibrium growth conditions) is related to the surface energy by the Wulff construction . The surface energy of the facets can thus be found to within a scaling constant by measuring the relative sizes of the facets. In the deformation of solids, surface energy can be treated as the "energy required to create one unit of surface area", and is a function of the difference between the total energies of the system before and after the deformation. Calculation of surface energy from first principles (for example, density functional theory ) is an alternative approach to measurement. Surface energy is estimated from the following variables: width of the d-band, the number of valence d-electrons , and the coordination number of atoms at the surface and in the bulk of the solid. [ 5 ] [ page needed ] In density functional theory , the surface energy can be calculated from the expression γ = ( E slab − N·E bulk ) / (2 A ), where E slab is the total energy of the slab, E bulk is the bulk energy per atom (or formula unit), N is the number of atoms (or formula units) in the slab, and A is the area of one surface of the slab. For a slab, we have two surfaces and they are of the same type, which is reflected by the number 2 in the denominator. To guarantee this, we need to create the slab carefully to make sure that the upper and lower surfaces are of the same type. The strength of adhesive contacts is determined by the work of adhesion , which is also called the relative surface energy of two contacting bodies. [ 6 ] [ page needed ] The relative surface energy can be determined by detaching bodies of well-defined shape made of one material from a substrate made of the second material. [ 7 ] For example, the relative surface energy of the interface " acrylic glass – gelatin " is equal to 0.03 N/m. The experimental setup for measuring relative surface energy and its operation can be seen in the video. [ 8 ] To estimate the surface energy of a pure, uniform material, an individual region of the material can be modeled as a cube. In order to move a cube from the bulk of a material to the surface, energy is required. This energy cost is incorporated into the surface energy of the material, which is quantified by γ = ( z β − z σ ) W AA / (2 a 0 ), where z σ and z β are coordination numbers corresponding to the surface and the bulk regions of the material, and are equal to 5 and 6, respectively; a 0 is the surface area of an individual molecule, and W AA is the pairwise intermolecular energy.
Surface area can be determined by squaring the cube root of the volume of the molecule: a 0 = ( M̄ / ( ρN A )) 2/3 . Here, M̄ corresponds to the molar mass of the molecule, ρ corresponds to the density, and N A is the Avogadro constant . In order to determine the pairwise intermolecular energy, all intermolecular forces in the material must be broken. This allows thorough investigation of the interactions that occur for single molecules. During sublimation of a substance, intermolecular forces between molecules are broken, resulting in a change in the material from solid to gas. For this reason, considering the enthalpy of sublimation can be useful in determining the pairwise intermolecular energy. The enthalpy of sublimation can be related to the pairwise energy by ΔH sub = ½ z β N A W AA . Using empirically tabulated values for the enthalpy of sublimation, it is possible to determine the pairwise intermolecular energy. Incorporating this value into the surface energy equation allows the surface energy to be estimated. The following equation can be used as a reasonable estimate for the surface energy: γ ≈ ( z β − z σ ) ΔH sub / ( z β N A a 0 ). The presence of an interface generally influences all thermodynamic parameters of a system. There are two models that are commonly used to demonstrate interfacial phenomena: the Gibbs ideal interface model and the Guggenheim model. In order to demonstrate the thermodynamics of an interfacial system using the Gibbs model, the system can be divided into three parts: two immiscible liquids with volumes V α and V β and an infinitesimally thin boundary layer known as the Gibbs dividing plane ( σ ) separating these two volumes. The total volume of the system is: V = V α + V β . All extensive quantities of the system can be written as a sum of three components: bulk phase α , bulk phase β , and the interface σ . Some examples include the internal energy U , the number of molecules of the i th substance n i , and the entropy S . While these quantities can vary between each component, the sum within the system remains constant. At the interface, these values may deviate from those present within the bulk phases. The number of molecules of substance i present at the interface can be defined as n i σ = n i − c iα V α − c iβ V β , where c iα and c iβ represent the concentration of substance i in bulk phase α and β , respectively. It is beneficial to define a new term, the interfacial excess Γ i , which allows us to describe the number of molecules per unit area: Γ i = n i σ / A . Surface energy comes into play in wetting phenomena. To examine this, consider a drop of liquid on a solid substrate. If the surface energy of the substrate changes upon the addition of the drop, the substrate is said to be wetting . The spreading parameter can be used to determine this mathematically: S = γ s − ( γ l + γ s-l ), where S is the spreading parameter, γ s the surface energy of the substrate, γ l the surface energy of the liquid, and γ s-l the interfacial energy between the substrate and the liquid. If S < 0 , the liquid partially wets the substrate. If S > 0 , the liquid completely wets the substrate. [ 9 ] A way to experimentally determine wetting is to look at the contact angle ( θ ), which is the angle connecting the solid–liquid interface and the liquid–gas interface (as in the figure). The Young equation relates the contact angle to the interfacial energies: γ s-g = γ s-l + γ l-g cos θ , where γ s-g is the interfacial energy between the solid and gas phases, γ s-l the interfacial energy between the substrate and the liquid, γ l-g is the interfacial energy between the liquid and gas phases, and θ is the contact angle between the solid–liquid and the liquid–gas interface. [ 11 ]
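As a practical illustration of how contact angles are turned into a surface energy (the OWRK approach mentioned earlier in this article), the sketch below solves the OWRK equations for two probe liquids, water and diiodomethane. The liquid surface-tension components are typical literature values and the contact angles are hypothetical measurements; the routine is a simplified sketch, not a substitute for instrument software.

```python
# Minimal OWRK sketch: solid surface energy (dispersive + polar) from two contact angles.
#   gamma_L * (1 + cos(theta)) = 2 * ( sqrt(gd_S*gd_L) + sqrt(gp_S*gp_L) )
# Liquid data are typical literature values (mN/m); contact angles are hypothetical.
import math

liquids = {
    #              total, dispersive, polar   (mN/m) -- commonly tabulated values
    "water":         (72.8, 21.8, 51.0),
    "diiodomethane": (50.8, 50.8, 0.0),
}
contact_angles_deg = {"water": 75.0, "diiodomethane": 40.0}   # hypothetical measurements

# Build the two linear equations  a*x + b*y = c  with x = sqrt(gd_S), y = sqrt(gp_S)
rows = []
for name, (g_total, g_d, g_p) in liquids.items():
    theta = math.radians(contact_angles_deg[name])
    rows.append((math.sqrt(g_d), math.sqrt(g_p), g_total * (1 + math.cos(theta)) / 2))

(a1, b1, c1), (a2, b2, c2) = rows
det = a1 * b2 - a2 * b1
x = (c1 * b2 - c2 * b1) / det        # sqrt of dispersive component of the solid
y = (a1 * c2 - a2 * c1) / det        # sqrt of polar component of the solid

gd_s, gp_s = x ** 2, y ** 2
print(f"dispersive: {gd_s:.1f} mN/m, polar: {gp_s:.1f} mN/m, total: {gd_s + gp_s:.1f} mN/m")
```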
The energy of the bulk component of a solid substrate is determined by the types of interactions that hold the substrate together. High-energy substrates are held together by chemical bonds , while low-energy substrates are held together by weaker physical forces . Covalent , ionic , and metallic bonds are much stronger than forces such as van der Waals and hydrogen bonding . High-energy substrates are more easily wetted than low-energy substrates. [ 12 ] In addition, more complete wetting will occur if the substrate has a much higher surface energy than the liquid. [ 13 ] The most commonly used surface modification protocols are plasma activation , wet chemical treatment, including grafting, and thin-film coating. [ 14 ] [ 15 ] [ 16 ] Surface energy mimicking is a technique that enables merging the device manufacturing and surface modifications, including patterning, into a single processing step using a single device material. [ 17 ] Many techniques can be used to enhance wetting. Surface treatments, such as corona treatment , [ 18 ] plasma treatment and acid etching , [ 19 ] can be used to increase the surface energy of the substrate. Additives can also be added to the liquid to decrease its surface tension. This technique is often employed in paint formulations to ensure that they will be evenly spread on a surface. [ 20 ] As a result of the surface tension inherent to liquids, curved surfaces are formed in order to minimize the area. This phenomenon arises from the energetic cost of forming a surface. As such, the Gibbs free energy of the system is minimized when the surface is curved. The Kelvin equation is based on thermodynamic principles and is used to describe changes in vapor pressure caused by liquids with curved surfaces. The cause of this change in vapor pressure is the Laplace pressure. The vapor pressure of a drop is higher than that of a planar surface because the increased Laplace pressure causes the molecules to evaporate more easily. Conversely, in liquids surrounding a bubble, the pressure with respect to the inner part of the bubble is reduced, thus making it more difficult for molecules to evaporate. The Kelvin equation can be stated as: ln( P K 0 / P 0 ) = ( γV m / RT )(1/ R 1 + 1/ R 2 ), where P K 0 is the vapor pressure of the curved surface, P 0 is the vapor pressure of the flat surface, γ is the surface tension , V m is the molar volume of the liquid, R is the universal gas constant , T is temperature (in kelvin ), and R 1 and R 2 are the principal radii of curvature of the surface. Pigments offer great potential in modifying the application properties of a coating. Due to their fine particle size and inherently high surface energy, they often require a surface treatment in order to enhance their ease of dispersion in a liquid medium. A wide variety of surface treatments have previously been used, including the adsorption on the surface of a molecule in the presence of polar groups, monolayers of polymers, and layers of inorganic oxides on the surface of organic pigments. [ 21 ] New surfaces are constantly being created as larger pigment particles get broken down into smaller subparticles. These newly formed surfaces consequently contribute to larger surface energies, whereby the resulting particles often become cemented together into aggregates. Because particles dispersed in liquid media are in constant thermal or Brownian motion , they exhibit a strong affinity for other pigment particles nearby as they move through the medium and collide. [ 21 ]
This natural attraction is largely attributed to the powerful short-range van der Waals forces , as an effect of their surface energies. The chief purpose of pigment dispersion is to break down aggregates and form stable dispersions of optimally sized pigment particles. This process generally involves three distinct stages: wetting, deaggregation, and stabilization. A surface that is easy to wet is desirable when formulating a coating that requires good adhesion and appearance. This also minimizes the risks of surface tension related defects, such as crawling, cratering, and orange peel . [ 22 ] This is an essential requirement for pigment dispersions; for wetting to be effective, the surface tension of the pigment's vehicle must be lower than the surface free energy of the pigment. [ 21 ] This allows the vehicle to penetrate into the interstices of the pigment aggregates, thus ensuring complete wetting. Finally, the particles are subjected to a repulsive force in order to keep them separated from one another, which lowers the likelihood of flocculation . Dispersions may become stable through two different phenomena: charge repulsion and steric or entropic repulsion. [ 22 ] In charge repulsion, particles that possess like electrostatic charges repel each other. Alternatively, steric or entropic repulsion is a phenomenon used to describe the repelling effect that arises when adsorbed layers of material (such as polymer molecules swollen with solvent) are present on the surface of the pigment particles in dispersion. Only certain portions (anchors) of the polymer molecules are adsorbed, with their corresponding loops and tails extending out into the solution. As the particles approach each other their adsorbed layers become crowded; this provides an effective steric barrier that prevents flocculation . [ 23 ] This crowding effect is accompanied by a decrease in entropy, whereby the number of conformations possible for the polymer molecules is reduced in the adsorbed layer. As a result, the free energy increases, giving rise to repulsive forces that aid in keeping the particles separated from each other.
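Returning to the Kelvin equation stated earlier in this article, the sketch below evaluates it for a spherical droplet (R 1 = R 2 = r), showing how strongly the vapor-pressure enhancement depends on droplet radius. It uses round figures for water near room temperature; treat them as illustrative rather than precise property data.

```python
# Minimal sketch: Kelvin equation for a spherical droplet, p/p0 = exp(2*gamma*Vm / (r*R*T)).
# Property values are rounded figures for water near room temperature (illustrative).
import math

GAMMA = 0.072        # surface tension, N/m
VM = 18e-6           # molar volume, m^3/mol
R_GAS = 8.314        # J/(mol*K)
T = 298.0            # K

def kelvin_ratio(radius_m: float) -> float:
    return math.exp(2.0 * GAMMA * VM / (radius_m * R_GAS * T))

for r in (1e-6, 100e-9, 10e-9, 2e-9):
    print(f"r = {r*1e9:7.1f} nm -> p/p0 = {kelvin_ratio(r):.3f}")
```

The effect is negligible for micrometre-sized drops but becomes significant below ~10 nm, which is why curvature matters for the finest particles and droplets.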
https://en.wikipedia.org/wiki/Surface_energy
Surface engineering is the sub-discipline of materials science which deals with the surface of solid matter. It has applications to chemistry , mechanical engineering , and electrical engineering (particularly in relation to semiconductor manufacturing ). Solids are composed of a bulk material covered by a surface. The surface which bounds the bulk material is called the surface phase . It acts as an interface to the surrounding environment. The bulk material in a solid is called the bulk phase . The surface phase of a solid interacts with the surrounding environment. This interaction can degrade the surface phase over time. Environmental degradation of the surface phase over time can be caused by wear , corrosion , fatigue and creep . Surface engineering involves altering the properties of the surface phase in order to reduce this degradation over time. This is accomplished by making the surface robust to the environment in which it will be used, providing a cost-effective material for robust design. A spectrum of topics that represent the diverse nature of the field of surface engineering includes plating technologies, nano and emerging technologies, and surface engineering characterization and testing. Surface engineering techniques are being used in the automotive, aerospace, missile, power, electronic, biomedical, textile, petroleum, petrochemical, chemical, steel, cement, machine tool and construction industries, including road surfacing . Surface engineering techniques can be used to develop a wide range of functional properties, including physical, chemical, electrical, electronic, magnetic, mechanical, wear-resistant and corrosion-resistant properties at the required substrate surfaces. Almost all types of materials, including metals, ceramics, polymers, and composites can be coated on similar or dissimilar materials. It is also possible to form coatings of newer materials (e.g., metallic glass, beta-C 3 N 4 ), graded deposits, multi-component deposits etc. The advanced materials and deposition processes, including recent developments in ultra-hard materials like BAM (an Al-Mg-B compound), are fully covered in a recent book [R. Chattopadhyay: Green Tribology, Green Surface Engineering and Global Warming, ASM International, USA, 2014]. In 1995, surface engineering was a £10 billion market in the United Kingdom. Coatings to protect surfaces from wear and corrosion made up approximately half of the market. In recent years, there has been a paradigm shift in surface engineering from age-old electroplating to processes such as vapor phase deposition, diffusion, thermal spray and welding, using heat sources such as laser, plasma, solar beam, microwave, pulsed combustion, ion, electron, pulsed arc, spark, friction and induction. [Ref: R. Chattopadhyay: Advanced Thermally Assisted Surface Engineering Processes, Springer, New York, USA, 2004] It is estimated that loss due to wear and corrosion in the US is approximately $500 billion. In the US, there are around 9,524 establishments (including automotive, aircraft, power and construction industries) that depend on engineered surfaces, with support from 23,466 industries. There are around 65 academic institutions world-wide engaged in surface engineering research and education. Surface cleaning, synonymously referred to as dry cleaning, is a mechanical cleaning technique used to reduce superficial soil, dust, grime, insect droppings, accretions, or other surface deposits.
(Dry cleaning, as the term is used in paper conservation, does not employ the use of organic solvents.) Surface cleaning may be used as an independent cleaning technique, as one step (usually the first) in a more comprehensive treatment, or as a prelude to further treatments (e.g., aqueous immersion) which may cause dirt to set irreversibly in paper fibers. The purpose of surface cleaning is to reduce the potential for damage to paper artifacts by removing foreign material which can be abrasive, acidic, hygroscopic, or degradative. The decision to remove surface dirt is also for aesthetic reasons when it interferes with the visibility of the imagery or information. A decision must be made balancing the probable care of each object against the possible problems related to surface cleaning. The application of surface engineering to components leads to improved lifetime (e.g., by corrosion resistance) and improved efficiency (e.g., by reducing friction) which directly reduces the emissions corresponding to those components. Applying innovative surface engineering technologies to the energy sector has the potential of reducing annual CO 2 -eq emissions by up to 1.8 Gt in 2050 and 3.4 Gt in 2100. This corresponds to 7% and 8.5% annual reduction in the energy sector in 2050 and 2100, respectively. [ 1 ] Despite those benefits, a major environmental drawback is the dissipative losses occurring throughout the life cycle of the components, and the associated environmental impacts of them. In thermal spray surface engineering applications, the majority of those dissipative losses occur at the coating stage (up to 39%), where part of the sprayed powders do not adhere to the substrate. [ 2 ]
https://en.wikipedia.org/wiki/Surface_engineering
Surface finish, also known as surface texture or surface topography, is the nature of a surface as defined by the three characteristics of lay, surface roughness , and waviness . [ 1 ] It comprises the small, local deviations of a surface from the perfectly flat ideal (a true plane ). Surface texture is one of the important factors that control friction and transfer layer formation during sliding. Considerable efforts have been made to study the influence of surface texture on friction and wear during sliding conditions. Surface textures can be isotropic or anisotropic . Sometimes, stick-slip friction phenomena can be observed during sliding, depending on surface texture. Each manufacturing process (such as the many kinds of machining ) produces a surface texture. The process is usually optimized to ensure that the resulting texture is usable. If necessary, an additional process will be added to modify the initial texture. The latter process may be grinding (abrasive cutting) , polishing , lapping , abrasive blasting , honing , electrical discharge machining (EDM), milling , lithography , industrial etching / chemical milling , laser texturing, or other processes. Lay is the direction of the predominant surface pattern, ordinarily determined by the production method used. The term is also used to denote the winding direction of fibers and strands of a rope . [ 2 ] Surface roughness, commonly shortened to roughness, is a measure of the finely spaced surface irregularities. [ 1 ] In engineering, this is what is usually meant by "surface finish." A lower number indicates finer irregularities, i.e., a smoother surface. Waviness is the measure of surface irregularities with a spacing greater than that of surface roughness. These irregularities usually occur due to warping , vibrations , or deflection during machining. [ 1 ] Surface finish may be measured in two ways: contact and non-contact methods. Contact methods involve dragging a measurement stylus across the surface; these instruments are called profilometers . Non-contact methods include: interferometry , confocal microscopy , focus variation , structured light , electrical capacitance , electron microscopy , atomic force microscopy and photogrammetry . Optical metrology plays a key role in non-contact surface roughness measurements, offering high-resolution and non-destructive analysis of complex or delicate surfaces. These techniques are particularly useful in environments where contact-based methods may damage the material or provide limited accessibility. These optical methods are widely implemented in industries such as aerospace, automotive, biomedical engineering, and microelectronics, where precise surface texture control is critical. In the United States, surface finish is usually specified using the ASME Y14.36M standard. The other common standard is International Organization for Standardization (ISO) 1302:2002, although it has since been withdrawn in favour of ISO 21920-1:2021. [ 6 ] Many factors contribute to the surface finish in manufacturing. In forming processes, such as molding or metal forming , the surface finish of the die determines the surface finish of the workpiece. In machining, the interaction of the cutting edges and the microstructure of the material being cut both contribute to the final surface finish. [ citation needed ] In general, the cost of manufacturing a surface increases as the surface finish improves. 
[ 7 ] Any given manufacturing process is usually optimized enough to ensure that the resulting texture is usable for the part's intended application. If necessary, an additional process will be added to modify the initial texture. The expense of this additional process must be justified by adding value in some way—principally better function or longer lifespan. Parts that have sliding contact with others may work better or last longer if the roughness is lower. Aesthetic improvement may add value if it improves the saleability of the product. A practical example is as follows. An aircraft maker contracts with a vendor to make parts. A certain grade of steel is specified for the part because it is strong enough and hard enough for the part's function. The steel is machinable although not free-machining . The vendor decides to mill the parts. The milling can achieve the specified roughness (for example, ≤ 3.2 μm) as long as the machinist uses premium-quality inserts in the end mill and replaces the inserts after every 20 parts (as opposed to cutting hundreds before changing the inserts). There is no need to add a second operation (such as grinding or polishing) after the milling as long as the milling is done well enough (correct inserts, frequent-enough insert changes, and clean coolant ). The inserts and coolant cost money, but the costs that grinding or polishing would incur (more time and additional materials) would cost even more than that. Obviating the second operation results in a lower unit cost and thus a lower price . The competition between vendors elevates such details from minor to crucial importance. It was certainly possible to make the parts in a slightly less efficient way (two operations) for a slightly higher price; but only one vendor can get the contract, so the slight difference in efficiency is magnified by competition into the great difference between the prospering and shuttering of firms. Just as different manufacturing processes produce parts at various tolerances, they are also capable of different roughnesses. Generally, these two characteristics are linked: manufacturing processes that are dimensionally precise create surfaces with low roughness. In other words, if a process can manufacture parts to a narrow dimensional tolerance, the parts will not be very rough. Due to the abstractness of surface finish parameters, engineers usually use a tool that has a variety of surface roughnesses created using different manufacturing methods. [ 7 ]
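As a concrete illustration of the amplitude roughness parameters discussed above, the following sketch (in Python, with illustrative data, not tied to any particular standard or instrument) computes the arithmetic average roughness Ra, the root-mean-square roughness Rq, and the peak-to-valley height from a sampled profile after the mean line has been removed.

```python
import numpy as np

def roughness_parameters(profile_heights):
    """Amplitude roughness parameters from a 1-D profile.

    profile_heights: surface heights (e.g. in micrometres) sampled along the
    evaluation length, after form and waviness have already been removed.
    """
    z = np.asarray(profile_heights, dtype=float)
    z = z - z.mean()                       # deviations from the mean line
    Ra = np.mean(np.abs(z))                # arithmetic average roughness
    Rq = np.sqrt(np.mean(z ** 2))          # root-mean-square roughness
    Rt = z.max() - z.min()                 # total peak-to-valley height
    return Ra, Rq, Rt

# Illustrative profile: fine random roughness superimposed on a short sine wave.
x = np.linspace(0.0, 4.0, 2000)                                      # mm along the surface
z = 0.05 * np.sin(40 * np.pi * x) + 0.01 * np.random.randn(x.size)   # heights in micrometres
print(roughness_parameters(z))
```

A smoother surface gives smaller Ra and Rq values, in line with the convention that a lower roughness number indicates a finer finish.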
https://en.wikipedia.org/wiki/Surface_finish
Surface force, denoted f s , is the force that acts across an internal or external surface element in a material body. Normal forces and shear forces between objects are types of surface force. All cohesive forces and contact forces between objects are considered surface forces. Surface force can be decomposed into two perpendicular components: normal forces and shear forces . A normal force acts normally over an area and a shear force acts tangentially over an area. Since pressure is force per unit area, {\displaystyle {\frac {\mathit {force}}{\mathit {area}}}=\mathrm {\frac {N}{m^{2}}} } , [ 1 ] and area is length times width, {\displaystyle (length)\cdot (width)=\mathrm {m\cdot m} =\mathrm {m^{2}} } , the surface force acting on an element is the pressure multiplied by the area of that element and is therefore measured in newtons. This classical mechanics –related article is a stub . You can help Wikipedia by expanding it . This fluid dynamics –related article is a stub . You can help Wikipedia by expanding it .
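To make the decomposition above concrete, here is a minimal sketch (Python, with made-up numbers) that splits the force transmitted across a small surface element into its normal and shear parts; the traction vector, normal direction and area are illustrative inputs, not quantities taken from any particular problem.

```python
import numpy as np

def decompose_surface_force(traction, normal, area):
    """Split the surface force on a small element into normal and shear parts.

    traction: stress vector acting on the element (force per unit area, N/m^2)
    normal:   vector normal to the element (normalised inside the function)
    area:     area of the element (m^2)
    """
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)                 # make the normal a unit vector
    t = np.asarray(traction, dtype=float)
    f_total = t * area                        # (N/m^2) times (m^2) gives newtons
    f_normal = np.dot(f_total, n) * n         # component acting normal to the element
    f_shear = f_total - f_normal              # component acting tangentially (shear)
    return f_normal, f_shear

# Example: an oblique 1 kPa traction on a 2 cm x 3 cm element.
print(decompose_surface_force(traction=[800.0, 0.0, 600.0],
                              normal=[0.0, 0.0, 1.0],
                              area=0.02 * 0.03))
```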
https://en.wikipedia.org/wiki/Surface_force
The Surface Force Apparatus ( SFA ) is a scientific instrument which measures the interaction force of two surfaces as they are brought together and retracted, using multiple beam interferometry to monitor surface separation, directly measure contact area and observe any surface deformations occurring in the contact zone. One surface is held by a cantilevered spring, and the deflection of the spring is used to calculate the force being exerted. [ 2 ] The technique was pioneered by David Tabor and R.H.S. Winterton in the late 1960s at Cambridge University . [ 3 ] By the mid-1970s, J.N. Israelachvili had adapted the original design to operate in liquids, notably aqueous solutions, while at the Australian National University , [ 4 ] and further advanced the technique to support friction and electro-chemical surface studies [ 5 ] while at the University of California Santa Barbara . A Surface Force Apparatus uses piezoelectric positioning elements (in addition to conventional motors for coarse adjustments), and senses the distance between the surfaces using optical interferometry . [ 6 ] Using these sensitive elements, the device can resolve distances to within 0.1 nanometer , and forces at the 10 −8 N level. This extremely sensitive technique can be used to measure electrostatic forces, elusive van der Waals forces , and even hydration or solvation forces. SFA is in some ways similar to using an atomic force microscope to measure interaction between a tip (or molecule adsorbed onto the tip) and a surface. The SFA, however, is more ideally suited to measuring surface-surface interactions, can measure much longer-range forces more accurately, and is well-suited to situations where long relaxation times play a role (ordering, high-viscosity, corrosion). The SFA technique is quite demanding; nevertheless, labs worldwide have adopted the technique as part of their surface science research instrumentation. In the SFA method, two smooth cylindrically curved surfaces, whose cylindrical axes are positioned at 90° to each other, are made to approach each other in a direction normal to the axes. The distance between the surfaces at the point of closest approach varies from a few micrometers to a few nanometers depending on the apparatus. When the two curved cylinders have the same radius of curvature, R , this so-called 'crossed cylinders' geometry is mathematically equivalent to the interaction between a flat surface and a sphere of radius R . Using the crossed cylinder geometry makes alignment much easier, enables testing of many different surface regions for better statistics, and also enables angle-dependent measurements to be taken. A typical setup involves R = 1 cm. Position measurements are typically made using multiple beam interferometry (MBI). The transparent surfaces of the perpendicular cylinders, usually mica, are backed with a highly reflective material, usually silver, before being mounted to the glass cylinders. When a white-light source is shone normal to the perpendicular cylinders, the light reflects back and forth until it is transmitted where the surfaces are closest. These rays create an interference pattern, known as fringes of equal chromatic order (FECO), which can be observed by microscope. The distance between the two surfaces can be determined by analyzing these patterns. Mica is used because it is extremely flat, easy to work with, and optically transparent. Any other material or molecule of interest can be coated or adsorbed onto the mica layer. 
In the jump method, the top cylinder is mounted to a pair of cantilever springs, while the bottom cylinder is brought up towards the top cylinder. While the bottom cylinder approaches the top, there comes a point when they will "jump" into contact with each other. The measurements, in this case, are based on the distance from which they jump and the spring constant. These measurements are usually between surfaces 1.25 nm and 20 nm apart. [ 6 ] The jump method is difficult to execute, mainly due to unaccounted vibrations entering the instrument. To overcome this, researchers developed the resonance method, which measured surface forces at larger distances, 10 nm to 130 nm. In this case, the bottom cylinder is oscillated at a known frequency, while the frequency of the top cylinder is measured using a piezoelectric bimorph strain gauge. To minimize the damping due to the surrounding substance, these measurements were originally done in a vacuum. [ 6 ] Early experiments measured the force between mica surfaces in air or vacuum . [ 6 ] The technique has been extended, however, to enable an arbitrary vapor or solvent to be introduced between the two surfaces. [ 7 ] In this way, interactions in various media can be carefully probed, and the dielectric constant of the gap between the surfaces can be tuned. Moreover, use of water as a solvent enables the measurement of interactions between biological molecules (such as lipids in biological membranes or proteins ) in their native environment. In a solvent environment, SFA can even measure the oscillatory solvation and structural forces arising from the packing of individual layers of solvent molecules. It can also measure the electrostatic 'double layer' forces between charged surfaces in an aqueous medium with electrolyte . The SFA has more recently been extended to perform dynamic measurements, thereby determining viscous and viscoelastic properties of fluids, frictional and tribological properties of surfaces, and the time-dependent interaction between biological structures. [ 8 ] The force measurements of the SFA are based primarily on Hooke's law , {\displaystyle F=kx} , where F is the restoring force of a spring, k is the spring constant and x is the displacement of the spring. Using a cantilevered spring, the lower surface is brought towards the top surface using a fine micrometer or piezotube. The force between the two surfaces is measured by {\displaystyle \Delta F(x)=k(\Delta x_{\text{applied}}-\Delta x_{\text{measured}})} where {\textstyle \Delta x_{\text{applied}}} is the change in displacement applied by the micrometer and {\displaystyle \Delta x_{\text{measured}}} is the change in displacement measured by interferometry. The spring constants can range anywhere from {\displaystyle 5\times 10^{5}{\frac {N}{m}}} to {\displaystyle 30\times 10^{5}{\frac {N}{m}}} . [ 2 ] When measuring higher forces, a spring with a higher spring constant would be used.
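A minimal sketch of the deflection-based force measurement described above, assuming only Hooke's law; the spring constant and displacements used here are illustrative numbers, not values from a specific instrument.

```python
def sfa_force(k_spring, dx_applied, dx_measured):
    """Force between the SFA surfaces from the cantilever spring deflection.

    k_spring:    spring constant of the cantilever (N/m)
    dx_applied:  displacement applied by the micrometer or piezo (m)
    dx_measured: change in surface separation measured by interferometry (m)
    """
    # The spring deflection is the difference between what was applied and
    # what the surfaces actually moved; Hooke's law converts it to a force.
    return k_spring * (dx_applied - dx_measured)

# Illustrative numbers: a 5e5 N/m spring, 2.0 nm applied, 1.7 nm measured.
print(sfa_force(5e5, 2.0e-9, 1.7e-9))   # 0.3 nm of deflection -> 1.5e-4 N
```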
https://en.wikipedia.org/wiki/Surface_forces_apparatus
Surface freezing is the appearance of long-range crystalline order in a near-surface layer of a liquid . The surface freezing effect is opposite to a far more common surface melting , or premelting . Surface freezing was experimentally discovered in melts of alkanes and related chain molecules in the early 1990s independently by two groups: John Earnshaw and his group ( Queen's University of Belfast ) used light scattering. [ 1 ] This method did not allow a determination of the frozen layer's thickness, and whether or not it is laterally ordered. A group led by Ben Ocko ( Brookhaven National Laboratory ), Eric Sirota (Exxon) and Moshe Deutsch ( Bar-Ilan University , Israel) independently discovered the same effect, using x-ray surface diffraction which allowed them to show that the frozen layer is a crystalline monolayer, with molecules oriented roughly along the surface normal, and ordered in an hexagonal lattice. A related effect, the existence of a smectic phase at the surface of a nematic liquid bulk, was observed in liquid crystals by Jens Als-Nielsen (Risø National Laboratory, Denmark) and Peter Pershan (Harvard University) in the early 1980s. However, the surface layer there was neither ordered, nor confined to a single layer. Surface freezing has since been found in a wide range of chain molecules and at various interfaces: liquid-air, liquid-solid and liquid-liquid. This physics -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Surface_freezing
In mathematics and physics , surface growth refers to models used in the dynamical study of the growth of a surface, usually by means of a stochastic differential equation of a field . Popular growth models [ 1 ] [ 2 ] are studied for their fractal properties, scaling behavior, critical exponents , universality classes , and relations to chaos theory , dynamical systems , and non-equilibrium / disordered / complex systems. Popular tools include statistical mechanics , the renormalization group , rough path theory , etc. Kinetic Monte Carlo (KMC) is a form of computer simulation in which atoms and molecules are allowed to interact at given rates determined by known physics . This simulation method is typically used in the micro-electronics industry to study crystal surface growth, and it can provide accurate models of surface morphology under different growth conditions on time scales typically ranging from microseconds to hours. Experimental methods such as scanning electron microscopy (SEM) , X-ray diffraction , and transmission electron microscopy (TEM) , and other computer simulation methods such as molecular dynamics (MD) and Monte Carlo simulation (MC) , are also widely used. First, the model tries to predict where an atom will land on a surface and its rate at particular environmental conditions, such as temperature and vapor pressure. In order to land on a surface, atoms have to overcome the so-called activation energy barrier. The frequency of passing through the activation barrier can be calculated by the Arrhenius equation : {\displaystyle A_{in}=A_{0,in}\exp \left(-{\frac {E_{a,in}}{kT}}\right)} where A 0,in is the thermal frequency of molecular vibration (the attempt frequency), E a,in is the activation energy, k is the Boltzmann constant and T is the absolute temperature . When atoms land on a surface, there are two possibilities. First, they may diffuse on the surface and find other atoms to make a cluster, which is discussed below. Second, they may come off the surface again, in the so-called desorption process. Desorption is described exactly as the adsorption process, with the exception of a different activation energy barrier: {\displaystyle A_{out}=A_{0,out}\exp \left(-{\frac {E_{a,out}}{kT}}\right)} For example, if all positions on the surface of the crystal are energetically equivalent, the rate of growth can be calculated from the Turnbull formula : {\displaystyle V_{c}=hC_{0}(A_{out}-A_{0,out})=hC_{0}\exp \left(-{\frac {E_{a,in}}{kT}}\right)\cdot \left(1-\exp \left(-{\frac {\Delta G}{kT}}\right)\right)} where V c is the rate of growth, ∆G = E in – E out , A out and A 0,out are the frequencies for any given molecule on the surface to go into or out of the crystal, h is the height of the molecule in the growth direction, and C 0 is the concentration of the molecules in the direct vicinity of the surface. The diffusion process can also be calculated with the Arrhenius equation: {\displaystyle D=D_{0}\exp \left(-{\frac {E_{d}}{kT}}\right)} where D is the diffusion coefficient and E d is the diffusion activation energy . All three processes strongly depend on the surface morphology at a given time. For example, atoms tend to land at the edges of a group of connected atoms, the so-called island, rather than on a flat surface, because this reduces the total energy. 
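The adsorption, desorption and diffusion rates above all share the same Arrhenius form, so a single helper covers them; in the sketch below (Python) the attempt frequency and barrier values are order-of-magnitude placeholders, not parameters of any specific material.

```python
import numpy as np

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def arrhenius_rate(attempt_frequency, activation_energy_eV, temperature_K):
    """Rate of a thermally activated event, A = A0 * exp(-Ea / kT)."""
    return attempt_frequency * np.exp(-activation_energy_eV / (K_B * temperature_K))

# Illustrative values: attempt frequency ~1e13 /s, barriers of a fraction of an eV.
T = 600.0                                            # absolute temperature in K
rate_adsorb  = arrhenius_rate(1e13, 0.4, T)          # landing on the surface
rate_desorb  = arrhenius_rate(1e13, 1.2, T)          # leaving the surface again
rate_diffuse = arrhenius_rate(1e13, 0.7, T)          # hopping to a neighbouring site
print(rate_adsorb, rate_desorb, rate_diffuse)
```

Because the barrier enters exponentially, modest changes in temperature or activation energy shift these rates by many orders of magnitude, which is why the growth regime is so sensitive to the process conditions.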
When atoms diffuse and connect to an island, each atom tends to diffuse no further, because the activation energy to detach itself from the island is much higher. Moreover, if an atom lands on top of an island and cannot diffuse away quickly, it tends to move down over the step edges and enlarge the island. Because of limited computing power, specialized simulation models have been developed for various purposes depending on the time scale: a) Electronic scale simulations (density functional theory, ab-initio molecular dynamics): sub-atomic length scale on a femtosecond time scale; b) Atomic scale simulations (MD): nano to micrometer length scale on a nanosecond time scale; c) Film scale simulations (KMC): micrometer length scale on microsecond-to-hour time scales; d) Reactor scale simulations (phase field model): meter length scale on a time scale of years. Multiscale modeling techniques have also been developed to deal with overlapping time scales. Growing a smooth and defect-free surface requires a suitable combination of physical conditions throughout the process. Such conditions are bond strength , temperature, surface diffusion and the supersaturation (or impingement) rate. Using the KMC surface growth method, the final surface structure can be simulated for different combinations of these conditions. Bond strength and temperature certainly play important roles in the crystal growth process. For high bond strength, when atoms land on a surface, they tend to stay close to atomic surface clusters, which reduces the total energy. This behavior results in many isolated cluster formations with a variety of sizes, yielding a rough surface . Temperature, on the other hand, controls the height of the energy barrier. Conclusion: high bond strength and low temperature are preferred for growing a smooth surface. Thermodynamically, a smooth surface is the lowest-energy configuration, since it has the smallest surface area . However, a kinetic process such as surface and bulk diffusion is required to create a perfectly flat surface. Conclusion: enhancing surface and bulk diffusion will help create a smoother surface. Conclusion: a low impingement rate helps create a smoother surface. With control of all growth conditions such as temperature, bond strength, diffusion, and saturation level, the desired morphology can be formed by choosing the right parameters, and such simulations also demonstrate how particular surface features can be obtained.
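To show how rates of this kind drive a kinetic Monte Carlo simulation, here is a minimal sketch of one rejection-free KMC step (the standard event-selection and time-advance scheme); the event list is a placeholder, and a real surface-growth model would rebuild it from the current surface configuration after every step.

```python
import math
import random

def kmc_step(events):
    """One rejection-free kinetic Monte Carlo step.

    events: list of (name, rate) pairs for every event currently possible.
    Returns the chosen event and the elapsed time increment.
    """
    total_rate = sum(rate for _, rate in events)
    # Choose an event with probability proportional to its rate.
    target = random.random() * total_rate
    running = 0.0
    chosen = events[-1][0]
    for name, rate in events:
        running += rate
        if running >= target:
            chosen = name
            break
    # Advance the clock by an exponentially distributed waiting time.
    dt = -math.log(1.0 - random.random()) / total_rate
    return chosen, dt

# Illustrative event list built from Arrhenius rates like those above.
events = [("deposit an atom", 1.0e2), ("diffuse along a step", 3.0e5), ("desorb", 1.0e-1)]
print(kmc_step(events))
```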
https://en.wikipedia.org/wiki/Surface_growth
Surface hopping is a mixed quantum-classical technique that incorporates quantum mechanical effects into molecular dynamics simulations. [ 1 ] [ 2 ] [ 3 ] [ 4 ] Traditional molecular dynamics assumes the Born-Oppenheimer approximation , where the lighter electrons adjust instantaneously to the motion of the nuclei. Though the Born-Oppenheimer approximation is applicable to a wide range of problems, there are several applications, such as photoexcited dynamics , electron transfer , and surface chemistry , where this approximation breaks down. Surface hopping partially incorporates these non-adiabatic effects by including excited adiabatic surfaces in the calculations, and allowing for 'hops' between these surfaces, subject to certain criteria. Molecular dynamics simulations numerically solve the classical equations of motion . These simulations, though, assume that the forces on the nuclei are derived solely from the ground adiabatic surface . Solving the time-dependent Schrödinger equation numerically incorporates all these effects, but is computationally unfeasible when the system has many degrees of freedom. To tackle this issue, one approach is the mean field or Ehrenfest method, where the molecular dynamics is run on the average potential energy surface given by a linear combination of the adiabatic states. This has been applied successfully for some applications, but has some important limitations. When the difference between the adiabatic states is large, the dynamics must be primarily driven by only one surface, and not an average potential. In addition, this method also violates the principle of microscopic reversibility. [ 3 ] Surface hopping accounts for these limitations by propagating an ensemble of trajectories, each one of them on a single adiabatic surface at any given time. The trajectories are allowed to 'hop' between various adiabatic states at certain times such that the quantum amplitudes for the adiabatic states follow the time dependent Schrödinger equation. The probability of these hops depends on the coupling between the states, and is generally significant only in the regions where the difference between adiabatic energies is small. The formulation described here is in the adiabatic representation for simplicity. [ 5 ] It can easily be generalized to a different representation. The coordinates of the system are divided into two categories: quantum ( q {\displaystyle \mathbf {q} } ) and classical ( R {\displaystyle \mathbf {R} } ). The Hamiltonian of the quantum degrees of freedom with mass m n {\displaystyle m_{n}} is defined as {\displaystyle H=\sum _{n}-{\frac {\hbar ^{2}}{2m_{n}}}\nabla _{\mathbf {q} _{n}}^{2}+V(\mathbf {q} ,\mathbf {R} ),} where V {\displaystyle V} describes the potential for the whole system. The eigenfunctions of H {\displaystyle H} are ϕ n ( q ; R ) {\displaystyle \phi _{n}(\mathbf {q} ;\mathbf {R} )} , and the corresponding eigenvalues as functions of R {\displaystyle \mathbf {R} } are called the adiabatic surfaces . Typically, q {\displaystyle \mathbf {q} } corresponds to the electronic degrees of freedom, light atoms such as hydrogen , or high frequency vibrations such as the O-H stretch. The forces in the molecular dynamics simulations are derived only from one adiabatic surface, and are given by {\displaystyle \mathbf {F} _{n}(\mathbf {R} )=-\nabla _{\mathbf {R} }\langle \phi _{n}|H|\phi _{n}\rangle =-\langle \phi _{n}|\nabla _{\mathbf {R} }H|\phi _{n}\rangle ,} where n {\displaystyle n} represents the chosen adiabatic surface. The last equality is derived using the Hellmann-Feynman theorem . The brackets show that the integral is done only over the quantum degrees of freedom. 
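As a small illustration of adiabatic surfaces and the force on a chosen surface, the following sketch (Python) diagonalizes a simple two-state diabatic model whose parameters are purely illustrative, and obtains the force from a numerical derivative of the selected eigenvalue; for exact eigenstates this agrees with the Hellmann-Feynman expression quoted above.

```python
import numpy as np

def diabatic_hamiltonian(R, A=0.01, B=1.6, C=0.005, D=1.0):
    """A simple 2x2 avoided-crossing model; the parameters are illustrative."""
    V11 = A * np.tanh(B * R)
    V12 = C * np.exp(-D * R ** 2)
    return np.array([[V11, V12], [V12, -V11]])

def adiabatic_energy_and_force(R, state, dR=1.0e-5):
    """Adiabatic energy of `state` at R and the force -dE/dR on that surface."""
    energy = np.linalg.eigvalsh(diabatic_hamiltonian(R))[state]
    e_plus = np.linalg.eigvalsh(diabatic_hamiltonian(R + dR))[state]
    e_minus = np.linalg.eigvalsh(diabatic_hamiltonian(R - dR))[state]
    force = -(e_plus - e_minus) / (2.0 * dR)   # central finite difference
    return energy, force

# Energy and force on the lower adiabatic surface at an illustrative geometry.
print(adiabatic_energy_and_force(R=-0.5, state=0))
```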
Choosing only one adiabatic surface is an excellent approximation if the difference between the adiabatic surfaces is large for energetically accessible regions of R {\displaystyle \mathbf {R} } . When this is not the case, the effect of the other states become important. This effect is incorporated in the surface hopping algorithm by considering the wavefunction of the quantum degrees of freedom at time t as an expansion in the adiabatic basis: where c n ( t ) {\displaystyle c_{n}(t)} are the expansion coefficients. Substituting the above equation into the time dependent Schrödinger equation gives where V j n {\displaystyle V_{jn}} and the nonadiabatic coupling vector d j n {\displaystyle \mathbf {d} _{jn}} are given by The adiabatic surface can switch at any given time t based on how the quantum probabilities | c j ( t ) | 2 {\displaystyle |c_{j}(t)|^{2}} are changing with time. The rate of change of | c j ( t ) | 2 {\displaystyle |c_{j}(t)|^{2}} is given by: where a n j = c n c j ∗ {\displaystyle a_{nj}=c_{n}c_{j}^{*}} . For a small time interval dt, the fractional change in | c j ( t ) | 2 {\displaystyle |c_{j}(t)|^{2}} is given by This gives the net change in flux of population from state j {\displaystyle j} . Based on this, the probability of hopping from state j to n is proposed to be This criterion is known as the "fewest switching" algorithm, as it minimizes the number of hops required to maintain the population in various adiabatic states. Whenever a hop takes place, the velocity is adjusted to maintain conservation of energy . To compute the direction of the change in velocity, the nuclear forces in the transition is where E j = ⟨ ϕ j | H | ϕ j ⟩ {\displaystyle E_{j}=\langle \phi _{j}|H|\phi _{j}\rangle } is the eigen value. For the last equality, d j n = − d n j {\displaystyle d_{jn}=-d_{nj}} is used. This shows that the nuclear forces acting during the hop are in the direction of the nonadiabatic coupling vector d j n {\displaystyle \mathbf {d} _{jn}} . Hence d j n {\displaystyle \mathbf {d} _{jn}} is a reasonable choice for the direction along which velocity should be changed. If the velocity reduction required to conserve energy while making a hop is greater than the component of the velocity to be adjusted, then the hop is known as frustrated. In other words, a hop is frustrated if the system does not have enough energy to make the hop. Several approaches have been suggested to deal with these frustrated hops. The simplest of these is to ignore these hops. [ 2 ] Another suggestion is not to change the adiabatic state, but reverse the direction of the component of the velocity along the nonadiabatic coupling vector. [ 5 ] Yet another approach is to allow the hop to happen if an allowed hopping point is reachable within uncertainty time δ t = ℏ / 2 Δ E {\displaystyle \delta t=\hbar /2\Delta E} , where Δ E {\displaystyle \Delta E} is the extra energy that the system needed to make the hop possible. [ 6 ] Ignoring forbidden hops without any form of velocity reversal does not recover the correct scaling for Marcus theory in the nonadiabatic limit, but a velocity reversal can usually correct the errors [ 7 ] Surface hopping can develop nonphysical coherences between the quantum coefficients over large time which can degrade the quality of the calculations, at times leading the incorrect scaling for Marcus theory . 
[ 8 ] To eliminate these errors, the quantum coefficients for the inactive state can be damped or set to zero after a predefined time has elapsed after the trajectory crosses the region where hopping has high probabilities. [ 5 ] The state of the system at any time t {\displaystyle t} is given by the phase space of all the classical particles, the quantum amplitudes, and the adiabatic state. The simulation broadly consists of the following steps: Step 1. Initialize the state of the system. The classical positions and velocities are chosen based on the ensemble required. Step 2. Compute forces using Hellmann-Feynman theorem, and integrate the equations of motion by time step Δ t {\displaystyle \Delta t} to obtain the classical phase space at time t + Δ t {\displaystyle t+\Delta t} . Step 3. Integrate the Schrödinger equation to evolve quantum amplitudes from time t {\displaystyle t} to t + Δ t {\displaystyle t+\Delta t} in increments of δ t {\displaystyle \delta t} . This time step δ t {\displaystyle \delta t} is typically much smaller than Δ t {\displaystyle \Delta t} . Step 4. Compute probability of hopping from current state to all other states. Generate a random number, and determine whether a switch should take place. If a switch does occur, change velocities to conserve energy. Go back to step 2, till trajectories have been evolved for the desired time. The method has been applied successfully to understand dynamics of systems that include tunneling, conical intersections and electronic excitation . [ 9 ] [ 10 ] [ 11 ] [ 12 ] In practice, surface hopping is computationally feasible only for a limited number of quantum degrees of freedom. In addition, the trajectories must have enough energy to be able to reach the regions where probability of hopping is large. Most of the formal critique of the surface hopping method comes from the unnatural separation of classical and quantum degrees of freedom. Recent work has shown, however, that the surface hopping algorithm can be partially justified by comparison with the Quantum Classical Liouville Equation. [ 13 ] It has further been demonstrated that spectroscopic observables can be calculated in close agreement with the formally exact hierarchical equations of motion . [ 14 ]
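The hop decision in step 4 can be sketched as follows for the adiabatic representation used in the text; this is a schematic fragment in Python (the propagation of the classical coordinates and of the quantum amplitudes in steps 2 and 3 is assumed to be handled elsewhere), and the variable names are placeholders rather than part of any established code.

```python
import numpy as np

def fewest_switches_hop(c, active, velocity, d_coupling, dt, rng=None):
    """Fewest-switches hop decision away from the active adiabatic state.

    c:          complex amplitudes of the adiabatic states
    active:     index j of the currently occupied surface
    velocity:   classical velocities over the nuclear degrees of freedom
    d_coupling: d_coupling[j][n] is the nonadiabatic coupling vector d_jn
    dt:         classical time step
    """
    rng = rng or np.random.default_rng()
    population = abs(c[active]) ** 2
    hop_probabilities = []
    for n in range(len(c)):
        if n == active:
            hop_probabilities.append(0.0)
            continue
        a_nj = c[n] * np.conjugate(c[active])
        # Population flux from the active state j into state n
        # (adiabatic representation, using d_nj = -d_jn as in the text).
        flux = 2.0 * np.real(a_nj) * np.dot(velocity, d_coupling[active][n])
        hop_probabilities.append(max(0.0, flux * dt / population))
    # A single random number is compared against the cumulative hop probabilities.
    xi = rng.random()
    cumulative = 0.0
    for n, p in enumerate(hop_probabilities):
        cumulative += p
        if xi < cumulative:
            return n        # hop to state n; the caller rescales the velocity
    return active           # no hop: stay on the current surface
```

A production implementation would also rescale the velocity along the nonadiabatic coupling vector after an accepted hop and handle frustrated hops, as discussed above.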
https://en.wikipedia.org/wiki/Surface_hopping
Surface imperfections on optical surfaces such as lenses or mirrors , can be caused during the manufacturing of the part or handling. These imperfections are part of the surface and cannot be removed by cleaning. Surface quality is characterized either by the American military standard notation (eg "60-40") or by specifying RMS ( root mean square ) roughness (eg "0.3 nm RMS"). [ 1 ] American notation focuses on how visible surface defects are, and is a "cosmetic" specification. RMS notation is an objective measurable property of the surface. Tighter specifications increase the costs of fabricating optical elements but looser ones affect performance. [ 2 ] [ 3 ] While surface imperfections can be labeled "cosmetic defects", they are not purely cosmetic. Optics for laser applications are more sensitive to surface quality as any imperfections can lead to laser-induced damage. In some cases, imperfections in optical elements will be directly imaged as defects in the image plane. Optical systems requiring high radiation intensity tend to be sensitive to any loss of power due to surface scattering caused by imperfections. Systems operating in the ultraviolet range require a more demanding standard as the shorter wavelength of the ultraviolet radiation is more sensitive to scattering. There are many different standards used by optical element manufacturers, designers, and users which vary by geographic region and industry. For example, German manufacturers use ISO 10110, while the US military developed MIL-PRF-13830 and their long-standing use of it has made it the de facto global standard. [ 3 ] It is not always possible to translate the scratch grade by one standard to another and sometimes the translation ends up being statistical (sampling defects to ensure that statistically, the percentage rejected elements will be similar in both methods). [ 4 ] Examining surface quality in terms of 'Scratch & Dig' is a specialized skill that takes time to develop. The practice is to compare the element to a standard master (reference). [ 3 ] Automated systems now replace the human technician, for flat optics, but recently also for convex and concave lenses . [ 3 ] [ 5 ] In contrast, 'Roughness' characterization is done with more precise and easier-to-quantify methods. The various standards separate two main categories for surface quality: scratch & dig and roughness. [ 1 ] A scratch is defined as a long and narrow defect that tears the surface of the glass or coating . [ 6 ] There are standards that refer to the degree of visibility, which is the relative brightness of the scratch. In these cases, there is also a standard for the lighting conditions used for the test. Other standards classify scratches according to their dimensions. A dig is defined as a pit, a rough area, or a small crater on the surface of the glass (or any other optical material). [ 6 ] All standards measure the physical size of the dig. Some standards include small defects within the glass that are visible through the surface, such as bubbles and inclusions. Roughness , texture or optical finish is a defect that originates from the element's manufacturing. Texture is a periodical phenomenon with a high spatial frequency (or in other words, in small dimensions), which affects the entire surface and causes the scattering of incident light. [ 7 ] A higher value of roughness means a rougher surface. 
[ 7 ] The texture is especially important in cases where the polishing is carried out using new processing methods such as diamond turning , which leaves a residual periodical signature on the surface, affecting the quality of the obtained image or the level of scattering from the surface. The amount of scattered light is proportional to the square of the RMS of the roughness. [ 1 ] This is the most common standard, stemming from a standard that was originally proposed by McLeod and Sherwood of Kodak back in 1945 and evolved in 1954 into the military standard MIL-O-13830A. [ 1 ] [ 3 ] It defines the quality of the surface by a pair of numbers, the first is a measure of the visibility of the scratch and the second is the size of the dig. Scratch visibility grades are described by a series of arbitrary numbers: 10, 20, 40, 60, and 80 where the brightest scratches, the easiest to see using the naked eye , are grade 80, while the most difficult to detect are grade 10. A scratch on a tested part is compared with an industrial or military standard (master) on which there are scratches of different degrees of visibility and the comparison is made using the naked eye, under controlled lighting conditions. [ 2 ] It is important to recognize that this is a subjective test and its results can vary between different people. [ 2 ] The scratches' visibility largely depends on their shape, and contrary to popular belief, there is little correlation between the scratch's visibility grade and its width. [ 3 ] One cannot measure the width of a scratch to determine its grade. [ 3 ] On the other hand, a dig's grade is a precise and measurable value. It is the diameter of the largest dig that is found on the tested surface, in units of hundredths of a millimeter. It is customary to use discrete grades of 5, 10, 20, 40, or 50, where of course the larger numbers describe larger imperfections. There are many default definitions in the MIL standard. For example, the grade that must be required outside the clear aperture (the part of the lens to which the standard applies, also called "effective diameter" or CA) is, in the absence of another definition, 80-50. [ 8 ] This is a very basic surface characterization and is easy to achieve. It describes a scratch whose brightness is less than that of a scratch at visibility grade 80 and a dig with a diameter of up to 0.5 mm (50 hundredths = 50/100=0.5). 60-40 is considered "commercial" quality, while for demanding laser applications 20-10 or even 10-5 are used. [ 6 ] The scratches on a 10-5 or 20-10 surface can be hard to see, making the visibility standard more subjective. [ 1 ] Other standards may work better when precision surfaces are required. Optical coating can change scratch visibility, so for example an element that passes 40-20 before coating can be worse than 60-40 after coating. [ 1 ] Accumulation and concentration rules regulate common situations in which there are multiple defects on the surface of an optical element, and clarify how they should be added up. [ 2 ] [ 8 ] For example, if one or more scratches are found with the maximum visibility allowed, to pass the test, the sum of the length of these scratches is limited to a quarter of the diameter of the element. [ 2 ] [ 8 ] The number of digs at the maximum permitted level is determined by dividing the measured clear aperture diameter (in millimeters) by 20, and rounding up. For example, for a clear aperture of 81 mm, 5 digs are allowed at the maximum level. 
[ 4 ] Since the comparison master is only in possession of the US Army , several commercial masters have been developed that are intended to be compatible, but due to the complexity of the factors that make a scratch visible, these masters are not always compatible with the original and there is no way to match one set to another. [ 3 ] For example, a visibility grade 10 scratch on one master can appear brighter than a visibility grade 60 scratch on another master. [ 1 ] For this reason, it is recommended to also indicate on the drawing the type of master set to which it must be compared during the test. [ 1 ] Examples of such commercial comparison sets made of plastic or glass are Davidson Optronics, Brysen Optical, and Jenoptik Paddle – sold by ThorLabs and Edmund Optics. [ 1 ] [ 3 ] This standard is used in the USA, China, Japan, Russia, and all of Europe. The notation as of 2007 is: 5/ N x A; C N' x A'; L N" x A"; E A''', where N and A represent the number of defects and the maximum size of the defect, N' and A' represent the number of imperfections on the coating and their maximum size, N'' and A'' represent the number of scratches allowed and their maximum size and A''' represent the maximum size of an edge chip (a defect on the rim of the optical element). A scratch in this case is defined as a defect longer than 2 mm. Only the first part of the characterization, N x A, is mandatory. The rest of the details can be omitted. A and A' are given as the square root of the area of the defect and are indicated by discrete values from the series: 4,2.5,1.6,1,0.63,0.4,0.25. [ 4 ] In addition to the limits on the number of defects and their size, the total area of all imperfections must not exceed A*N 2 . [ 4 ] Long defects (scratches) are summed up by their width, independent of length. There is no limit on the number of edge chips, and the concentration of imperfections is limited by the rule that at most 20% of the defects, allowance can be concentrated in an area of 5% of the clear aperture. [ 4 ] A fundamental advantage of ISO is a relatively simple translation between the percentage of light scattered from a surface and the characterization of its surface, according to the formula: [ 4 ] Scatter % = 4 x [(N x A 2 )+(N' x A' 2 )+ N" x A" x Φ]/(π x Φ 2 ) Unlike MIL-PRF-13830B which is cheap and fast to use, but suffers from inaccuracies, the use of the dimensional standard of ISO 10110-7 is more accurate but takes a longer time to test and is therefore expensive. [ 2 ] The relatively long test time is derived from the fact that testing according to this standard is carried out using a microscope, comparing sizes of each defect to defects on a master, and because of the large magnification needed the field of view is small, requiring several measurements to map each optical element. [ 2 ] David Aikens, director of Optics and Electro-Optics Standards Council, [ 9 ] presented a recommended conversion chart that preserves the level of quality control, or percent fall, in ISO scratch & dig testing versus the military standard. For example 5/2x0.40; L 3 x0.010 is a statistically-equivalent standard to 60-40 of the strict military standard, over a 20 mm opening. [ 4 ] The logical flaw of this dimensional standard is in defining a scratch according only to its width. For example, if a lens with a diameter of 100 mm has a requirement of L 1 x 0.025, a single scratch with a thickness of up to 25 microns is allowed, even if it covers the entire 100 mm diameter. 
However, if the manufacturer polishes the surface and removes the scratch from the central 95 millimeters of the lens, there will be two scratches each 2.5 mm in length and now the lens will fail the acceptance tests because the characterization allows only one scratch. The illogicality here is obvious: it is not acceptable to reject a component due to a process that improves its quality. [ 4 ] As of 2017, to support quick measurements intended for less sensitive surfaces, ISO 10110-7 also allows the definition of scratches according to their visibility, and the definition of digs according to their diameter, just like MIL-PRF-13830B, using the same grades, for example 60-40. [ 2 ] [ 4 ] It is possible to expand the notation and also mark coating imperfections as well as edge chips, similarly to the definition in the dimensional standard: 5 / S - D; C S' - D'; E A''' where S and D are the definitions for scratches and digs, S' and D' for these defects on the coating, and A''' characterizes an edge chip as defined above. As explained for the military standard, it is important to explicitly specify which master set the scratch brightnesses are to be compared to. [ 4 ] These standards are almost as popular as MIL-PRF-13830B but they have become less popular with time. These standards define scratches and digs according to their physical size and mark their grade with the letters: A, B, C, D, E, F, G (and H, which is used only for digs). [ 3 ] The letter A represents the narrowest scratch, which is 0.005 mm wide, and the smallest dig, which is 0.05 mm in diameter. On the other hand, the letter G represents a scratch that is 0.12 mm wide and a dig that is 0.7 mm in diameter. A microscope or magnifying glass is used for testing, or sometimes even just the naked eye, to compare to a master. This American standard was first published in 2006. [ 3 ] Just like the MIL-PRF-13830B standard, ANSI OP1.002 defines digs according to their diameter. ANSI OP1.002 also supports two separate methods for scratches: visibility and size. The visibility method defines scratches according to their visibility and is identical in design and terminology to the MIL-PRF-13830B standard. Just like the military standard, it uses two numbers, the first for scratches and the second for digs, maintaining their meaning as in the military standard. Examples: 80-50, 60-40. This method takes advantage of the speed and low cost of the visual inspection and is used for elements with looser tolerances. The dimensionality method for scratches is based on the MIL-C-48497A standard, which is considered easy to use and functional. [ 1 ] The dimensional method uses two letters, the first for scratches and the second for digs. For example: A-A or E-E. This standard is intended for parts with tight surface quality tolerances, such as CCD cover glasses or demanding laser applications. [ 1 ] (A table relating each specification letter to its scratch width and dig diameter in microns accompanies the standard but is not reproduced here.) The OP1.002 standard allows using a microscope to compare with the master. [ 1 ] This standard allows a relatively easy translation between the desired scattering level and the surface quality, as mentioned above. [ 1 ] This original standard was general in nature, not intended for the characterization of polished surfaces per se. It used parameters that are not typically used for the characterization of optical elements, such as average roughness. 
[ 1 ] This standard replaced MIL-STD-10A and defines more than forty different parameters including RMS (root mean square), slope, skew, PSD (Power Spectral Density, which is the most comprehensive characteristic), and more. There is a significant improvement in this standard because it allows the characterization of machined surfaces, at different spatial frequencies, which is especially important in cases where the optics were produced using techniques that leave periodic marks, such as caused by diamond turning . For most uses it is sufficient to use RMS. [ 1 ] In all cases, it is important to specify the range in which the calculation is performed because without defining the spatial frequency range in which the measurement is performed, this standard is meaningless. This popular standard, similar to ASME B46.1, also defines the RMS of the surface over a specific length scale, PSD and more. It differs from the ASME specification by using symbols instead of words. [ 1 ] [ 7 ]
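As a small worked illustration of two of the quantitative rules quoted earlier in this section, the ISO 10110-7 scatter estimate and the MIL-style rule for the number of maximum-size digs, the following sketch (Python) simply evaluates those expressions; the input specification is the 5/2x0.40; L 3x0.010 example from the text, and the function names are ad hoc.

```python
import math

def iso_scatter_estimate(N, A, Nc, Ac, Ns, As, clear_aperture):
    """Evaluate the quoted ISO 10110-7 scatter expression.

    N, A   : number and size (mm) of surface imperfections (5/N x A)
    Nc, Ac : number and size (mm) of coating imperfections (C N' x A')
    Ns, As : number and width (mm) of long scratches (L N" x A")
    clear_aperture : clear aperture diameter Phi (mm)
    """
    phi = clear_aperture
    return 4.0 * (N * A ** 2 + Nc * Ac ** 2 + Ns * As * phi) / (math.pi * phi ** 2)

def allowed_digs(clear_aperture_mm):
    """Number of maximum-size digs permitted under the MIL-style rule (CA/20, rounded up)."""
    return math.ceil(clear_aperture_mm / 20.0)

# Example from the text: 5/2x0.40; L 3x0.010 over a 20 mm clear aperture.
print(iso_scatter_estimate(N=2, A=0.40, Nc=0, Ac=0.0, Ns=3, As=0.010, clear_aperture=20.0))
print(allowed_digs(81.0))   # 81 mm aperture -> 5 digs at the maximum level
```

The text interprets the result of the scatter expression as a percentage of scattered light; the dig count reproduces the 81 mm example given above.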
https://en.wikipedia.org/wiki/Surface_imperfections_(optics)
In mathematics , particularly multivariable calculus , a surface integral is a generalization of multiple integrals to integration over surfaces . It can be thought of as the double integral analogue of the line integral . Given a surface, one may integrate over this surface a scalar field (that is, a function of position which returns a scalar as a value), or a vector field (that is, a function which returns a vector as value). If a region R is not flat, then it is called a surface. Surface integrals have applications in physics , particularly in the classical theories of electromagnetism and fluid mechanics . Assume that f is a scalar, vector, or tensor field defined on a surface S . To find an explicit formula for the surface integral of f over S , we need to parameterize S by defining a system of curvilinear coordinates on S , like the latitude and longitude on a sphere . Let such a parameterization be r ( s , t ) , where ( s , t ) varies in some region T in the plane . Then, the surface integral is given by {\displaystyle \iint _{S}f\,\mathrm {d} S=\iint _{T}f(\mathbf {r} (s,t))\left\|{\frac {\partial \mathbf {r} }{\partial s}}\times {\frac {\partial \mathbf {r} }{\partial t}}\right\|\mathrm {d} s\,\mathrm {d} t,} where the expression between bars on the right-hand side is the magnitude of the cross product of the partial derivatives of r ( s , t ) , and is known as the surface element (which would, for example, yield a smaller value near the poles of a sphere, where the lines of longitude converge more dramatically, and latitudinal coordinates are more compactly spaced). The surface integral can also be expressed in the equivalent form {\displaystyle \iint _{S}f\,\mathrm {d} S=\iint _{T}f(\mathbf {r} (s,t)){\sqrt {g}}\,\mathrm {d} s\,\mathrm {d} t,} where g is the determinant of the first fundamental form of the surface mapping r ( s , t ) . [ 1 ] [ 2 ] For example, if we want to find the surface area of the graph of some scalar function, say z = f ( x , y ) , we have {\displaystyle A=\iint _{S}\,\mathrm {d} S=\iint _{T}\left\|{\frac {\partial \mathbf {r} }{\partial x}}\times {\frac {\partial \mathbf {r} }{\partial y}}\right\|\mathrm {d} x\,\mathrm {d} y,} where r = ( x , y , z ) = ( x , y , f ( x , y )) . So that ∂ r ∂ x = ( 1 , 0 , f x ( x , y ) ) {\displaystyle {\partial \mathbf {r} \over \partial x}=(1,0,f_{x}(x,y))} , and ∂ r ∂ y = ( 0 , 1 , f y ( x , y ) ) {\displaystyle {\partial \mathbf {r} \over \partial y}=(0,1,f_{y}(x,y))} . So, {\displaystyle A=\iint _{T}\left\|(-f_{x},-f_{y},1)\right\|\mathrm {d} x\,\mathrm {d} y=\iint _{T}{\sqrt {f_{x}^{2}+f_{y}^{2}+1}}\,\mathrm {d} x\,\mathrm {d} y,} which is the standard formula for the area of a surface described this way. One can recognize the vector ( − f x , − f y , 1 ) above as the normal vector to the surface. Because of the presence of the cross product, the above formulas only work for surfaces embedded in three-dimensional space. This can be seen as integrating a Riemannian volume form on the parameterized surface, where the metric tensor is given by the first fundamental form of the surface. Consider a vector field v on a surface S , that is, for each r = ( x , y , z ) in S , v ( r ) is a vector. The integral of v on S was defined in the previous section. Suppose now that it is desired to integrate only the normal component of the vector field over the surface, the result being a scalar, usually called the flux passing through the surface. For example, imagine that we have a fluid flowing through S , such that v ( r ) determines the velocity of the fluid at r . The flux is defined as the quantity of fluid flowing through S per unit time. This illustration implies that if the vector field is tangent to S at each point, then the flux is zero because, on the surface S , the fluid just flows along S , and neither in nor out. This also implies that if v does not just flow along S , that is, if v has both a tangential and a normal component, then only the normal component contributes to the flux. 
Based on this reasoning, to find the flux, we need to take the dot product of v with the unit surface normal n to S at each point, which will give us a scalar field, and integrate the obtained field as above. In other words, we have to integrate v with respect to the vector surface element d s = n d s {\displaystyle \mathrm {d} \mathbf {s} ={\mathbf {n} }\mathrm {d} s} , which is the vector normal to S at the given point, whose magnitude is d s = ‖ d s ‖ . {\displaystyle \mathrm {d} s=\|\mathrm {d} {\mathbf {s} }\|.} We find the formula The cross product on the right-hand side of the last expression is a (not necessarily unital) surface normal determined by the parametrisation. This formula defines the integral on the left (note the dot and the vector notation for the surface element). We may also interpret this as a special case of integrating 2-forms, where we identify the vector field with a 1-form, and then integrate its Hodge dual over the surface. This is equivalent to integrating ⟨ v , n ⟩ d S {\displaystyle \left\langle \mathbf {v} ,\mathbf {n} \right\rangle \mathrm {d} S} over the immersed surface, where d S {\displaystyle \mathrm {d} S} is the induced volume form on the surface, obtained by interior multiplication of the Riemannian metric of the ambient space with the outward normal of the surface. Let be a differential 2-form defined on a surface S , and let be an orientation preserving parametrization of S with ( s , t ) {\displaystyle (s,t)} in D . Changing coordinates from ( x , y ) {\displaystyle (x,y)} to ( s , t ) {\displaystyle (s,t)} , the differential forms transform as So d x d y {\displaystyle \mathrm {d} x\mathrm {d} y} transforms to ∂ ( x , y ) ∂ ( s , t ) d s d t {\displaystyle {\frac {\partial (x,y)}{\partial (s,t)}}\mathrm {d} s\mathrm {d} t} , where ∂ ( x , y ) ∂ ( s , t ) {\displaystyle {\frac {\partial (x,y)}{\partial (s,t)}}} denotes the determinant of the Jacobian of the transition function from ( s , t ) {\displaystyle (s,t)} to ( x , y ) {\displaystyle (x,y)} . The transformation of the other forms are similar. Then, the surface integral of f on S is given by where is the surface element normal to S . Let us note that the surface integral of this 2-form is the same as the surface integral of the vector field which has as components f x {\displaystyle f_{x}} , f y {\displaystyle f_{y}} and f z {\displaystyle f_{z}} . Various useful results for surface integrals can be derived using differential geometry and vector calculus , such as the divergence theorem , magnetic flux , and its generalization, Stokes' theorem . Let us notice that we defined the surface integral by using a parametrization of the surface S . We know that a given surface might have several parametrizations. For example, if we move the locations of the North Pole and the South Pole on a sphere, the latitude and longitude change for all the points on the sphere. A natural question is then whether the definition of the surface integral depends on the chosen parametrization. For integrals of scalar fields, the answer to this question is simple; the value of the surface integral will be the same no matter what parametrization one uses. For integrals of vector fields, things are more complicated because the surface normal is involved. It can be proven that given two parametrizations of the same surface, whose surface normals point in the same direction, one obtains the same value for the surface integral with both parametrizations. 
If, however, the normals for these parametrizations point in opposite directions, the value of the surface integral obtained using one parametrization is the negative of the one obtained via the other parametrization. It follows that given a surface, we do not need to stick to any unique parametrization, but, when integrating vector fields, we do need to decide in advance in which direction the normal will point and then choose any parametrization consistent with that direction. Another issue is that sometimes surfaces do not have parametrizations which cover the whole surface. The obvious solution is then to split that surface into several pieces, calculate the surface integral on each piece, and then add them all up. This is indeed how things work, but when integrating vector fields, one needs to again be careful how to choose the normal-pointing vector for each piece of the surface, so that when the pieces are put back together, the results are consistent. For the cylinder, this means that if we decide that for the side region the normal will point out of the body, then for the top and bottom circular parts, the normal must point out of the body too. Last, there are surfaces which do not admit a surface normal at each point with consistent results (for example, the Möbius strip ). If such a surface is split into pieces, on each piece a parametrization and corresponding surface normal is chosen, and the pieces are put back together, we will find that the normal vectors coming from different pieces cannot be reconciled. This means that at some junction between two pieces we will have normal vectors pointing in opposite directions. Such a surface is called non-orientable , and on this kind of surface, one cannot talk about integrating vector fields.
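A short numerical sketch of the parametric definitions above: it evaluates a scalar surface integral (the area) and the flux of a vector field through the upper unit hemisphere using a simple midpoint rule on the parameter grid; the parameterization and the field are chosen only so that the exact answers (both equal to 2π) are easy to check, and the accuracy of the rule is purely illustrative.

```python
import numpy as np

def hemisphere_area_and_flux(vector_field, n_s=200, n_t=200):
    """Scalar and vector surface integrals over the upper unit hemisphere.

    The hemisphere is parameterised by r(s, t) with s the polar angle in
    [0, pi/2] and t the azimuthal angle in [0, 2*pi); a midpoint rule is
    applied on the (s, t) parameter grid.
    """
    ds, dt = (np.pi / 2) / n_s, (2 * np.pi) / n_t
    s = (np.arange(n_s) + 0.5) * ds
    t = (np.arange(n_t) + 0.5) * dt
    S, T = np.meshgrid(s, t, indexing="ij")

    # Parameterisation r(s, t) and its partial derivatives.
    r = np.stack([np.sin(S) * np.cos(T), np.sin(S) * np.sin(T), np.cos(S)], axis=-1)
    r_s = np.stack([np.cos(S) * np.cos(T), np.cos(S) * np.sin(T), -np.sin(S)], axis=-1)
    r_t = np.stack([-np.sin(S) * np.sin(T), np.sin(S) * np.cos(T), np.zeros_like(S)], axis=-1)

    cross = np.cross(r_s, r_t)                     # (not necessarily unit) surface normal
    dA = np.linalg.norm(cross, axis=-1) * ds * dt  # scalar surface element |r_s x r_t| ds dt

    area = dA.sum()                                # scalar integral of f = 1, i.e. the area
    v = vector_field(r)
    flux = np.sum(np.einsum("ijk,ijk->ij", v, cross)) * ds * dt
    return area, flux

# Flux of v = (x, y, z) through the upper hemisphere; both results approach 2*pi.
print(hemisphere_area_and_flux(lambda r: r))
```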
https://en.wikipedia.org/wiki/Surface_integral
Surface integrity is the surface condition of a workpiece after being modified by a manufacturing process. The term was coined by Michael Field [ 1 ] and John F. Kahles [ 2 ] in 1964. [ 3 ] The surface integrity of a workpiece or item changes the material's properties. The consequences of changes to surface integrity are a mechanical engineering design problem, but the preservation of those properties are a manufacturing consideration. [ 4 ] Surface integrity can have a great impact on a parts function; for example, Inconel 718 can have a fatigue limit as high as 540 MPa (78,000 psi) after a gentle grinding or as low as 150 MPa (22,000 psi) after electrical discharge machining (EDM). [ 5 ] There are two aspects to surface integrity: topography characteristics and surface layer characteristics . The topography is made up of surface roughness , waviness , errors of form, and flaws. The surface layer characteristics that can change through processing are: plastic deformation , residual stresses , cracks, hardness , overaging, phase changes , recrystallization , intergranular attack, and hydrogen embrittlement . When a traditional manufacturing process is used, such as machining , the surface layer sustains local plastic deformation. [ 3 ] [ 4 ] The processes that affect surface integrity can be conveniently broken up into three classes: traditional processes , non-traditional processes , and finishing treatments . Traditional processes are defined as processes where the tool contacts the workpiece surface; for example: grinding , turning , and machining. These processes will only damage the surface integrity if the improper parameters are used, such as dull tools, too high feed speeds, improper coolant or lubrication, or incorrect grinding wheel hardness. Nontraditional processes are defined as processes where the tool does not contact the workpiece; examples of this type of process include EDM, electrochemical machining , and chemical milling . These processes will produce different surface integrity depending on how the processes are controlled; for instance, they can leave a stress-free surface, a remelted surface, or excessive surface roughness. Finishing treatments are defined as processes that negate surface finishes imparted by traditional and non-traditional processes or improve the surface integrity. For example, compressive residual stress can be enhanced via peening or roller burnishing or the recast layer left by EDMing can be removed via chemical milling. [ 6 ] Finishing treatments can affect the workpiece surface in a wide variety of manners. Some clean and/or remove defects, such as scratches, pores, burrs , flash , or blemishes. Other processes improve or modify the surface appearance by improving smoothness, texture, or color. They can also improve corrosion resistance , wear resistance, and/or reduce friction . Coatings are another type of finishing treatment that may be used to plate an expensive or scarce material onto a less expensive base material. [ 6 ] Manufacturing processes have five main variables: the workpiece, the tool , the machine tool , the environment, and process variables. All of these variables can affect the surface integrity of the workpiece by producing: [ 3 ]
https://en.wikipedia.org/wiki/Surface_integrity
Surface metrology is the measurement of small-scale features on surfaces, and is a branch of metrology. Surface primary form, surface fractality, and surface finish (including surface roughness) are the parameters most commonly associated with the field. It is important to many disciplines and is mostly known for the machining of precision parts and assemblies which contain mating surfaces or which must operate with high internal pressures. Surface finish may be measured in two ways: contact and non-contact methods. Contact methods involve dragging a measurement stylus across the surface; these instruments are called profilometers. Non-contact methods include: interferometry, digital holography, confocal microscopy, focus variation, structured light, electrical capacitance, electron microscopy, photogrammetry and non-contact profilometers. The most common method is to use a diamond stylus profilometer. The stylus is run perpendicular to the lay of the surface. [ 1 ] The probe usually traces along a straight line on a flat surface or in a circular arc around a cylindrical surface. The length of the path that it traces is called the measurement length. The wavelength of the lowest frequency filter that will be used to analyze the data is usually defined as the sampling length. Most standards recommend that the measurement length should be at least seven times longer than the sampling length, and according to the Nyquist–Shannon sampling theorem it should be at least two times longer than the wavelength of interesting features. The assessment length or evaluation length is the length of data that will be used for analysis. Commonly, one sampling length is discarded from each end of the measurement length. 3D measurements can be made with a profilometer by scanning over a 2D area on the surface. The disadvantage of a profilometer is that it is not accurate when the size of the surface features is close to the size of the stylus. Another disadvantage is that profilometers have difficulty detecting flaws of the same general size as the roughness of the surface. [ 1 ] There are also limitations for non-contact instruments. For example, instruments that rely on optical interference cannot resolve features that are less than some fraction of the operating wavelength. This limitation can make it difficult to accurately measure roughness even on common objects, since the interesting features may be well below the wavelength of light. The wavelength of red light is about 650 nm, [ 2 ] while the average roughness (R a ) of a ground shaft might be 200 nm. The first step of analysis is to filter the raw data to remove very high frequency data (called "micro-roughness"), since it can often be attributed to vibrations or debris on the surface. Filtering out the micro-roughness at a given cut-off threshold also makes roughness assessments obtained with profilometers having different stylus ball radii (e.g. 2 μm and 5 μm) more comparable. Next, the data is separated into roughness, waviness and form. This can be accomplished using reference lines, envelope methods, digital filters, fractals or other techniques. Finally, the data is summarized using one or more roughness parameters, or a graph. In the past, surface finish was usually analyzed by hand. The roughness trace would be plotted on graph paper, and an experienced machinist decided what data to ignore and where to place the mean line. 
Today, the measured data is stored on a computer and analyzed using methods from signal analysis and statistics. [ 3 ] Stylus-based contact instruments and optical instruments each have their own advantages; optical measurement technologies include vertical-scanning, horizontal-scanning and non-scanning approaches. Because every instrument has advantages and disadvantages, the operator must choose the right instrument for the measurement application, and the scale of the desired measurement will help decide which type of microscope will be used. For 3D measurements, the probe is commanded to scan over a 2D area on the surface. The spacing between data points may not be the same in both directions. In some cases, the physics of the measuring instrument may have a large effect on the data. This is especially true when measuring very smooth surfaces. For contact measurements, the most obvious problem is that the stylus may scratch the measured surface. Another problem is that the stylus may be too blunt to reach the bottom of deep valleys and may round the tips of sharp peaks. In this case the probe is a physical filter that limits the accuracy of the instrument. The real surface geometry is so complicated that a finite number of parameters cannot provide a full description. If the number of parameters used is increased, a more accurate description can be obtained. This is one of the reasons for introducing new parameters for surface evaluation. Surface roughness parameters are normally categorised into three groups according to their functionality. These groups are defined as amplitude parameters, spacing parameters, and hybrid parameters. [ 6 ] Parameters used to describe surfaces are largely statistical indicators obtained from many samples of the surface height; the commonly quoted ones are a small subset of the available parameters described in standards like ASME B46.1 [ 7 ] and ISO 4287. [ 8 ] Most of these parameters originated from the capabilities of profilometers and other mechanical probe systems. In addition, new measures of surface dimensions have been developed which are more directly related to the measurements made possible by high-definition optical gauging technologies. Most of these parameters can be estimated using the SurfCharJ plugin [1] for ImageJ. The surface roughness can also be calculated over an area, which gives S a instead of R a values; the ISO 25178 series describes these areal roughness values in detail, and they offer several advantages over the profile parameters. Because surfaces have fractal properties, multi-scale measurements can also be made, such as length-scale fractal analysis or area-scale fractal analysis. [ 9 ] To obtain the surface characteristics, almost all measurements are subject to filtering. Filtering is one of the most important topics when it comes to specifying and controlling surface attributes such as roughness, waviness, and form error. These components of the surface deviations must be distinctly separable in measurement to achieve a clear understanding between the surface supplier and the surface recipient as to the expected characteristics of the surface in question. Typically, either digital or analog filters are used to separate form error, waviness, and roughness resulting from a measurement. The main multi-scale filtering methods are Gaussian filtering, the wavelet transform and, more recently, discrete modal decomposition. 
There are three characteristics of these filters that should be known in order to understand the parameter values that an instrument may calculate: the spatial wavelength at which a filter separates roughness from waviness or waviness from form error; the sharpness of a filter, that is, how cleanly it separates two components of the surface deviations; and the distortion of a filter, that is, how much it alters a spatial wavelength component in the separation process. [ 7 ]
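As a rough illustration of this separation, the Python sketch below (NumPy assumed) applies a Gaussian low-pass filter to a synthetic profile to obtain the waviness (mean line), subtracts it to obtain the roughness profile, and then evaluates the amplitude parameters Ra and Rq. The profile, cut-off values and weighting function are illustrative choices rather than a reference implementation of any particular standard, and in practice one sampling length would also be discarded from each end before evaluating parameters.

import numpy as np

def gaussian_mean_line(z, dx, cutoff):
    """Low-pass (waviness / mean line) Gaussian profile filter with cut-off wavelength `cutoff`.
    The weighting function follows the usual Gaussian form used for profile filtering."""
    alpha = np.sqrt(np.log(2) / np.pi)
    x = np.arange(-cutoff, cutoff + dx, dx)        # kernel support of +/- one cut-off length
    s = np.exp(-np.pi * (x / (alpha * cutoff))**2)
    s /= s.sum()                                   # discrete normalisation
    return np.convolve(z, s, mode="same")

# Synthetic profile (heights in micrometres): waviness + roughness + micro-roughness noise
dx = 0.5e-3                                        # sampling step, mm
x = np.arange(0, 4.0, dx)                          # 4 mm measurement length
z = 0.8*np.sin(2*np.pi*x/2.5) + 0.15*np.sin(2*np.pi*x/0.08) + 0.02*np.random.randn(x.size)

z = gaussian_mean_line(z, dx, 0.0025)              # short cut-off (2.5 um): remove micro-roughness
w = gaussian_mean_line(z, dx, 0.8)                 # cut-off 0.8 mm: waviness / mean line
r = z - w                                          # roughness profile

Ra = np.mean(np.abs(r))                            # arithmetic mean deviation
Rq = np.sqrt(np.mean(r**2))                        # root-mean-square roughness
print(f"Ra = {Ra:.3f} um, Rq = {Rq:.3f} um")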
https://en.wikipedia.org/wiki/Surface_metrology
Surface micromachining builds microstructures by deposition and etching of structural layers over a substrate. [ 1 ] This is different from bulk micromachining, in which a silicon substrate wafer is selectively etched to produce structures. Generally, polysilicon is used as one of the structural layers while silicon dioxide is used as a sacrificial layer. The sacrificial layer is removed or etched out to create any necessary void in the thickness direction. Added layers tend to vary in thickness from 2 to 5 micrometres. The main advantage of this machining process is the ability to build electronic and mechanical components (functions) on the same substrate. Surface micro-machined components are smaller compared to their bulk micro-machined counterparts. As the structures are built on top of the substrate and not inside it, the substrate's properties are not as important as in bulk micro-machining. Expensive silicon wafers can be replaced by cheaper substrates, such as glass or plastic. The size of the substrates may be larger than a silicon wafer, and surface micro-machining is used to produce thin-film transistors on large area glass substrates for flat panel displays. This technology can also be used for the manufacture of thin film solar cells, which can be deposited on glass, polyethylene terephthalate substrates or other non-rigid materials. Micro-machining starts with a silicon wafer or other substrate upon which new layers are grown. These layers are selectively etched by photo-lithography, using either a wet etch involving an acid or a dry etch involving an ionized gas (or plasma). Dry etching can combine chemical etching with physical etching or ion bombardment. Surface micro-machining involves as many layers as are needed, with a different mask (producing a different pattern) on each layer. Modern integrated circuit fabrication uses this technique and can use as many as 100 layers. Micro-machining is a younger technology and usually uses no more than 5 or 6 layers. Surface micro-machining uses developed technology (although sometimes not developed enough for demanding applications) which is easily repeatable for volume production. A sacrificial layer is used to build complicated components, such as movable parts. For example, a suspended cantilever can be built by depositing and structuring a sacrificial layer, which is then selectively removed at the locations where the future beams must be attached to the substrate (i.e. the anchor points). A structural layer is then deposited on top of the sacrificial layer (often a polymer) and structured to define the beams. Finally, the sacrificial layer is removed to release the beams, using a selective etch process that does not damage the structural layer. Many combinations of structural and sacrificial layers are possible. The combination chosen depends on the process; for example, it is important for the structural layer not to be damaged by the process used to remove the sacrificial layer. Surface micro-machining can be seen in action in a range of MEMS (microelectromechanical systems) products.
https://en.wikipedia.org/wiki/Surface_micromachining
Surface modification is the act of modifying the surface of a material by bringing physical, chemical or biological characteristics different from the ones originally found on the surface of a material. [ 1 ] This modification is usually made to solid materials, but it is possible to find examples of the modification to the surface of specific liquids. The modification can be done by different methods with a view to altering a wide range of characteristics of the surface, such as: roughness, [ 2 ] hydrophilicity, [ 3 ] surface charge, [ 4 ] surface energy, biocompatibility [ 3 ] [ 5 ] and reactivity. [ 6 ] Surface engineering is the sub-discipline of materials science which deals with the surface of solid matter. It has applications to chemistry, mechanical engineering, and electrical engineering (particularly in relation to semiconductor manufacturing). Solids are composed of a bulk material covered by a surface. The surface which bounds the bulk material is called the surface phase. It acts as an interface to the surrounding environment. The bulk material in a solid is called the bulk phase. The surface phase of a solid interacts with the surrounding environment. This interaction can degrade the surface phase over time. Environmental degradation of the surface phase over time can be caused by wear, corrosion, fatigue and creep. Surface engineering involves altering the properties of the surface phase in order to reduce this degradation over time. This is accomplished by making the surface robust to the environment in which it will be used. Surface engineering techniques are used in the automotive, aerospace, missile, power, electronic, biomedical, [ 3 ] textile, petroleum, petrochemical, chemical, steel, cement, machine tool and construction industries. Surface engineering techniques can be used to develop a wide range of functional properties, including physical, chemical, electrical, electronic, magnetic, mechanical, wear-resistant and corrosion-resistant properties at the required substrate surfaces. Almost all types of materials, including metals, ceramics, polymers, and composites, can be coated on similar or dissimilar materials. It is also possible to form coatings of newer materials (e.g., metallic glass, beta-C 3 N 4 ), graded deposits, multi-component deposits, etc. In 1995, surface engineering was a £10 billion market in the United Kingdom. Coatings to protect surfaces against wear and corrosion accounted for approximately half of that market. [ 7 ] Functionalization of antimicrobial surfaces is a technology that can be used for sterilization in the health industry, for self-cleaning surfaces and for protection from biofilms. In recent years, there has been a paradigm shift in surface engineering from age-old electroplating to processes such as vapor phase deposition, [ 8 ] [ 9 ] diffusion, thermal spray and welding using advanced heat sources like plasma, [ 2 ] [ 3 ] laser, [ 10 ] ion, electron, microwave, solar beams, synchrotron radiation, [ 3 ] pulsed arc, pulsed combustion, spark, friction and induction. It is estimated that losses due to wear and corrosion in the US amount to approximately $500 billion. In the US, there are around 9,524 establishments (including automotive, aircraft, power and construction industries) that depend on engineered surfaces, with support from 23,466 industries. [ citation needed ] Surface functionalization introduces chemical functional groups to a surface. 
This way, materials with functional groups on their surfaces can be designed from substrates with standard bulk material properties. Prominent examples can be found in the semiconductor industry and in biomaterial research. [ 3 ] Plasma processing technologies are successfully employed for polymer surface functionalization.
https://en.wikipedia.org/wiki/Surface_modification
Surface nuclear magnetic resonance (SNMR), also known as magnetic resonance sounding (MRS), is a geophysical technique specially designed for hydrogeology. It is based on the principle of nuclear magnetic resonance (NMR), and measurements can be used to indirectly estimate the water content of saturated and unsaturated zones in the earth's subsurface. [ 1 ] SNMR is used to estimate aquifer properties, including the quantity of water contained in the aquifer, porosity, and hydraulic conductivity. The MRS technique was originally conceived in the 1960s by Russell H. Varian, one of the inventors of the proton magnetometer. [ 2 ] SNMR is a product of a joint effort by many scientists and engineers who started developing this method in the USSR under the guidance of A.G. Semenov and continued this work all over the world. [ 3 ] Semenov's team used nuclear magnetic resonance (NMR) for non-invasive detection of proton-containing liquids (hydrocarbons or water) in the subsurface. The Voevodsky Institute of Chemical Kinetics and Combustion of the Siberian Branch of the Russian Academy of Sciences fabricated [ original research? ] the first version of the instrument for measurements of magnetic resonance signals from subsurface water ("hydroscope") in 1981. The basic principle of operation of magnetic resonance sounding, formerly known as surface proton magnetic resonance (PMR), is similar to that of the proton magnetometer. Both rely on recording the magnetic resonance signal from a proton-containing liquid (for example, water or hydrocarbons). However, in the proton magnetometer, a special sample of liquid is placed into the receiving coil and only the signal frequency is of interest. In MRS, a wire loop about 100 m in diameter is used as a transmitting/receiving antenna to probe water in the subsurface. Thus, the main advantage of the MRS method, compared with other geophysical methods, is that the surface measurement of the PMR signal from water molecules ensures that this method only responds to subsurface water. A typical MRS survey is conducted in three stages. First, the ambient electromagnetic (EM) noise is measured. Then, a pulse of electrical current is transmitted through a cable on the surface of the ground, applying an external EM field to the subsurface. Finally, the external EM field is terminated, and the magnetic resonance signal is measured. [ 4 ] Three parameters of the measured MRS signal are then analyzed. As with many other geophysical methods, MRS is site-dependent. Modeling results show that MRS performance depends on the magnitude of the natural geomagnetic field, the electrical conductivity of rocks, the electromagnetic noise and other factors. SNMR can be used in both oil and water exploration, but since oil generally lies much deeper, the more common usage is in water exploration. Able to resolve water content down to depths of about 200 meters, SNMR is well suited to modelling aquifers.
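As a hedged illustration of how the recorded signal might be analyzed, the sketch below (SciPy assumed) fits an exponentially decaying oscillation to a synthetic relaxation record; in such models the initial amplitude is related to the water content and the decay time to the pore environment. The model form, frequency and noise level are assumptions made for the example and are not taken from the article.

import numpy as np
from scipy.optimize import curve_fit

def mrs_signal(t, E0, T2s, f, phi):
    """Assumed free-induction-decay model: initial amplitude E0, decay time T2s, frequency f, phase phi."""
    return E0 * np.exp(-t / T2s) * np.cos(2*np.pi*f*t + phi)

# Synthetic record: ~2 kHz oscillation, 200 ms decay, with measurement noise added
t = np.linspace(0, 0.5, 5000)
truth = (0.8, 0.2, 2000.0, 0.3)
data = mrs_signal(t, *truth) + 0.05*np.random.randn(t.size)

popt, _ = curve_fit(mrs_signal, t, data, p0=(1.0, 0.1, 2000.0, 0.0))
E0, T2s, f, phi = popt
print(f"initial amplitude {E0:.2f} (a.u.), decay time {1e3*T2s:.0f} ms, frequency {f:.0f} Hz")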
https://en.wikipedia.org/wiki/Surface_nuclear_magnetic_resonance
A surface of revolution is a surface in Euclidean space created by rotating a curve (the generatrix ) one full revolution around an axis of rotation (normally not intersecting the generatrix, except at its endpoints). [ 1 ] The volume bounded by the surface created by this revolution is the solid of revolution . Examples of surfaces of revolution generated by a straight line are cylindrical and conical surfaces depending on whether or not the line is parallel to the axis. A circle that is rotated around any diameter generates a sphere of which it is then a great circle , and if the circle is rotated around an axis that does not intersect the interior of a circle, then it generates a torus which does not intersect itself (a ring torus ). The sections of the surface of revolution made by planes through the axis are called meridional sections . Any meridional section can be considered to be the generatrix in the plane determined by it and the axis. [ 2 ] The sections of the surface of revolution made by planes that are perpendicular to the axis are circles. Some special cases of hyperboloids (of either one or two sheets) and elliptic paraboloids are surfaces of revolution. These may be identified as those quadratic surfaces all of whose cross sections perpendicular to the axis are circular. If the curve is described by the parametric functions x ( t ) , y ( t ) , with t ranging over some interval [ a , b ] , and the axis of revolution is the y -axis, then the surface area A y is given by the integral A y = 2 π ∫ a b x ( t ) ( d x d t ) 2 + ( d y d t ) 2 d t , {\displaystyle A_{y}=2\pi \int _{a}^{b}x(t)\,{\sqrt {\left({dx \over dt}\right)^{2}+\left({dy \over dt}\right)^{2}}}\,dt,} provided that x ( t ) is never negative between the endpoints a and b . This formula is the calculus equivalent of Pappus's centroid theorem . [ 3 ] The quantity ( d x d t ) 2 + ( d y d t ) 2 d t {\displaystyle {\sqrt {\left({dx \over dt}\right)^{2}+\left({dy \over dt}\right)^{2}}}\,dt} comes from the Pythagorean theorem and represents a small segment of the arc of the curve, as in the arc length formula. The quantity 2π x ( t ) is the path of (the centroid of) this small segment, as required by Pappus' theorem. Likewise, when the axis of rotation is the x -axis and provided that y ( t ) is never negative, the area is given by [ 4 ] A x = 2 π ∫ a b y ( t ) ( d x d t ) 2 + ( d y d t ) 2 d t . {\displaystyle A_{x}=2\pi \int _{a}^{b}y(t)\,{\sqrt {\left({dx \over dt}\right)^{2}+\left({dy \over dt}\right)^{2}}}\,dt.} If the continuous curve is described by the function y = f ( x ) , a ≤ x ≤ b , then the integral becomes A x = 2 π ∫ a b y 1 + ( d y d x ) 2 d x = 2 π ∫ a b f ( x ) 1 + ( f ′ ( x ) ) 2 d x {\displaystyle A_{x}=2\pi \int _{a}^{b}y{\sqrt {1+\left({\frac {dy}{dx}}\right)^{2}}}\,dx=2\pi \int _{a}^{b}f(x){\sqrt {1+{\big (}f'(x){\big )}^{2}}}\,dx} for revolution around the x -axis, and A y = 2 π ∫ a b x 1 + ( d y d x ) 2 d x {\displaystyle A_{y}=2\pi \int _{a}^{b}x{\sqrt {1+\left({\frac {dy}{dx}}\right)^{2}}}\,dx} for revolution around the y -axis (provided a ≥ 0 ). These come from the above formula. [ 5 ] This can also be derived from multivariable integration. 
If a plane curve is given by ⟨ x ( t ) , y ( t ) ⟩ {\displaystyle \langle x(t),y(t)\rangle } then its corresponding surface of revolution when revolved around the x-axis has Cartesian coordinates given by r ( t , θ ) = ⟨ y ( t ) cos ⁡ ( θ ) , y ( t ) sin ⁡ ( θ ) , x ( t ) ⟩ {\displaystyle \mathbf {r} (t,\theta )=\langle y(t)\cos(\theta ),y(t)\sin(\theta ),x(t)\rangle } with 0 ≤ θ ≤ 2 π {\displaystyle 0\leq \theta \leq 2\pi } . Then the surface area is given by the surface integral A x = ∬ S d S = ∬ [ a , b ] × [ 0 , 2 π ] ‖ ∂ r ∂ t × ∂ r ∂ θ ‖ d θ d t = ∫ a b ∫ 0 2 π ‖ ∂ r ∂ t × ∂ r ∂ θ ‖ d θ d t . {\displaystyle A_{x}=\iint _{S}dS=\iint _{[a,b]\times [0,2\pi ]}\left\|{\frac {\partial \mathbf {r} }{\partial t}}\times {\frac {\partial \mathbf {r} }{\partial \theta }}\right\|\ d\theta \ dt=\int _{a}^{b}\int _{0}^{2\pi }\left\|{\frac {\partial \mathbf {r} }{\partial t}}\times {\frac {\partial \mathbf {r} }{\partial \theta }}\right\|\ d\theta \ dt.} Computing the partial derivatives yields ∂ r ∂ t = ⟨ d y d t cos ⁡ ( θ ) , d y d t sin ⁡ ( θ ) , d x d t ⟩ , {\displaystyle {\frac {\partial \mathbf {r} }{\partial t}}=\left\langle {\frac {dy}{dt}}\cos(\theta ),{\frac {dy}{dt}}\sin(\theta ),{\frac {dx}{dt}}\right\rangle ,} ∂ r ∂ θ = ⟨ − y sin ⁡ ( θ ) , y cos ⁡ ( θ ) , 0 ⟩ {\displaystyle {\frac {\partial \mathbf {r} }{\partial \theta }}=\langle -y\sin(\theta ),y\cos(\theta ),0\rangle } and computing the cross product yields ∂ r ∂ t × ∂ r ∂ θ = ⟨ y cos ⁡ ( θ ) d x d t , y sin ⁡ ( θ ) d x d t , y d y d t ⟩ = y ⟨ cos ⁡ ( θ ) d x d t , sin ⁡ ( θ ) d x d t , d y d t ⟩ {\displaystyle {\frac {\partial \mathbf {r} }{\partial t}}\times {\frac {\partial \mathbf {r} }{\partial \theta }}=\left\langle y\cos(\theta ){\frac {dx}{dt}},y\sin(\theta ){\frac {dx}{dt}},y{\frac {dy}{dt}}\right\rangle =y\left\langle \cos(\theta ){\frac {dx}{dt}},\sin(\theta ){\frac {dx}{dt}},{\frac {dy}{dt}}\right\rangle } where the trigonometric identity sin 2 ⁡ ( θ ) + cos 2 ⁡ ( θ ) = 1 {\displaystyle \sin ^{2}(\theta )+\cos ^{2}(\theta )=1} was used. With this cross product, we get A x = ∫ a b ∫ 0 2 π ‖ ∂ r ∂ t × ∂ r ∂ θ ‖ d θ d t = ∫ a b ∫ 0 2 π ‖ y ⟨ y cos ⁡ ( θ ) d x d t , y sin ⁡ ( θ ) d x d t , y d y d t ⟩ ‖ d θ d t = ∫ a b ∫ 0 2 π y cos 2 ⁡ ( θ ) ( d x d t ) 2 + sin 2 ⁡ ( θ ) ( d x d t ) 2 + ( d y d t ) 2 d θ d t = ∫ a b ∫ 0 2 π y ( d x d t ) 2 + ( d y d t ) 2 d θ d t = ∫ a b 2 π y ( d x d t ) 2 + ( d y d t ) 2 d t {\displaystyle {\begin{aligned}A_{x}&=\int _{a}^{b}\int _{0}^{2\pi }\left\|{\frac {\partial \mathbf {r} }{\partial t}}\times {\frac {\partial \mathbf {r} }{\partial \theta }}\right\|\ d\theta \ dt\\[1ex]&=\int _{a}^{b}\int _{0}^{2\pi }\left\|y\left\langle y\cos(\theta ){\frac {dx}{dt}},y\sin(\theta ){\frac {dx}{dt}},y{\frac {dy}{dt}}\right\rangle \right\|\ d\theta \ dt\\[1ex]&=\int _{a}^{b}\int _{0}^{2\pi }y{\sqrt {\cos ^{2}(\theta )\left({\frac {dx}{dt}}\right)^{2}+\sin ^{2}(\theta )\left({\frac {dx}{dt}}\right)^{2}+\left({\frac {dy}{dt}}\right)^{2}}}\ d\theta \ dt\\[1ex]&=\int _{a}^{b}\int _{0}^{2\pi }y{\sqrt {\left({\frac {dx}{dt}}\right)^{2}+\left({\frac {dy}{dt}}\right)^{2}}}\ d\theta \ dt\\[1ex]&=\int _{a}^{b}2\pi y{\sqrt {\left({\frac {dx}{dt}}\right)^{2}+\left({\frac {dy}{dt}}\right)^{2}}}\ dt\end{aligned}}} where the same trigonometric identity was used again. The derivation for a surface obtained by revolving around the y-axis is similar. For example, the spherical surface with unit radius is generated by the curve y ( t ) = sin( t ) , x ( t ) = cos( t ) , when t ranges over [0,π] . 
Its area is therefore A = 2 π ∫ 0 π sin ⁡ ( t ) ( cos ⁡ ( t ) ) 2 + ( sin ⁡ ( t ) ) 2 d t = 2 π ∫ 0 π sin ⁡ ( t ) d t = 4 π . {\displaystyle {\begin{aligned}A&{}=2\pi \int _{0}^{\pi }\sin(t){\sqrt {{\big (}\cos(t){\big )}^{2}+{\big (}\sin(t){\big )}^{2}}}\,dt\\&{}=2\pi \int _{0}^{\pi }\sin(t)\,dt\\&{}=4\pi .\end{aligned}}} For the case of the spherical curve with radius r , y ( x ) = √ r 2 − x 2 rotated about the x -axis A = 2 π ∫ − r r r 2 − x 2 1 + x 2 r 2 − x 2 d x = 2 π r ∫ − r r r 2 − x 2 1 r 2 − x 2 d x = 2 π r ∫ − r r d x = 4 π r 2 {\displaystyle {\begin{aligned}A&=2\pi \int _{-r}^{r}{\sqrt {r^{2}-x^{2}}}\,{\sqrt {1+{\frac {x^{2}}{r^{2}-x^{2}}}}}\,dx\\&=2\pi r\int _{-r}^{r}\,{\sqrt {r^{2}-x^{2}}}\,{\sqrt {\frac {1}{r^{2}-x^{2}}}}\,dx\\&=2\pi r\int _{-r}^{r}\,dx\\&=4\pi r^{2}\,\end{aligned}}} A minimal surface of revolution is the surface of revolution of the curve between two given points which minimizes surface area . [ 6 ] A basic problem in the calculus of variations is finding the curve between two points that produces this minimal surface of revolution. [ 6 ] There are only two minimal surfaces of revolution ( surfaces of revolution which are also minimal surfaces): the plane and the catenoid . [ 7 ] A surface of revolution given by rotating a curve described by y = f ( x ) {\displaystyle y=f(x)} around the x-axis may be most simply described by y 2 + z 2 = f ( x ) 2 {\displaystyle y^{2}+z^{2}=f(x)^{2}} . This yields the parametrization in terms of x {\displaystyle x} and θ {\displaystyle \theta } as ( x , f ( x ) cos ⁡ ( θ ) , f ( x ) sin ⁡ ( θ ) ) {\displaystyle (x,f(x)\cos(\theta ),f(x)\sin(\theta ))} . If instead we revolve the curve around the y-axis, then the curve is described by y = f ( x 2 + z 2 ) {\displaystyle y=f({\sqrt {x^{2}+z^{2}}})} , yielding the expression ( x cos ⁡ ( θ ) , f ( x ) , x sin ⁡ ( θ ) ) {\displaystyle (x\cos(\theta ),f(x),x\sin(\theta ))} in terms of the parameters x {\displaystyle x} and θ {\displaystyle \theta } . If x and y are defined in terms of a parameter t {\displaystyle t} , then we obtain a parametrization in terms of t {\displaystyle t} and θ {\displaystyle \theta } . If x {\displaystyle x} and y {\displaystyle y} are functions of t {\displaystyle t} , then the surface of revolution obtained by revolving the curve around the x-axis is described by ( x ( t ) , y ( t ) cos ⁡ ( θ ) , y ( t ) sin ⁡ ( θ ) ) {\displaystyle (x(t),y(t)\cos(\theta ),y(t)\sin(\theta ))} , and the surface of revolution obtained by revolving the curve around the y-axis is described by ( x ( t ) cos ⁡ ( θ ) , y ( t ) , x ( t ) sin ⁡ ( θ ) ) {\displaystyle (x(t)\cos(\theta ),y(t),x(t)\sin(\theta ))} . Meridians are always geodesics on a surface of revolution. Other geodesics are governed by Clairaut's relation . [ 8 ] A surface of revolution with a hole in, where the axis of revolution does not intersect the surface, is called a toroid. [ 9 ] For example, when a rectangle is rotated around an axis parallel to one of its edges, then a hollow square-section ring is produced. If the revolved figure is a circle , then the object is called a torus .
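The parametric area formula is straightforward to verify numerically. The short Python sketch below (SciPy assumed) evaluates A = 2π ∫ y(t) √(x′(t)² + y′(t)²) dt for the unit half-circle generatrix of the example, recovering 4π, and for a circle offset from the axis, recovering the ring-torus area 4π²Ra predicted by Pappus's theorem.

import numpy as np
from scipy.integrate import quad

def area_of_revolution(x, y, dx, dy, a, b):
    """A = 2*pi * integral of y(t) * sqrt(x'(t)^2 + y'(t)^2) dt, revolution about the x-axis."""
    integrand = lambda t: 2*np.pi * y(t) * np.hypot(dx(t), dy(t))
    val, _ = quad(integrand, a, b)
    return val

# Unit sphere: x = cos t, y = sin t, t in [0, pi]  ->  area 4*pi
sphere = area_of_revolution(np.cos, np.sin, lambda t: -np.sin(t), np.cos, 0, np.pi)

# Ring torus: circle of radius a = 1 centred at distance R = 3 from the axis  ->  area 4*pi^2*R*a
R, a = 3.0, 1.0
torus = area_of_revolution(lambda t: a*np.cos(t), lambda t: R + a*np.sin(t),
                           lambda t: -a*np.sin(t), lambda t: a*np.cos(t), 0, 2*np.pi)

print(sphere, 4*np.pi)          # ~12.566
print(torus, 4*np.pi**2*R*a)    # ~118.44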
https://en.wikipedia.org/wiki/Surface_of_revolution
Surface photovoltage ( SPV ) measurements are a widely used method to determine the minority carrier diffusion length of semiconductors . Since the transport of minority carriers determines the behavior of the p-n junctions that are ubiquitous in semiconductor devices, surface photovoltage data can be very helpful in understanding their performance. As a contactless method, SPV is a popular technique for characterizing poorly understood compound semiconductors where the fabrication of ohmic contacts or special device structures may be difficult. As the name suggests, SPV measurements involve monitoring the potential of a semiconductor surface while generating electron-hole pairs with a light source. The surfaces of semiconductors are often depletion regions (or space charge regions) where a built-in electric field due to defects has swept out mobile charge carriers. A reduced carrier density means that the electronic energy band of the majority carriers is bent away from the Fermi level . This band-bending gives rise to a surface potential. When a light source creates electron-hole pairs deep within the semiconductor, they must diffuse through the bulk before reaching the surface depletion region. The photogenerated minority carriers have a shorter diffusion length than the much more numerous majority carriers, with which they can radiatively recombine . The change in surface potential upon illumination is therefore a measure of the ability of minority carriers to reach the surface, namely the minority carrier diffusion length. As always in diffusive processes, the diffusion length L {\displaystyle L} is approximately related to the lifetime τ b u l k {\displaystyle \tau _{\mathrm {bulk} }} by the expression L = D τ b u l k {\displaystyle L={\sqrt {D\tau _{\mathrm {bulk} }}}} , where D {\displaystyle D} is the diffusion coefficient . The diffusion length is independent of any built-in fields in contrast to the drift behavior of the carriers. Note that the photogenerated majority carriers will also diffuse towards the surface but their number as a fraction of the thermally generated majority carrier density in a moderately doped semiconductor will be too small to create a measurable photovoltage. Both carrier types will also diffuse towards the rear contact where their collection can confuse interpretation of the data when the diffusion lengths are larger than the film thickness. In a real semiconductor, the measured diffusion length L m e a s = D τ e f f {\displaystyle L_{\mathrm {meas} }={\sqrt {D\tau _{\mathrm {eff} }}}} includes the effect of surface recombination, which is best understood through its effect on carrier lifetime : where τ e f f {\displaystyle \tau _{\mathrm {eff} }} is the effective carrier lifetime, τ b u l k {\displaystyle \tau _{\mathrm {bulk} }} is the bulk carrier lifetime, s {\displaystyle s} is the surface recombination velocity and d {\displaystyle d} is the film or wafer thickness. Even for well characterized materials, uncertainty about the value of the surface recombination velocity reduces the accuracy with which the diffusion length can be determined for thinner films. Surface photovoltage measurements are performed by placing a wafer or sheet film of a semiconducting material on a ground electrode and positioning a kelvin probe a small distance above the sample. 
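The combined effect of bulk lifetime, surface recombination velocity and thickness on the measured diffusion length can be sketched numerically. The excerpt does not reproduce the exact expression, so the snippet below assumes the commonly used approximation 1/τ_eff = 1/τ_bulk + s/d (some treatments include a factor counting both surfaces); the material numbers are purely illustrative.

import numpy as np

def measured_diffusion_length(D, tau_bulk, s, d):
    """L_meas = sqrt(D * tau_eff) with the assumed approximation 1/tau_eff = 1/tau_bulk + s/d.
    (Some treatments use 2*s/d to account for both surfaces.)"""
    tau_eff = 1.0 / (1.0/tau_bulk + s/d)
    return np.sqrt(D * tau_eff)

D = 30.0            # minority-carrier diffusion coefficient, cm^2/s (illustrative)
tau_bulk = 10e-6    # bulk lifetime, s
d = 300e-4          # wafer thickness, cm (300 um)

for s in (1e2, 1e3, 1e4):   # surface recombination velocity, cm/s
    L = measured_diffusion_length(D, tau_bulk, s, d)
    print(f"s = {s:8.0f} cm/s  ->  L_meas = {1e4*L:6.1f} um")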
The surface is illuminated with light of fixed wavelength in industrial applications, or with light whose wavelength is scanned using a monochromator so as to vary the absorption depth of the photons. The deeper in the semiconductor that carrier generation occurs, the fewer the minority carriers that will reach the surface and the smaller the photovoltage. On a semiconductor whose spectral absorption coefficient is known, the minority carrier diffusion length can in principle be extracted from a measurement of photovoltage versus wavelength. The optical properties of a novel semiconductor may not be well known or may not be homogeneous across the sample. The temperature of the semiconductor must be carefully controlled during an SPV measurement, lest thermal drift complicate the comparison of different samples. Typically SPV measurements are done in an AC-coupled fashion using a chopped light source rather than a vibrating Kelvin probe. The minority carrier diffusion length is critical in determining the performance of devices such as photoconducting detectors and bipolar transistors. In both cases the ratio of the diffusion length to the device dimensions determines the gain. In photovoltaic devices, photodiodes and field-effect transistors, the drift behavior due to built-in fields is more important under typical conditions than the diffusive behavior. Even so, SPV is a convenient method of measuring the density of impurity-derived recombination centers that limit device performance. SPV is performed both as an automated and routine test of material quality in a production environment and as an experimental tool to probe the behavior of less well studied semiconducting materials. Time-resolved photoluminescence is an alternative contactless method of determining minority carrier transport properties.
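A minimal sketch of such an extraction is given below. It assumes the simple model SPV ∝ αL/(1 + αL), under which 1/SPV is linear in the penetration depth 1/α and the extrapolated intercept of that line gives −L (the same idea as the classic Goodman linear extrapolation); the model, noise level and numbers are illustrative assumptions rather than the article's prescription.

import numpy as np

# Synthetic data: SPV proportional to alpha*L / (1 + alpha*L)  (assumed constant-flux model)
L_true = 50e-4                                 # diffusion length, cm (50 um)
alpha = np.logspace(2, 4, 15)                  # absorption coefficients, 1/cm (varied via wavelength)
spv = 1.0 * alpha*L_true / (1 + alpha*L_true)
spv *= 1 + 0.01*np.random.randn(alpha.size)    # 1 % measurement noise

# Linear extrapolation: 1/SPV is linear in 1/alpha, and the x-intercept equals -L
u = 1.0 / alpha
b, a = np.polyfit(u, 1.0/spv, 1)               # slope b, intercept a
L_fit = a / b                                  # equals minus the x-intercept
print(f"fitted diffusion length: {1e4*L_fit:.1f} um (true {1e4*L_true:.0f} um)")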
https://en.wikipedia.org/wiki/Surface_photovoltage
Surface plasmon polaritons ( SPPs ) are electromagnetic waves that travel along a metal – dielectric or metal–air interface, practically in the infrared or visible -frequency. The term "surface plasmon polariton" explains that the wave involves both charge motion in the metal (" surface plasmon ") and electromagnetic waves in the air or dielectric (" polariton "). [ 1 ] They are a type of surface wave , guided along the interface in much the same way that light can be guided by an optical fiber. SPPs have a shorter wavelength than light in vacuum at the same frequency (photons). [ 2 ] Hence, SPPs can have a higher momentum and local field intensity . [ 2 ] Perpendicular to the interface, they have subwavelength-scale confinement. An SPP will propagate along the interface until its energy is lost either to absorption in the metal or scattering into other directions (such as into free space). Application of SPPs enables subwavelength optics in microscopy and photolithography beyond the diffraction limit . It also enables the first steady-state micro-mechanical measurement of a fundamental property of light itself: the momentum of a photon in a dielectric medium. Other applications are photonic data storage, light generation, and bio-photonics. [ 2 ] [ 3 ] [ 4 ] [ 5 ] SPPs can be excited by both electrons and photons. Excitation by electrons is created by firing electrons into the bulk of a metal. [ 6 ] As the electrons scatter, energy is transferred into the bulk plasma. The component of the scattering vector parallel to the surface results in the formation of a surface plasmon polariton. [ 7 ] For a photon to excite an SPP, both must have the same frequency and momentum. However, for a given frequency, a free-space photon has less momentum than an SPP because the two have different dispersion relations (see below). This momentum mismatch is the reason that a free-space photon from air cannot couple directly to an SPP. For the same reason, an SPP on a smooth metal surface cannot emit energy as a free-space photon into the dielectric (if the dielectric is uniform). This incompatibility is analogous to the lack of transmission that occurs during total internal reflection . Nevertheless, coupling of photons into SPPs can be achieved using a coupling medium such as a prism or grating to match the photon and SPP wave vectors (and thus match their momenta). A prism can be positioned against a thin metal film in the Kretschmann configuration or very close to a metal surface in the Otto configuration (Figure 1). A grating coupler matches the wave vectors by increasing the parallel wave vector component by an amount related to the grating period (Figure 2). This method, while less frequently utilized, is critical to the theoretical understanding of the effect of surface roughness . Moreover, simple isolated surface defects such as a groove, a slit or a corrugation on an otherwise planar surface provide a mechanism by which free-space radiation and SPs can exchange energy and hence couple. The properties of an SPP can be derived from Maxwell's equations . We use a coordinate system where the metal–dielectric interface is the z = 0 {\displaystyle z=0} plane, with the metal at z < 0 {\displaystyle z<0} and dielectric at z > 0 {\displaystyle z>0} . 
The electric and magnetic fields as a function of position ( x , y , z ) {\displaystyle (x,y,z)} and time t are as follows: [ 8 ] [ 9 ] [ 10 ] where A wave of this form satisfies Maxwell's equations only on condition that the following equations also hold: and Solving these two equations, the dispersion relation for a wave propagating on the surface is In the free electron model of an electron gas , which neglects attenuation, the metallic dielectric function is [ 11 ] where the bulk plasma frequency in SI units is where n is the electron density, e is the charge of the electron, m ∗ is the effective mass of the electron and ε 0 {\displaystyle {\varepsilon _{0}}} is the permittivity of free-space. The dispersion relation is plotted in Figure 3. At low k , the SPP behaves like a photon, but as k increases, the dispersion relation bends over and reaches an asymptotic limit called the "surface plasma frequency". [ a ] Since the dispersion curve lies to the right of the light line, ω = k ⋅ c , the SPP has a shorter wavelength than free-space radiation such that the out-of-plane component of the SPP wavevector is purely imaginary and exhibits evanescent decay. The surface plasma frequency is the asymptote of this curve, and is given by In the case of air, this result simplifies to If we assume that ε 2 is real and ε 2 > 0, then it must be true that ε 1 < 0, a condition which is satisfied in metals. Electromagnetic waves passing through a metal experience damping due to Ohmic losses and electron-core interactions. These effects show up in as an imaginary component of the dielectric function . The dielectric function of a metal is expressed ε 1 = ε 1 ′ + i ⋅ ε 1 ″ where ε 1 ′ and ε 1 ″ are the real and imaginary parts of the dielectric function, respectively. Generally | ε 1 ′ | >> ε 1 ″ so the wavenumber can be expressed in terms of its real and imaginary components as [ 8 ] The wave vector gives us insight into physically meaningful properties of the electromagnetic wave such as its spatial extent and coupling requirements for wave vector matching. As an SPP propagates along the surface, it loses energy to the metal due to absorption. The intensity of the surface plasmon decays with the square of the electric field , so at a distance x , the intensity has decreased by a factor of exp ⁡ { − 2 k x ″ x } {\textstyle \exp\{-2k_{x}''x\}} . The propagation length is defined as the distance for the SPP intensity to decay by a factor of 1/e . This condition is satisfied at a length [ 12 ] Likewise, the electric field falls off evanescently perpendicular to the metal surface. At low frequencies, the SPP penetration depth into the metal is commonly approximated using the skin depth formula. In the dielectric, the field will fall off far more slowly. The decay lengths in the metal and dielectric medium can be expressed as [ 12 ] where i indicates the medium of propagation. SPPs are very sensitive to slight perturbations within the skin depth and because of this, SPPs are often used to probe inhomogeneities of a surface. Nanofabricated systems that exploit SPPs demonstrate potential for designing and controlling the propagation of light in matter. In particular, SPPs can be used to channel light efficiently into nanometer scale volumes, leading to direct modification of resonate frequency dispersion properties (substantially shrinking the wavelength of light and the speed of light pulses for example), as well as field enhancements suitable for enabling strong interactions with nonlinear materials . 
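Several of the quantities referred to above can be evaluated directly. The sketch below (NumPy assumed) uses the standard textbook forms of the SPP dispersion relation, k = (ω/c)·√(ε1ε2/(ε1+ε2)), and of the damped free-electron (Drude) dielectric function, and from them computes the SPP wavelength and the propagation length 1/(2 Im k); the silver-like plasma frequency and damping rate are illustrative values only.

import numpy as np

c = 2.998e8                     # speed of light, m/s

def eps_drude(omega, omega_p, gamma):
    """Drude dielectric function of the metal (free-electron model with damping)."""
    return 1 - omega_p**2 / (omega**2 + 1j*gamma*omega)

def k_spp(omega, eps_metal, eps_diel):
    """Textbook SPP dispersion relation k = (omega/c) * sqrt(e1*e2 / (e1 + e2))."""
    return (omega / c) * np.sqrt(eps_metal * eps_diel / (eps_metal + eps_diel))

omega_p = 1.37e16               # silver-like plasma frequency, rad/s (illustrative)
gamma = 3.2e13                  # damping rate, rad/s (illustrative)
eps_d = 1.0                     # dielectric half-space: air

wavelength = 633e-9             # He-Ne line
omega = 2*np.pi*c / wavelength
k = k_spp(omega, eps_drude(omega, omega_p, gamma), eps_d)

print("SPP wavelength:", 2*np.pi/np.real(k), "m")           # shorter than 633 nm
print("propagation length 1/(2 Im k):", 1/(2*np.imag(k)), "m")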
The resulting enhanced sensitivity of light to external parameters (for example, an applied electric field or the dielectric constant of an adsorbed molecular layer) shows great promise for applications in sensing and switching. Current research is focused on the design, fabrication, and experimental characterization of novel components for measurement and communications based on nanoscale plasmonic effects. These devices include ultra-compact plasmonic interferometers for applications such as biosensing , optical positioning and optical switching, as well as the individual building blocks (plasmon source, waveguide and detector) needed to integrate a high-bandwidth, infrared-frequency plasmonic communications link on a silicon chip. In addition to building functional devices based on SPPs, it appears feasible to exploit the dispersion characteristics of SPPs traveling in confined metallo-dielectric spaces to create photonic materials with artificially tailored bulk optical characteristics, otherwise known as metamaterials . [ 5 ] Artificial SPP modes can be realized in microwave and terahertz frequencies by metamaterials; these are known as spoof surface plasmons . [ 13 ] [ 14 ] The excitation of SPPs is frequently used in an experimental technique known as surface plasmon resonance (SPR). In SPR, the maximum excitation of surface plasmons are detected by monitoring the reflected power from a prism coupler as a function of incident angle , wavelength or phase . [ 15 ] Surface plasmon -based circuits, including both SPPs and localized plasmon resonances , have been proposed as a means of overcoming the size limitations of photonic circuits for use in high performance data processing nano devices. [ 16 ] The ability to dynamically control the plasmonic properties of materials in these nano-devices is key to their development. A new approach that uses plasmon-plasmon interactions has been demonstrated recently. Here the bulk plasmon resonance is induced or suppressed to manipulate the propagation of light. [ 17 ] This approach has been shown to have a high potential for nanoscale light manipulation and the development of a fully CMOS- compatible electro-optical plasmonic modulator. CMOS compatible electro-optic plasmonic modulators will be key components in chip-scale photonic circuits. [ 18 ] In surface second harmonic generation , the second harmonic signal is proportional to the square of the electric field. The electric field is stronger at the interface because of the surface plasmon resulting in a non-linear optical effect . This larger signal is often exploited to produce a stronger second harmonic signal. [ 19 ] The wavelength and intensity of the plasmon-related absorption and emission peaks are affected by molecular adsorption that can be used in molecular sensors. For example, a fully operational prototype device detecting casein in milk has been fabricated. The device is based on monitoring changes in plasmon-related absorption of light by a gold layer. [ 20 ] Surface plasmon polaritons can only exist at the interface between a positive- permittivity material and a negative-permittivity material. [ 21 ] The positive-permittivity material, often called the dielectric material , can be any transparent material such as air or (for visible light) glass. The negative-permittivity material, often called the plasmonic material , [ 22 ] may be a metal or other material. 
It is more critical, as it tends to have a large effect on the wavelength, absorption length, and other properties of the SPP. Some plasmonic materials are discussed next. For visible and near-infrared light, the only plasmonic materials are metals, due to their abundance of free electrons, [ 22 ] which leads to a high plasma frequency . (Materials have negative real permittivity only below their plasma frequency.) Unfortunately, metals suffer from ohmic losses that can degrade the performance of plasmonic devices. The need for lower loss has fueled research aimed at developing new materials for plasmonics [ 22 ] [ 23 ] [ 24 ] and optimizing the deposition conditions of existing materials. [ 25 ] Both the loss and polarizability of a material affect its optical performance. The quality factor Q S P P {\displaystyle Q_{SPP}} for a SPP is defined as ε ′ 2 ε ″ {\displaystyle {\frac {\varepsilon '^{2}}{\varepsilon ''}}} . [ 24 ] The table below shows the quality factors and SPP propagation lengths for four common plasmonic metals; Al, Ag, Au and Cu deposited by thermal evaporation under optimized conditions. [ 25 ] The quality factors and SPP propagation lengths were calculated using the optical data from the Al , Ag , Au and Cu films. [ 10 ] Silver exhibits the lowest losses of current materials in both the visible, near-infrared (NIR) and telecom wavelengths. [ 25 ] Gold and copper perform equally well in the visible and NIR with copper having a slight advantage at telecom wavelengths. Gold has the advantage over both silver and copper of being chemically stable in natural environments making it well suited for plasmonic biosensors. [ 26 ] However, an interband transition at ~470 nm greatly increases the losses in gold at wavelengths below 600 nm. [ 27 ] Aluminum is the best plasmonic material in the ultraviolet regime (< 330 nm) and is also CMOS compatible along with copper. The fewer electrons a material has, the lower (i.e. longer-wavelength) its plasma frequency becomes. Therefore, at infrared and longer wavelengths, various other plasmonic materials also exist besides metals. [ 22 ] These include transparent conducting oxides , which have typical plasma frequency in the NIR - SWIR infrared range. [ 28 ] At longer wavelengths, semiconductors may also be plasmonic. Some materials have negative permittivity at certain infrared wavelengths related to phonons rather than plasmons (so-called reststrahlen bands ). The resulting waves have the same optical properties as surface plasmon polaritons, but are called by a different term, surface phonon polaritons . In order to understand the effect of roughness on SPPs, it is beneficial to first understand how a SPP is coupled by a grating Figure2 . When a photon is incident on a surface, the wave vector of the photon in the dielectric material is smaller than that of the SPP. In order for the photon to couple into a SPP, the wave vector must increase by Δ k = k S P − k x , photon {\displaystyle \Delta k=k_{SP}-k_{x,{\text{photon}}}} . The grating harmonics of a periodic grating provide additional momentum parallel to the supporting interface to match the terms. where k grating {\displaystyle k_{\text{grating}}} is the wave vector of the grating, θ 0 {\displaystyle \theta _{0}} is the angle of incidence of the incoming photon, a is the grating period, and n is an integer. Rough surfaces can be thought of as the superposition of many gratings of different periodicities. 
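The wave-vector matching condition just described fixes the incidence angles at which a grating of period a couples light into an SPP: k0·sinθ0 + m·2π/a = Re(k_SP). The short sketch below solves this condition for θ0; the permittivity, wavelength and grating period are illustrative values.

import numpy as np

def coupling_angles(wavelength, eps_metal, eps_diel, period, orders=(1, 2, 3)):
    """Incidence angles (degrees) satisfying k0*sin(theta) + m*2*pi/a = Re(k_spp)."""
    k0 = 2*np.pi/wavelength * np.sqrt(eps_diel)                  # photon wave vector in the dielectric
    kspp = np.real(2*np.pi/wavelength *
                   np.sqrt(eps_metal*eps_diel/(eps_metal + eps_diel)))
    angles = {}
    for m in orders:
        s = (kspp - m*2*np.pi/period) / k0                       # required sin(theta)
        if abs(s) <= 1:                                          # only physically reachable orders
            angles[m] = np.degrees(np.arcsin(s))
    return angles

# Gold-like permittivity at 800 nm (illustrative): eps ~ -26 + 1.5j, air superstrate
print(coupling_angles(800e-9, -26 + 1.5j, 1.0, period=780e-9))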
Kretschmann proposed [ 29 ] that a statistical correlation function be defined for a rough surface where z ( x , y ) {\displaystyle z(x,y)} is the height above the mean surface height at the position ( x , y ) {\displaystyle (x,y)} , and A {\displaystyle A} is the area of integration. Assuming that the statistical correlation function is Gaussian of the form where δ {\displaystyle \delta } is the root mean square height, r {\displaystyle r} is the distance from the point ( x , y ) {\displaystyle (x,y)} , and σ {\displaystyle \sigma } is the correlation length, then the Fourier transform of the correlation function is where s {\displaystyle s} is a measure of the amount of each spatial frequency k surf {\displaystyle k_{\text{surf}}} which help couple photons into a surface plasmon. If the surface only has one Fourier component of roughness (i.e. the surface profile is sinusoidal), then the s {\displaystyle s} is discrete and exists only at k = 2 π a {\displaystyle k={\frac {2\pi }{a}}} , resulting in a single narrow set of angles for coupling. If the surface contains many Fourier components, then coupling becomes possible at multiple angles. For a random surface, s {\displaystyle s} becomes continuous and the range of coupling angles broadens. As stated earlier, SPPs are non-radiative. When a SPP travels along a rough surface, it usually becomes radiative due to scattering. The Surface Scattering Theory of light suggests that the scattered intensity d I {\displaystyle dI} per solid angle d Ω {\displaystyle d\Omega } per incident intensity I 0 {\displaystyle I_{0}} is [ 30 ] where | W | 2 {\displaystyle |W|^{2}} is the radiation pattern from a single dipole at the metal/dielectric interface. If surface plasmons are excited in the Kretschmann geometry and the scattered light is observed in the plane of incidence (Fig. 4), then the dipole function becomes with where ψ {\displaystyle \psi } is the polarization angle and θ {\displaystyle \theta } is the angle from the z -axis in the xz -plane. Two important consequences come out of these equations. The first is that if ψ = 0 {\displaystyle \psi =0} (s-polarization), then | W | 2 = 0 {\displaystyle |W|^{2}=0} and the scattered light d I d Ω I 0 = 0 {\displaystyle {\frac {dI}{d\Omega \ I_{0}}}=0} . Secondly, the scattered light has a measurable profile which is readily correlated to the roughness. This topic is treated in greater detail in reference. [ 30 ]
https://en.wikipedia.org/wiki/Surface_plasmon_polariton
Surface plasmon resonance ( SPR ) is a phenomenon that occurs where electrons in a thin metal sheet become excited by light that is directed to the sheet with a particular angle of incidence, and then travel parallel to the sheet. Assuming a constant light source wavelength and that the metal sheet is thin, the angle of incidence that triggers SPR is related to the refractive index of the material and even a small change in the refractive index will cause SPR to not be observed. This makes SPR a possible technique for detecting particular substances ( analytes ) and SPR biosensors have been developed to detect various important biomarkers. [ 1 ] [ 2 ] The surface plasmon polariton is a non-radiative electromagnetic surface wave that propagates in a direction parallel to the negative permittivity/dielectric material interface. Since the wave is on the boundary of the conductor and the external medium (air, water or vacuum for example), these oscillations are very sensitive to any change of this boundary, such as the adsorption of molecules to the conducting surface. [ 3 ] To describe the existence and properties of surface plasmon polaritons, one can choose from various models (quantum theory, Drude model , etc.). The simplest way to approach the problem is to treat each material as a homogeneous continuum, described by a frequency-dependent relative permittivity between the external medium and the surface. This quantity, hereafter referred to as the materials' " dielectric function ", is the complex permittivity . In order for the terms that describe the electronic surface plasmon to exist, the real part of the dielectric constant of the conductor must be negative and its magnitude must be greater than that of the dielectric. This condition is met in the infrared-visible wavelength region for air/metal and water/metal interfaces (where the real dielectric constant of a metal is negative and that of air or water is positive). LSPRs ( localized surface plasmon resonances) are collective electron charge oscillations in metallic nanoparticles that are excited by light. They exhibit enhanced near-field amplitude at the resonance wavelength. This field is highly localized at the nanoparticle and decays rapidly away from the nanoparticle/dielectric interface into the dielectric background, though far-field scattering by the particle is also enhanced by the resonance. Light intensity enhancement is a very important aspect of LSPRs and localization means the LSPR has very high spatial resolution (subwavelength), limited only by the size of nanoparticles. Because of the enhanced field amplitude, effects that depend on the amplitude such as magneto-optical effect are also enhanced by LSPRs. [ 4 ] [ 5 ] In order to excite surface plasmon polaritons in a resonant manner, one can use electron bombardment or incident light beam (visible and infrared are typical). The incoming beam has to match its momentum to that of the plasmon. [ 6 ] In the case of p-polarized light (polarization occurs parallel to the plane of incidence), this is possible by passing the light through a block of glass to increase the wavenumber (and the momentum ), and achieve the resonance at a given wavelength and angle. S-polarized light (polarization occurs perpendicular to the plane of incidence) cannot excite electronic surface plasmons. 
Electronic and magnetic surface plasmons obey the following dispersion relation : where k( ω {\displaystyle \omega } ) is the wave vector, ε {\displaystyle \varepsilon } is the relative permittivity, and μ {\displaystyle \mu } is the relative permeability of the material (1: the glass block, 2: the metal film), while ω {\displaystyle \omega } is angular frequency and c {\displaystyle {c}} is the speed of light in vacuum. [ 7 ] Typical metals that support surface plasmons are silver and gold, but metals such as copper, titanium or chromium have also been used. When using light to excite SP waves, there are two configurations which are well known. In the Otto configuration , the light illuminates the wall of a glass block, typically a prism, and is totally internally reflected . A thin metal film (for example gold) is positioned close enough to the prism wall so that an evanescent wave can interact with the plasma waves on the surface and hence excite the plasmons. [ 8 ] In the Kretschmann configuration (also known as Kretschmann–Raether configuration ), the metal film is evaporated onto the glass block. The light again illuminates the glass block, and an evanescent wave penetrates through the metal film. The plasmons are excited at the outer side of the film. This configuration is used in most practical applications. [ 8 ] When the surface plasmon wave interacts with a local particle or irregularity, such as a rough surface , part of the energy can be re-emitted as light. This emitted light can be detected behind the metal film from various directions. Surface plasmon resonance can be implemented in analytical instrumentation. SPR instruments consist of a light source, an input scheme, a prism with analyte interface, a detector, and computer. The detectors used in surface plasmon resonance convert the photons of light reflected off the metallic film into an electrical signal. A position sensing detector (PSD) or charged-coupled device (CCD) may be used to operate as detectors. [ 9 ] Surface plasmons have been used to enhance the surface sensitivity of several spectroscopic measurements including fluorescence , Raman scattering , and second-harmonic generation . In their simplest form, SPR reflectivity measurements can be used to detect molecular adsorption, such as polymers, DNA or proteins, etc. Technically, it is common to measure the angle of minimum reflection (angle of maximum absorption). This angle changes in the order of 0.1° during thin (about nm thickness) film adsorption. (See also the Examples.) In other cases the changes in the absorption wavelength is followed. [ 10 ] The mechanism of detection is based on the adsorbing molecules causing changes in the local index of refraction, changing the resonance conditions of the surface plasmon waves. The same principle is exploited in the recently developed competitive platform based on loss-less dielectric multilayers ( DBR ), supporting surface electromagnetic waves with sharper resonances ( Bloch surface waves ). [ 11 ] If the surface is patterned with different biopolymers, using adequate optics and imaging sensors (i.e. a camera), the technique can be extended to surface plasmon resonance imaging (SPRI). This method provides a high contrast of the images based on the adsorbed amount of molecules, somewhat similar to Brewster angle microscopy (this latter is most commonly used together with a Langmuir–Blodgett trough ). 
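In the Kretschmann geometry the resonance condition is that the in-plane wave vector of the light inside the prism matches the SPP wave vector, n_prism·(ω/c)·sinθ ≈ Re[(ω/c)·√(ε_m ε_s/(ε_m + ε_s))]. The minimal sketch below evaluates the resulting resonance angle and its shift when the sample permittivity changes slightly, which is the basis of SPR sensing; the prism index and permittivities are illustrative, and a full treatment would compute the reflectivity dip from the Fresnel equations for the layered system.

import numpy as np

def spr_angle(n_prism, eps_metal, eps_sample):
    """Resonance angle (degrees) from wave-vector matching in the Kretschmann geometry."""
    k_ratio = np.real(np.sqrt(eps_metal*eps_sample / (eps_metal + eps_sample)))
    return np.degrees(np.arcsin(k_ratio / n_prism))

n_prism = 1.515                      # BK7-like glass (illustrative)
eps_au = -11.7 + 1.2j                # gold near 633 nm (illustrative)

for name, eps_s in [("air", 1.0), ("water", 1.77), ("water + adsorbed layer", 1.80)]:
    print(f"{name:>24s}: {spr_angle(n_prism, eps_au, eps_s):6.2f} deg")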
For nanoparticles, localized surface plasmon oscillations can give rise to the intense colors of suspensions or sols containing the nanoparticles . Nanoparticles or nanowires of noble metals exhibit strong absorption bands in the ultraviolet – visible light regime that are not present in the bulk metal. This extraordinary absorption increase has been exploited to increase light absorption in photovoltaic cells by depositing metal nanoparticles on the cell surface. [ 12 ] The energy (color) of this absorption differs when the light is polarized along or perpendicular to the nanowire. [ 13 ] Shifts in this resonance due to changes in the local index of refraction upon adsorption to the nanoparticles can also be used to detect biopolymers such as DNA or proteins. Related complementary techniques include plasmon waveguide resonance, QCM , extraordinary optical transmission , and dual-polarization interferometry . The first SPR immunoassay was proposed in 1983 by Liedberg, Nylander, and Lundström, then of the Linköping Institute of Technology (Sweden). [ 15 ] They adsorbed human IgG onto a 600-Ångström silver film, and used the assay to detect anti-human IgG in water solution. Unlike many other immunoassays, such as ELISA , an SPR immunoassay is label free in that a label molecule is not required for detection of the analyte. [ 16 ] [ 17 ] [ 14 ] Additionally, the measurements on SPR can be followed real-time allowing the monitoring of individual steps in sequential binding events particularly useful in the assessment of for instance sandwich complexes. Multi-parametric surface plasmon resonance , a special configuration of SPR, can be used to characterize layers and stacks of layers. Besides binding kinetics, MP-SPR can also provide information on structural changes in terms of layer true thickness and refractive index. MP-SPR has been applied successfully in measurements of lipid targeting and rupture, [ 18 ] CVD-deposited single monolayer of graphene (3.7Å) [ 19 ] as well as micrometer thick polymers. [ 20 ] The most common data interpretation is based on the Fresnel formulas , which treat the formed thin films as infinite, continuous dielectric layers. This interpretation may result in multiple possible refractive index and thickness values. Usually only one solution is within the reasonable data range. In multi-parametric surface plasmon resonance , two SPR curves are acquired by scanning a range of angles at two different wavelengths, which results in a unique solution for both thickness and refractive index. Metal particle plasmons are usually modeled using the Mie scattering theory. In many cases no detailed models are applied, but the sensors are calibrated for the specific application, and used with interpolation within the calibration curve. Due to the versatility of SPR instrumentation, this technique pairs well with other approaches, leading to novel applications in various fields, such as biomedical and environmental studies. When coupled with nanotechnology , SPR biosensors can use nanoparticles as carriers for therapeutic implants. For instance, in the treatment of Alzheimer's disease , nanoparticles can be used to deliver therapeutic molecules in targeted ways. [ 21 ] In general, SPR biosensing is demonstrating advantages over other approaches in the biomedical field due to this technique being label-free, lower in costs, applicable in point-of-care settings, and capable of producing faster results for smaller research cohorts. 
In the study of environmental pollutants, SPR instrumentation can be used as a replacement for older chromatography-based techniques. Current pollution research relies on chromatography to monitor increases in pollution in an ecosystem over time. When SPR instrumentation with a Kretschmann prism configuration was used for the detection of chlorophene, an emerging pollutant, it was demonstrated that SPR has precision and accuracy levels similar to those of chromatography techniques. [ 22 ] Furthermore, SPR sensing surpasses chromatography techniques through its high-speed, straightforward analysis.

One of the first common applications of surface plasmon resonance spectroscopy was the measurement of the thickness (and refractive index) of adsorbed self-assembled nanofilms on gold substrates. The resonance curves shift to higher angles as the thickness of the adsorbed film increases. This is a 'static SPR' measurement. When higher-speed observation is desired, one can select an angle right below the resonance point (the angle of minimum reflectance) and measure the reflectivity changes at that point. This is the so-called 'dynamic SPR' measurement. The interpretation of the data assumes that the structure of the film does not change significantly during the measurement.

SPR can be used to study the real-time kinetics of molecular interactions. Determining the affinity between two binding partners involves establishing the equilibrium dissociation constant, which represents the equilibrium value of the product quotient. This constant can be determined from dynamic SPR parameters as the dissociation rate divided by the association rate:

K_{\mathrm{D}} = \frac{k_{\text{d}}}{k_{\text{a}}}

In this process, a ligand is immobilized on the dextran surface of the SPR crystal. Through a microflow system, a solution containing the analyte is injected over the ligand-covered surface. The binding of the analyte to the ligand causes an increase in the SPR signal (expressed in response units, RU). After the association time, a solution without the analyte (typically a buffer) is introduced into the microfluidics to initiate the dissociation of the bound complex between the ligand and the analyte. As the analyte dissociates from the ligand, the SPR signal decreases. From these association ('on rate', k a ) and dissociation rates ('off rate', k d ), the equilibrium dissociation constant ('binding constant', K D ) can be calculated.

The detected SPR signal is a consequence of the electromagnetic 'coupling' of the incident light with the surface plasmon of the gold layer. This interaction is particularly sensitive to the characteristics of the layer at the gold–solution interface, which is usually just a few nanometers thick. When substances bind to the surface, they alter the way light is reflected, causing a change in the resonance angle that can be measured as a signal in SPR experiments. One common application is measuring the kinetics of antibody–antigen interactions . As SPR biosensors facilitate measurements at different temperatures, thermodynamic analysis can be performed to obtain a better understanding of the studied interaction. By performing measurements at different temperatures, typically between 4 and 40 °C, it is possible to relate the association and dissociation rate constants to the activation energy and thereby obtain thermodynamic parameters including binding enthalpy, binding entropy, Gibbs free energy and heat capacity.
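As a purely numerical illustration of the kinetic analysis just described, the sketch below simulates an idealized 1:1 association/dissociation sensorgram and recovers K_D = k_d/k_a; the rate constants, analyte concentration and response levels are arbitrary assumed values, not data from any instrument:

```python
import numpy as np

# Illustrative rate constants for a 1:1 interaction (assumed values).
k_a = 1.0e5      # association rate constant, 1/(M*s)
k_d = 1.0e-3     # dissociation rate constant, 1/s
C = 100e-9       # analyte concentration during injection, M
R_max = 100.0    # response if every ligand site were occupied, RU

t = np.linspace(0, 600, 601)           # time axis for each phase, s

# Association phase: R(t) = R_eq * (1 - exp(-(k_a*C + k_d)*t))
k_obs = k_a * C + k_d
R_eq = R_max * k_a * C / k_obs
R_assoc = R_eq * (1 - np.exp(-k_obs * t))

# Dissociation phase (buffer only): R(t) = R_end * exp(-k_d*t)
R_dissoc = R_assoc[-1] * np.exp(-k_d * t)

K_D = k_d / k_a
print(f"plateau response ~ {R_assoc[-1]:.1f} RU")
print(f"K_D = {K_D:.2e} M (i.e. {K_D*1e9:.1f} nM)")
```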
As SPR allows real-time monitoring, individual steps in sequential binding events can be thoroughly assessed when investigating the suitability of antibody pairs in a sandwich configuration. Additionally, it allows the mapping of epitopes, as antibodies with overlapping epitopes will be associated with an attenuated signal compared to those capable of interacting simultaneously. Recently, there has been interest in magnetic surface plasmons. These require materials with a large negative magnetic permeability, a property that has only recently been made available with the construction of metamaterials .

Layering graphene on top of gold has been shown to improve SPR sensor performance. [ 23 ] Its high electrical conductivity increases the sensitivity of detection. The large surface area of graphene also facilitates the immobilization of biomolecules, while its low refractive index minimizes its interference. Enhancing SPR sensitivity by incorporating graphene with other materials expands the potential of SPR sensors, making them practical in a broader range of applications. For instance, the enhanced sensitivity of graphene can be used in conjunction with a silver SPR sensor, providing a cost-effective alternative for measuring glucose levels in urine. [ 24 ] Graphene has also been shown to improve the resistance of SPR sensors to high-temperature annealing up to 500 °C. [ 25 ]

Recent advancements in SPR technology have given rise to novel formats increasing the scope and applicability of SPR sensing. Fiber-optic SPR involves the integration of SPR sensors onto the ends of optical fibers, enabling the direct coupling of light with the surface plasmons as the analytes are passed through a hollow SPR core. [ 26 ] This format offers enhanced sensitivity and allows for the development of compact sensing devices, making it particularly valuable for applications requiring remote sensing in the field. [ 27 ] It also offers an increased surface area for analytes to bind to the inner lining of the optical fiber.
https://en.wikipedia.org/wiki/Surface_plasmon_resonance
Surface plasmon resonance microscopy ( SPRM ), also called surface plasmon resonance imaging ( SPRI ), [ 1 ] is a label-free analytical tool that combines the surface plasmon resonance of metallic surfaces with imaging of the metallic surface. [ 2 ] The heterogeneity of the refractive index of the metallic surface gives rise to high-contrast images, caused by the shift in the resonance angle. [ 1 ] SPRM can achieve a sub-nanometer thickness sensitivity, [ 3 ] and its lateral resolution reaches the micrometer scale. [ 4 ] SPRM is used to characterize surfaces such as self-assembled monolayers , multilayer films , metal nanoparticles , oligonucleotide arrays , and binding and reduction reactions. [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ]

Surface plasmon polaritons are surface electromagnetic waves coupled to oscillating free electrons of a metallic surface that propagate along a metal/dielectric interface. [ 10 ] Since the polaritons are highly sensitive to small changes in the refractive index at the metallic surface, [ 11 ] they can be used as a biosensing tool that does not require labeling. SPRM measurements can be made in real time, [ 12 ] for example measuring the binding kinetics of membrane proteins in single cells [ 13 ] or DNA hybridization. [ 14 ] [ 15 ] The concept of classical SPR has been known since 1968, but the SPR imaging technique was introduced in 1988 by Rothenhäusler and Knoll. [ 16 ] Capturing a high-resolution image of low-contrast samples was a nearly impossible task for optical measuring techniques until the introduction of SPRM in 1988. In SPRM, plasmon surface polariton (PSP) waves are used for illumination. In simple terms, SPRI is an advanced version of classical SPR analysis in which the sample is monitored without a label through the use of a CCD camera. With the aid of the CCD camera, SPRI can record sensorgrams and SPR images and analyze hundreds of interactions simultaneously. [ 17 ]

Surface plasmons, or surface plasmon polaritons, are generated by coupling of an electric field with the free electrons of a metal. [ 18 ] [ 19 ] SPR waves propagate along the interface between a dielectric and a conducting layer rich in free electrons. [ 20 ] As shown in Figure 2, when light passes from a medium of high refractive index to a second medium with a lower refractive index, the light is totally reflected under certain conditions. [ 21 ] In order to obtain total internal reflection (TIR), the angles θ 1 and θ 2 must lie within a range that can be explained through Snell's law. When light passes from a medium of higher refractive index to one of lower refractive index, it is refracted at an angle θ 2 , as defined in Equation 1. [ citation needed ] In the TIR process, the reflected light leaks a small portion of its electric field intensity into medium 2 ( η 1 > η 2 ). The light leaked into medium 2 penetrates it as an evanescent wave. The intensity and penetration depth of the evanescent wave can be calculated according to Equations 2 and 3, respectively. [ 22 ] Figure 3 shows a schematic representation of surface plasmons coupled to electron density oscillations. The light wave is trapped on the surface of the metal layer by collective coupling to the electrons of the metal surface. When the oscillation frequencies of the electron plasma and the electric field of the light wave match, they enter into resonance. [ 23 ] [ 24 ] Recently, the light leaking from the metal surface has been imaged. [ 25 ]
Radiation of different wavelengths (green, red and blue) was converted into surface plasmon polaritons through the interaction of the photons at the metal/dielectric interface. Two different metal surfaces were used: gold and silver. The propagation length of the SPP along the x–y plane (the metal plane) was compared for each metal and photon wavelength. The propagation length is defined as the distance traveled by the SPP along the metal before its intensity decreases by a factor of 1/e, as defined in Equation 4. [ citation needed ] Figure 4 shows the leakage light, captured by a color CCD camera, of the green, red and blue photons in gold (a) and silver (b) films. Part (c) of Figure 4 shows the intensity of the surface plasmon polaritons as a function of distance. It was determined that the leakage light intensity is proportional to the intensity in the waveguide. [ citation needed ] In Equation 4, δ SPP is the propagation length, ε′m and ε″m are the real and imaginary parts of the relative permittivity of the metal, and λ0 is the free-space wavelength. [ 26 ]

The metallic film is capable of absorbing light due to the coherent oscillation of the conduction band electrons induced by the interaction with an electromagnetic field. [ 27 ] Electrons in the conduction band become polarized after interaction with the electric field of the radiation. A net charge difference is created at the surface of the metal film, creating a collective dipolar oscillation of electrons with the same phase. [ 28 ] When the electron motion matches the frequency of the electromagnetic field, absorption of the incident radiation occurs. The oscillation frequency of gold surface plasmons lies in the visible region of the electromagnetic spectrum, giving a red color, while silver gives a yellow color. [ 29 ] Nanorods exhibit two absorption peaks in the UV–vis region due to longitudinal and transverse oscillations; for gold nanorods the transverse oscillation generates a peak at 520 nm, while the longitudinal oscillation generates absorption at longer wavelengths, within a range of 600 to 800 nm. [ 29 ] [ 30 ] Silver nanoparticles shift their absorption wavelengths to higher energies, blue-shifting from 408 nm to 380 nm and 372 nm as they change from sphere to rod and wire, respectively. [ 31 ] The absorption intensity and wavelength of gold and silver depend on the size and shape of the particles. [ 32 ] In Figure 5, the size and shape of silver nanoparticles influence the intensity of the scattered light and the wavelength of maximum scattering: the triangular particles appear red, with maximum scattered light at 670–680 nm, the pentagonal particles appear green (620–630 nm), and the spherical particles, with higher absorption energies (440–450 nm), appear blue. [ 33 ]

Surface plasmon polaritons are quasiparticles composed of electromagnetic waves coupled to free electrons of the conduction band of metals. [ 34 ] One of the most widely used methods for coupling p-polarized light to the metal–dielectric interface is prism-based coupling. [ 35 ] Prism couplers are the most widely used means of exciting surface plasmon polaritons. This method is also called the Kretschmann–Raether configuration, in which TIR creates an evanescent wave that couples to the free electrons of the metal surface. [ 36 ] High numerical aperture objective lenses have been explored as a variant of prism coupling to excite surface plasmon polaritons. [ 15 ] Waveguide coupling is also used to create surface plasmons.
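A back-of-the-envelope estimate of the propagation length can be made directly from the imaginary part of the surface plasmon wave vector, which is equivalent in spirit to Equation 4. The permittivities below are rough literature-style values for gold and silver at 633 nm and are assumptions for illustration only:

```python
import numpy as np

wavelength = 633e-9
k0 = 2 * np.pi / wavelength
eps_dielectric = 1.0                       # air above the film

# Approximate complex permittivities at 633 nm (illustrative values).
metals = {"gold": -11.6 + 1.2j, "silver": -18.0 + 0.5j}

for name, eps_m in metals.items():
    k_spp = k0 * np.sqrt(eps_m * eps_dielectric / (eps_m + eps_dielectric))
    # Propagation length: distance over which the SPP intensity falls by 1/e.
    delta = 1.0 / (2.0 * k_spp.imag)
    print(f"{name}: propagation length ~ {delta*1e6:.0f} micrometres")
```

With such values the propagation length comes out on the order of ten micrometres for gold and several tens of micrometres for silver, consistent with the qualitative comparison made above.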
The Kretschmann–Raether configuration is used to achieve resonance between light and the free electrons of the metal surface. In this configuration a prism with a high refractive index is interfaced with a metal film. Light from a source propagates through the prism and is made incident on the metal film. As a consequence of TIR, some of the light leaks through the metal film, forming an evanescent wave in the dielectric medium, as in Figure 6. [ 12 ] The evanescent wave penetrates a characteristic distance into the less optically dense medium, where it is attenuated. [ 37 ] Figure 6 shows the Kretschmann–Raether configuration, where a prism with refractive index η 1 is coupled to a dielectric surface with refractive index η 2 and the incidence angle of the light is θ . The interaction between the light and the surface polaritons in TIR can be described using Fresnel multilayer reflection; the amplitude reflection coefficient ( r pmd ) is expressed as in Equation 5. [ 38 ] The power reflection coefficient R is the squared magnitude of the amplitude coefficient, R = | r pmd |^2.

In Figure 7, a schematic representation of the Otto coupling prism is shown. In Figure 7 the air gap is drawn exaggeratedly thick for clarity; in reality the air gap between the prism and the metal layer is very thin. The electromagnetic waves are conducted through an optical waveguide. When light enters the region with a thin metal layer, it penetrates evanescently through the metal layer, exciting a surface plasmon wave (SPW). In the waveguide-coupling configuration, the waveguide is created when the refractive index of the grating is greater than that of the substrate. Incident radiation propagates along the waveguide layer of high refractive index. [ 39 ] In Figure 8, electromagnetic waves are guided through a wave-guiding layer; once the optical wave reaches the interface between the wave-guiding layer and the metal, an evanescent wave is created. The evanescent wave excites the surface plasmon at the metal–dielectric interface. [ 40 ] Due to the periodic grating, phase matching between the incident light and the guided mode is easy to obtain. [ 41 ] According to Equation 7, the propagation vector ( Kz ) in the z direction can be tuned by changing the periodicity Λ. The grating vector can be modified, and the angle of resonant excitation can thereby be controlled. [ 42 ] In Figure 9, q is the diffraction order; it can take any integer value (positive, negative or zero). [ 43 ]

The propagation constant of a monochromatic beam of light parallel to the surface is defined by Equation 8, [ 44 ] where θ is the angle of incidence, k sp is the propagation constant of the surface plasmon, and n p is the refractive index of the prism. Resonance occurs when the wave vector of the SPW, k sp , matches the in-plane wave vector of the incident light, k x . [ 44 ] Here εd and εm represent the dielectric constants of the dielectric and the metal, while λ is the wavelength of the incident light. k x and k sp can be represented as [ 44 ]

k_x = \frac{2\pi}{\lambda}\, n_p \sin\theta, \qquad k_{sp} = \frac{2\pi}{\lambda}\sqrt{\frac{\varepsilon_d\,\varepsilon_m}{\varepsilon_d + \varepsilon_m}}.

The surface plasmons are evanescent waves that have their maximum intensity at the interface and decay exponentially away from the phase boundary over a penetration depth. [ 13 ] The propagation of the surface plasmons is strongly affected by a thin film coating the conducting layer. The resonance angle θ shifts when the metal surface is coated with a dielectric material, due to the change of the propagation vector k of the surface plasmon. [ 45 ] This sensitivity is due to the shallow penetration depth of the evanescent wave.
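The grating-coupling (phase-matching) condition described above can be made concrete with a short sketch that, for an assumed grating period Λ and diffraction order q, finds the incidence angles at which the in-plane wave vector of the light plus a multiple of the grating vector equals the surface plasmon propagation constant. All numerical values are illustrative assumptions:

```python
import numpy as np

wavelength = 633e-9
period = 600e-9                 # grating period Lambda (assumed)
eps_metal = -11.6 + 1.2j        # gold at 633 nm (approximate)
eps_dielectric = 1.0            # air

k0 = 2 * np.pi / wavelength
G = 2 * np.pi / period          # grating vector magnitude
k_sp = (k0 * np.sqrt(eps_metal * eps_dielectric /
                     (eps_metal + eps_dielectric))).real

# Phase matching: k0*sin(theta) + q*G = k_sp for some integer order q.
for q in (-2, -1, 0, 1, 2):
    s = (k_sp - q * G) / k0
    if abs(s) <= 1:
        print(f"order q={q:+d}: resonance at {np.degrees(np.arcsin(s)):.1f} deg")
    else:
        print(f"order q={q:+d}: no real coupling angle")
```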
Materials with a high density of free electrons are used: metal films roughly 50 nm thick made of copper, titanium, chromium or gold. However, gold is the most common metal used in SPR as well as in SPRM. Scanning-angle SPR is the most widely used method for detecting biomolecular interactions. [ 40 ] It measures the reflectance (%R) from a prism/metal-film assembly as a function of the incident angle at a fixed excitation wavelength. When the incident light matches the propagation constant of the interface mode, the mode is excited at the expense of the reflected light. As a consequence, the reflectivity at the resonance angle is damped. [ 46 ] The propagation constant of the polaritons can be modified by varying the dielectric material. This modification causes the resonance angle to shift, as in the example shown in Figure 10, from θ 1 to θ 2 , due to the change in the surface plasmon propagation constant. The resonance angle can be found using Equation 11, where n 1 , n 2 and n g are the refractive indices of medium 1, medium 2 and the metal layer, respectively. [ 46 ]

Using TIR, two-dimensional imaging of spatial differences in %R at a fixed angle θ is possible. A beam of monochromatic light is used to irradiate the sample at a fixed incident angle. The SPR image is created from the reflected light detected by a CCD camera. [ 13 ] The minimum value of %R at the resonance angle provides the contrast in SPRM. [ 8 ] Huang and collaborators developed a microscope with a high-numerical-aperture (NA) objective, which improves the lateral resolution at the expense of the longitudinal resolution. [ 47 ] The resolution of conventional light microscopy is limited by the diffraction limit of light. In SPRM, the excited surface plasmons adopt a configuration parallel to the surface with respect to the incident beam. The polaritons travel along the metal–dielectric interface for a certain distance until they decay back into photons. Therefore, the resolution achieved by SPRM is determined by the propagation length of the surface plasmons parallel to the incident plane. [ 46 ] The separation between two areas should be at least approximately this propagation length in order for them to be resolved. Berger, Kooyman and Greve showed that the lateral resolution can be tuned by changing the excitation wavelength; better resolution is achieved as the excitation energy increases. Equations 4 and 12 define the magnitude of the wave vector of the surface plasmons, [ 48 ] where n 2 is the refractive index of medium 2, n g is the refractive index of the metal film, and λ is the excitation wavelength. [ 46 ]

Surface plasmon resonance microscopy is based on surface plasmon resonance and on recording images of the structures present on the substrate using an instrument equipped with a CCD camera. In the past decade, SPR sensing has been demonstrated to be an exceedingly powerful technique and has been used extensively in the research and development of materials, biochemistry and the pharmaceutical sciences. [ 49 ] The SPRM instrument combines the following main components: a light source (typically a He–Ne laser) whose beam travels through a prism attached to a glass slide coated with a thin metal film (typically gold or silver), where the light reflects at the gold/solution interface at an angle greater than the critical angle. [ 1 ] The reflected light from the interface area is recorded by a CCD detector, and an image is recorded.
Although the components mentioned above are the most important for SPRM, additional accessories such as polarizers, filters, beam expanders, focusing lenses, a rotating stage, etc., similar to those used in several other imaging methods, are installed and used in the instrumentation to make the microscopic technique effective for a given application. Figure 12 shows a typical SPRM. Depending on the application, and to optimize the imaging technique, researchers modify this basic instrumentation with design changes that may even include altering the source beam. One such design change, which resulted in a different type of SPRM, is the objective-type instrument shown in Figure 11, with some modification of the optical configuration. [ 47 ] SPRi systems are currently manufactured by well-known biomedical instrumentation manufacturers such as GE Life Sciences, HORIBA, Biosensing USA, etc. The cost of SPRi instruments ranges from about USD 100k to 250k, although simple demonstration prototypes can be made for about USD 2000. [ 50 ]

To perform SPRM measurements, sample preparation is a critical step. Two factors can be affected by the immobilization step. One is the reliability and reproducibility of the acquired data: it is important to ensure the stability of the recognition element, such as antibodies, proteins or enzymes, under the experimental conditions. Moreover, the stability of the immobilized specimens will affect the sensitivity and/or the limit of detection (LOD). [ 51 ] [ 52 ] One of the most popular immobilization methods is the self-assembled monolayer (SAM) on a gold surface. Jenkins and collaborators (2001) used mercaptoethanol patches surrounded by a SAM composed of octadecanethiol (ODT) to study the adsorption of egg phosphatidylcholine on the ODT SAM. [ 5 ] A pattern of ODT–mercaptoethanol was made on a 50 nm gold film. The gold film was obtained through thermal evaporation onto LaSFN 9 glass. The lipid vesicles were deposited on the ODT SAM through adsorption, giving a final multilayer thickness greater than 80 Å. [ citation needed ]

An 11-mercaptoundecanoic acid self-assembled monolayer (MUA-SAM) was formed on gold-coated BK7 slides. A PDMS plate was used as a mask on the MUA-SAM chip. Clenbuterol (CLEN) was attached to BSA molecules through an amide bond between the carboxylic group of BSA and the amine group of the CLEN molecules. In order to immobilize BSA on the gold surface, the spots created through PDMS masking were functionalized with sulfo-NHS and EDC; subsequently a 1% BSA solution was poured onto the spots and incubated for 1 hour. Non-immobilized BSA was rinsed out with PBS, a CLEN solution was poured onto the spots, and unimmobilized CLEN was removed through a PBS rinse. [ 53 ]

An alkanethiol SAM was prepared in order to simultaneously measure the concentrations of horseradish peroxidase (Px), human immunoglobulin E (IgE), human choriogonadotropin (hCG) and human immunoglobulin G (IgG) through SPR. Alkanethiols with carbon chains of 11 and 16 carbons were self-assembled on the sensor chip. The antibodies were attached to the C16 alkanethiol, which had a terminal carboxylic group. [ 54 ]

A micropatterned electrode was fabricated by gold deposition on microscope slides. PDMS stamping was used to produce an array of hydrophilic/hydrophobic surfaces; ODT treatment followed by immersion in 2-mercaptoethanol solution rendered a surface functionalized for lipid membrane deposition. The patterned electrode was characterized through SPRM.
In Figure 14B, the SPRM image reveals the size of the pockets, which were 100 µm × 100 µm and 200 µm apart. As can be seen in the image, the remarkable contrast is due to the high sensitivity of the technique. [ citation needed ] SPRM is a useful technique for measuring the concentration of biomolecules in solution, detecting binding molecules and monitoring molecular interactions in real time. It can be used as a biosensor for surface interactions of biological molecules: antigen–antibody binding, mapping and sorption kinetics. For example, one possible cause of type 1 diabetes in children is a high level of cow's milk antibodies (IgG, IgA, IgM, mainly IgA) in their serum. [ 55 ] Cow's milk antibodies can be detected in milk and serum samples using SPRM. [ 56 ] SPRM is also advantageous for detecting the site-specific attachment of B or T lymphocytes on an antibody array. The technique is convenient for studying the label-free, real-time interactions of cells with a surface, so SPRM can serve as a diagnostic tool for cell-surface adhesion kinetics. [ 57 ] Besides its merits, SPRM also has limitations: it is not applicable to detecting low-molecular-weight molecules, and although it is label free, it requires very clean experimental conditions. The sensitivity of SPRM can be improved by coupling it with MALDI-MS. [ 58 ] There are a number of applications of SPRM, some of which are described here.

Membrane proteins are responsible for the regulation of cellular responses to extracellular signals. It has been challenging to investigate the involvement of membrane proteins in disease biomarkers and therapeutic targets and their binding kinetics with their ligands. Traditional approaches cannot clearly resolve the structures and functions of membrane proteins. [ 13 ] In order to understand the structural details of membrane proteins, an alternative analytical tool is needed that can provide three-dimensional and time-resolved information for monitoring membrane proteins. Atomic force microscopy (AFM) is an excellent method for obtaining high-spatial-resolution images of membrane proteins, [ 59 ] but it is of little help in investigating their binding kinetics. Fluorescence-based microscopy (FLM) can be used to study the interactions of membrane proteins in individual cells, but it requires the development of suitable labels and different strategies for different target proteins. [ 60 ] Furthermore, the host protein may be affected by the labeling. [ 61 ] The binding kinetics of membrane proteins in single living cells can be studied via a label-free imaging method based on SPR microscopy, without extracting the proteins from the cell membranes, which allows scientists to work with the actual conformations of the membrane proteins. Furthermore, the distribution and local binding activities of membrane proteins in each cell can be mapped and quantified. SPR microscopy (SPRM) makes simultaneous optical and fluorescence imaging of the same sample possible, combining the advantages of label-based and label-free detection methods in a single setup. [ 47 ] [ 62 ]

SPR imaging is used to study multiple adsorption interactions in an array format under the same experimental conditions. Nelson and his coworkers introduced a multistep procedure to create DNA arrays on gold surfaces for use with SPR imaging. [ 63 ] Affinity interactions can be studied for a variety of target molecules, e.g. proteins and nucleic acids.
Mismatching of bases in a DNA sequence leads to a number of serious diseases, such as Lynch syndrome, which carries a high risk of colon cancer. [ 64 ] SPR imaging is useful for monitoring the adsorption of molecules on the gold surface, which is possible because of the resulting change in reflectivity from the surface. First, a G–G mismatch pair is stabilized by attaching it to a ligand, a naphthyridine dimer, through hydrogen bonding, which creates hairpin structures in double-stranded DNA on the gold surface. Binding of the dimer to the DNA increases the free energy of hybridization, which causes a change in the index of refraction. [ 65 ] A DNA array was fabricated to test the G–G mismatch-stabilizing properties of the naphthyridine dimer. Each of the four immobilized sequences in the array differed by one base. The position of this base is indicated by an X in sequence 1, as shown in Figure 16. The SPR difference image is only detected for the sequence having a cytosine (C) base at the X position in sequence 1, the complementary sequence to sequence 2. However, the SPR difference image corresponding to the addition of sequence 2 in the presence of the naphthyridine dimer shows that, in addition to its complement, sequence 2 also hybridizes to the sequence that forms a G–G mismatch. These results demonstrate that SPR imaging is a promising tool for monitoring single-base mismatches and for screening hybridized molecules. [ 65 ]

SPR imaging can be used to study the binding of antibodies to a protein array. [ 66 ] A gold surface with amine functionalities, carrying the protein array, is used to study the binding of antibodies. Immobilization of the proteins was done by flowing protein solutions through PDMS microchannels. The PDMS was then removed from the surface and antibody solutions were flowed over the array. A three-component protein array containing the proteins human fibrinogen, ovalbumin and bovine IgG is shown in Figure 17, with SPR images obtained by Kariuki and co-workers. The contrast in the array is due to differences in refractive index resulting from the local binding of antibodies. These images show that there is a high degree of antibody binding specificity and a small degree of non-specific adsorption of the antibody to the array background, which could be reduced by modifying the array background. Based on these results, the SPR imaging technique can be adopted as a diagnostic tool for studying antibody interactions with protein arrays. [ 66 ] [ 67 ]

The discovery and validation of protein biomarkers are crucial for disease diagnosis. Coupling SPRM with a MALDI mass spectrometer (SUPRA-MS) enables multiplexed quantification of binding and molecular characterization on the basis of different masses. SUPRA-MS has been used to detect, identify and characterize the potential breast cancer biomarker LAG3 protein, introduced into human plasma. Gold chips were prepared from glass slides by sputter-coating them with thin layers of chromium and gold. The gold surface was functionalized using a solution of 11-mercapto-1-undecanol (11-MUOH) and 16-mercapto-1-hexadecanoic acid (16-MHA). This self-assembled monolayer was activated with sulfo-NHS and EDC. A pattern of sixteen droplets was deposited on the macroarray. Immunoglobulin G antibodies were spotted against lymphocyte activation gene 3 (α-LAG3) and rat serum albumin (α-RSA). After placing the biochip in the SPRi instrument and running buffer solution through the flow cell, α-LAG3 was injected. A special imaging station was used on the captured proteins; this station can also be placed in the MALDI instrument.
Before placing the chip in the MALDI instrument, the captured proteins were reduced, digested and loaded with matrix in order to avoid contamination. [ 58 ] The antigen density is directly proportional to the change in reflectivity ΔR because the evanescent-wave penetration depth Lzc is larger than the thickness of the immobilized antigen layer, [ 68 ] where ∂n/∂c is the refractive index increment of the molecule and S_PR is the sensitivity of the prism reflectivity. A clean mass spectrum was obtained for the LAG3 protein owing to good tryptic digestion and the homogeneity of the matrix (α-cyano-4-hydroxycinnamic acid). A relatively high-intensity m/z peak of the LAG3 protein was found at 1,422.70 amu, with an average Mascot score of 87.9 ± 2.4. The MS results were further validated by MS–MS analysis. These results are similar to those of the classical analytical method of in-gel digestion. [ 58 ] A signal-to-noise ratio S/N > 10, 100% reliability and detection at the femtomole level on chip demonstrate the credibility of this coupled technique. Protein–protein interactions and on-chip peptide distributions can be determined with high spatial resolution using this technique. [ 58 ]

Aptamers are DNA ligands that bind target biomolecules such as proteins. An SPR imaging platform is a good choice for characterizing aptamer –protein interactions. To study the aptamer–protein interaction, oligonucleotides are first grafted, through the formation of a thiol self-assembled monolayer (SAM) on a gold substrate, using a piezoelectric dispensing system. Thiol groups are introduced on the DNA oligonucleotides via N-hydroxysuccinimide (NHS) chemistry. Target oligonucleotides bearing a primary amine group at their 5′ end are conjugated to HS-C(11)-NHS in phosphate buffer solution at pH 8.0 for one hour at room temperature. [ citation needed ] The aptamer-grafted biosensor is placed in the SPRM after rinsing. Thrombin is then co-injected with an excess of cytochrome C to test signal specificity. The concentration of free thrombin is determined from a calibration curve obtained by plotting the initial slope of the signal at the beginning of the injection against concentration. The interaction of thrombin and the aptamer can be monitored on the microarray in real time during injections of thrombin at different concentrations. The solution-phase dissociation constant KDsol (3.16 ± 1.16 nM) is calculated from the measured concentrations of free thrombin, [ citation needed ] with [THR---APT] = cTHR − [THR], the equilibrium concentration of thrombin bound to aptamers in solution, and [APT] = cAPT − [THR---APT], the concentration of free aptamers in solution. The surface-phase dissociation constant KDsurf (3.84 ± 0.68) is obtained by fitting a Langmuir adsorption isotherm to the equilibrium signals. The two dissociation constants differ significantly because KDsurf depends on the surface grafting density, as shown in Figure 19. This dependence extrapolates linearly at low grafting density to the solution-phase affinity. [ citation needed ] The difference in the SPRi image can give information regarding the presence of binding and its specificity, but it is not suitable for quantification of the free protein in the case of multiple affinity sites. Real-time monitoring of the interaction using SPRM makes it possible to study the kinetics and affinity of the interactions. [ 69 ] Although surface plasmon resonance imaging (SPRi) is mostly used in biology to characterize interactions between two biological molecules, it is also useful for monitoring the interactions between two polymers.
In this approach, one polymer, called the host polymer (HP), is immobilized on the surface of a biochip and the other polymer, designated the guest polymer (GP), is introduced onto the SPRi biochip to study the interactions; for example, a host polymer of amine-functionalized poly(β-cyclodextrin) and a guest polymer of PEG(ada)4. [ citation needed ] An SPRi biochip was used for immobilization of HP at different concentrations. An array of HP active sites was produced on the chip. The attachment of HP occurred through its amino groups reacting with the N-hydroxysuccinimide functionalities on the gold surface. The SPRi system was first filled with running buffer solution, followed by placing the SPRi biochip into the analysis chamber. Two solutions of GP, at concentrations of 1 g/L and 0.1 g/L, were injected into the flow cell. The association and dissociation of the two polymers can be monitored in real time on the basis of changes in reflectivity, and images from SPRM can be differentiated on the basis of white spots (association phase) and black spots (dissociation phase). PEG without adamantyl groups did not adsorb to the β-cyclodextrin cavities; conversely, there was no adsorption of GP on a chip without HP. The change in SPRi response at the reaction sites is provided by the captured kinetic curves and real-time images from the CCD camera. Local changes in light reflectivity are directly related to the quantity of target molecules at each point. Variation across the surface of the chip provides comprehensive knowledge of the molecular binding and kinetic processes. [ 70 ]

One important class of biomaterials is polymer hydroxyapatite, which is remarkably useful in the field of bone regeneration because of its resemblance to natural bone material. The advantage of hydroxyapatite, Ca10(PO4)6(OH)2, is that it starts to form inside bone tissue through mineralization, which also promotes osteointegration. Biomineralization, also called calcification, is a process in which calcium cations come from cells and physiological fluids while phosphate anions are produced from the hydrolysis of phosphoesters and phosphoproteins as well as from body fluids. This phenomenon has also been tested in in vitro studies. [ citation needed ] For in vitro studies, polyamidoamine (PAMAM) dendrimers with amino- and carboxylic-acid external reactive shells are considered as the sensing phase. These dendrimers must be immobilized on the gold surface, but they are not intrinsically reactive toward gold; hence, thiol groups have to be introduced at the terminals of the dendrimers so that the dendrimers can be attached to the gold surface. The carboxylic groups are functionalized with N,N-(3-dimethylaminopropyl)-N′-ethyl-carbodiimide hydrochloride (EDC) and N-hydroxysuccinimide (NHS) solutions in phosphate buffer. The functional groups (amide, amino and carboxyl) act as ionic pumps, capturing calcium ions from the test fluids; the calcium cations then bind with phosphate anions to generate calcium phosphate mineral nuclei on the dendrimer surface. [ citation needed ] SPRM is expected to be sensitive enough to provide important quantitative information on the occurrence and kinetics of mineralization. Detection of the mineralization is based on the specific mass change induced by the formation and growth of the mineral nuclei. Nucleation and the progress of mineralization can be monitored by SPRM, as shown in Figure 20. PAMAM-containing sensors are fixed on the SPRi analysis platform and then exposed to experimental fluids in the flow cell, as shown in Figure 21.
SPRM is not suited to identifying the origin and nature of the mass change; rather, it detects the modification of the refractive index caused by the mineral precipitation. [ citation needed ]
https://en.wikipedia.org/wiki/Surface_plasmon_resonance_microscopy
In physics and engineering , surface power density is power per unit area . Surface power density is an important factor in the comparison of industrial energy sources. [ 1 ] The concept was popularised by the geographer Vaclav Smil . The term is usually shortened to "power density" in the relevant literature, which can lead to confusion with homonymous or related terms. Measured in W/m 2 , it describes the amount of power obtained per unit of Earth surface area used by a specific energy system , including all supporting infrastructure, manufacturing, mining of fuel (if applicable) and decommissioning. [ 2 ] [ 3 ] Fossil fuels and nuclear power are characterized by high power density, which means that large power can be drawn from power plants occupying a relatively small area. Renewable energy sources have power densities at least three orders of magnitude smaller, and for the same energy output they need to occupy a correspondingly larger area, which has already been highlighted as a limiting factor for renewable energy in the German Energiewende . [ 4 ] The following table shows the median surface power density of renewable and non-renewable energy sources (values in W/m 2 ). [ 5 ]

As an electromagnetic wave travels through space, energy is transferred from the source to other objects (receivers). The rate of this energy transfer depends on the strength of the EM field components. Simply put, the rate of energy transfer per unit area (the power density) is the product of the electric field strength (E) and the magnetic field strength (H). [ 6 ] This yields units of W/m 2 . In the USA the unit mW/cm 2 is more often used when making surveys; 1 mW/cm 2 is the same power density as 10 W/m 2 . The following relation can be used to obtain these units directly (with E in V/m): [ 6 ]

P_d\,[\mathrm{mW/cm^2}] = \frac{E^2}{3770}

The simplified relationships stated above apply at distances of about two or more wavelengths from the radiating source. This distance can be a far distance at low frequencies, and it is called the far field. Here the ratio between E and H becomes a fixed constant (377 ohms) and is called the characteristic impedance of free space . Under these conditions the power density can be determined by measuring only the E-field component (or the H-field component, if preferred) and calculating the power density from it. [ 6 ] This fixed relationship is useful for measuring radio frequency or microwave (electromagnetic) fields. Since power is the rate of energy transfer, and E 2 and H 2 are proportional to the power density, measuring E 2 or H 2 gives the rate at which energy is transported through a given area. [ 6 ] The region extending farther than about two wavelengths away from the source is called the far field . As the source emits electromagnetic radiation of a given wavelength, the far-field electric component of the wave E , the far-field magnetic component H , and the power density are related by the equations E = H × 377 and Pd = E × H.
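As a small worked example of the far-field relations quoted above (E/H = 377 Ω and Pd = E × H), the following sketch converts an arbitrary electric field strength into power density in both W/m2 and mW/cm2; the field value is chosen purely for illustration:

```python
Z0 = 377.0                     # characteristic impedance of free space, ohms

def power_density_from_E(E_volts_per_m):
    """Far-field power density from the electric field strength alone."""
    H = E_volts_per_m / Z0                 # A/m, since E/H = 377 ohms
    pd_w_per_m2 = E_volts_per_m * H        # Pd = E * H, in W/m^2
    pd_mw_per_cm2 = pd_w_per_m2 / 10.0     # 10 W/m^2 = 1 mW/cm^2
    return pd_w_per_m2, pd_mw_per_cm2

# Example: an (arbitrary) field strength of 61.4 V/m
w_m2, mw_cm2 = power_density_from_E(61.4)
print(f"{w_m2:.1f} W/m^2 = {mw_cm2:.2f} mW/cm^2")
```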
https://en.wikipedia.org/wiki/Surface_power_density
In immunology , surface probability is the amount of reflection of an antigen's secondary or tertiary structure to the outside of the molecule . [ 1 ] A greater surface probability means that an antigen is more likely to be immunogenic (i.e. to induce the formation of antibodies). This immunology article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Surface_probability
Transition metal oxides are compounds composed of oxygen atoms bound to transition metals . They are commonly utilized for their catalytic activity and semiconducting properties. Transition metal oxides are also frequently used as pigments in paints and plastics , most notably titanium dioxide . Transition metal oxides have a wide variety of surface structures which affect the surface energy of these compounds and influence their chemical properties. The relative acidity and basicity of the atoms present on the surface of metal oxides are also affected by the coordination of the metal cation and oxygen anion , which alter the catalytic properties of these compounds. For this reason, structural defects in transition metal oxides greatly influence their catalytic properties. The acidic and basic sites on the surface of metal oxides are commonly characterized via infrared spectroscopy and calorimetry , among other techniques. Transition metal oxides can also undergo photo-assisted adsorption and desorption that alter their electrical conductivity. One of the more researched properties of these compounds is their response to electromagnetic radiation , which makes them useful catalysts for redox reactions, isotope exchange and specialized surfaces.

The surface structures of transition metal oxides are not as well determined as their bulk crystal structures, which are well researched. A common approach is to assume the oxides are ideal crystals, in which the bulk atomic arrangement is maintained up to and including the surface plane. The surfaces are then generated by cleavage along planes of the bulk crystal structure. [ 1 ] However, when a crystal is cleaved along a particular plane, the positions of the surface ions differ from those in the bulk structure. Newly created surfaces tend to minimize the surface Gibbs energy , through reconstruction , to obtain the most thermodynamically stable surface. [ 2 ] The stability of these surface structures is evaluated in terms of surface polarity , the degree of coordinative unsaturation and defect sites.

The oxide crystal structure is based on a close-packed array of oxygen anions, with metal cations occupying interstitial sites. [ 1 ] The close-packed arrays, such as face-centered cubic (fcc) and hexagonal close-packed (hcp) , have both octahedral and tetrahedral interstices. [ 3 ] Many first-row transition metal monoxides (MO), from TiO to NiO , have the rocksalt structure . The rocksalt structure [ 4 ] is generated by filling all of the octahedral sites with cations in an fcc array of oxygen anions. [ 5 ] [ 6 ] The majority of transition metal dioxides (MO 2 ) have the rutile structure , seen to the right. Materials of this stoichiometry exist for Ti, Cr, V and Mn in the first transition series and for Zr to Pd in the second. The rutile structure is generated by filling half of the octahedral sites with cations in an hcp array of oxygen anions. [ 5 ] [ 6 ] Few transition metals can achieve the +6 oxidation state in an oxide, so oxides with the stoichiometry MO 3 are rare. [ 7 ] The structure of binary oxides can be predicted on the basis of the relative sizes of the metal and oxide ions and the filling of holes in a close-packed oxide lattice. However, structure prediction is more difficult for ternary oxides. The combination of two or more metals in an oxide creates many structural possibilities. Also, the stoichiometry of a ternary oxide may be changed by varying the proportions of the two components and their oxidation states .
For example, at least twenty ternary oxide phases are formed between strontium and vanadium, including SrV 2 O 6 , Sr 2 V 2 O 5 , SrVO 3 and Sr 2 VO 4 . [ 7 ] The structural chemistry of ternary and more complex oxides is an extensive subject, but there are a few structures that are widely adopted by ternary oxides, such as the perovskite structure. The perovskite structure , ABO 3 , is the most widespread ternary phase. The perovskite structure is frequently found for ternary oxides formed with one large (A) and one small (B) cation. In this structure, there is a simple cubic array of B cations, with the A cations occupying the centers of the cubes and the oxide ions sited at the centers of the 12 edges of the simple cube. [ 8 ] [ 5 ] [ 6 ] [ 7 ]

Since very little is known about the surface Gibbs energy of transition metal oxides, the polarity of the surface and the degree of coordinative unsaturation of a surface cation are used to compare the stabilities of different surface structures. [ 2 ] Defect sites can also have a huge impact on surface stability. When a crystal of a binary oxide is cleaved to generate two new surfaces, each solid's charge remains neutral. However, the structures of the two newly created surfaces may or may not be the same. If the structures are identical, the surface has no dipole and is considered a nonpolar surface. If the structures are different, the surface has a strong dipole and is considered a polar surface. Examples of nonpolar surfaces include the rocksalt (100) surface, the rutile (100), (110) and (001) surfaces, and the perovskite (100) surface. [ 2 ] An example of a polar surface is the rocksalt (111) surface. [ 2 ] In general, a polar surface is less stable than a nonpolar surface because a dipole moment increases the surface Gibbs energy. Also, oxygen-terminated polar surfaces are more stable than metal-terminated polar surfaces because oxygen ions are more polarizable , which lowers the surface energy. [ 9 ]

The degree of coordinative unsaturation of a surface cation measures the number of bonds involving the cation that have to be broken to form the surface. [ 2 ] As the degree of coordinative unsaturation increases, more bonds are broken and the metal cation becomes destabilized. The destabilization of the cation increases the surface Gibbs energy, which decreases the overall stability. For example, the rutile (110) surface is more stable than the rutile (100) and (001) surfaces because it has a lower degree of coordinative unsaturation. [ 2 ]

Defect sites can interfere with the stability of metal oxide surfaces, so it is important to locate these sites and determine methods to control them. Oxides exhibit an abundance of point defect sites . On rocksalt surfaces, oxygen and metal cation vacancies are the most common point defects. The vacancies are produced by electron bombardment and by annealing to extremely high temperatures. However, oxygen vacancies are more common and have a greater impact than metal cation vacancies. Oxygen vacancies cause the reduction of neighbouring surface cations, which significantly affects the electronic energy levels. [ 10 ] Steps and kinks are two other defects that affect rocksalt surfaces. These structural defects reduce the coordination number of the four adjacent surface cations from 5 to 4. [ 11 ] On rutile surfaces, the most common type of defect is the oxygen vacancy. There are two types of oxygen vacancy, which result from the removal of either a bridging O 2− ion or an in-plane O 2− ion.
Both of these reduce the coordination of the surface cations. [ 12 ] [ 10 ] [ 13 ]

The surface of a metal oxide consists of ordered arrays of acid–base centres. The cationic metal centres act as Lewis acid sites, while the anionic oxygen centres act as Lewis bases. Surface hydroxyl groups can serve as Brønsted acid or base sites, as they can give up or accept a proton. [ 14 ] The surface of most metal oxides is, to some extent, hydroxylated under normal conditions when water vapor is present. [ 15 ] The strength and the amount of Lewis and Brønsted acid–base sites determine the catalytic activity of many metal oxides. Because of this, there is a great need to develop standard methods for the characterization of the strength, concentration and distribution of surface acid–base sites. [ 14 ] The concepts of Lewis acid–base theory and Brønsted–Lowry acid–base theory can be applied to surfaces; however, there is no general theory that serves to determine surface acidity or basicity. [ 16 ] The qualitative treatment of Brønsted acid–base theory is based on the thermodynamic equilibrium constant (K a ) of acid–base reactions between individual molecules in homogeneous systems. This treatment requires measurement of the equilibrium concentrations of reactants and products. The presence of two phases also poses a problem for the quantitative acid–base characterization of solids. When an acid or base is adsorbed onto an oxide surface, it perturbs neighbouring acid–base sites. [ 17 ] This perturbation inevitably influences the relaxation of the surface and makes it impossible to have acid–base reactions at the surface that involve only a single surface site. For metal oxides, acidity and basicity depend on the charge and the radius of the metal ions as well as on the character of the metal–oxygen bond. The bond between oxygen and the metal is influenced by the coordination of the metal cations and the oxygen anions as well as by the filling of the metal d-orbitals. [ 16 ] The surface coordination is controlled by the face that is exposed and by surface relaxation. Structural defects can contribute greatly to the acidity or basicity, since sites of high unsaturation can arise from oxygen or metal ion vacancies.

Adsorption of an indicator molecule was first proposed by Hammett for ranking the strength of solid acids and bases. [ 14 ] This technique is only applicable to Brønsted sites on metal oxide surfaces. According to Hammett, the strength of a Brønsted surface site can be determined by the Hammett acidity function ,

H_0 = \mathrm{p}K_{\mathrm{BH^+}} + \log\frac{[\mathrm{B}]}{[\mathrm{BH^+}]},

where B is the basic indicator molecule. The concentration of Brønsted acid sites can be determined by titrating a suspension of the oxide with an acid/base indicator present. [ 14 ] However, this method is subject to many problems. For instance, only Brønsted acid sites can be quantified with this method. Metal oxide surfaces can have both Brønsted and Lewis acid sites present at the same time, which leads to a nonspecific interaction between the oxide and the indicator. [ 16 ] Also, as outlined in the theory section, the perturbation of neighbouring sites upon adsorption of indicator molecules compromises the integrity of this model. [ 17 ]

The adsorption of a very weakly basic or acidic probe molecule can serve to give a picture of the Brønsted and Lewis acid–base sites. Infrared spectroscopy of the surface sites and adsorbed molecules can then be used to monitor the change in the vibrational frequencies upon adsorption. [ 14 ]
A very weakly acidic probe molecule should be used to minimize the disturbance of neighboring sites, so that a more accurate measure of surface acidity or basicity can be obtained. A variety of probe molecules can be used, including ammonia , pyridine , acetonitrile , carbon monoxide [ 18 ] and carbon dioxide . [ 14 ] [ 16 ]

Two promising methods for describing the acid–base properties of metal oxides are calorimetric measurement of adsorption enthalpies and temperature-programmed desorption. [ 16 ] Measurement of the heat of adsorption of basic or acidic probe molecules gives a description of the acidic and basic sites on metal oxide surfaces. Temperature-programmed desorption provides information about acid–base properties by saturating the surface with a probe molecule and measuring the amount that desorbs from the surface as a function of temperature. The calorimetric method provides a quantitative thermodynamic scale of acid–base properties by measuring the heat of adsorption. Calorimetric methods can be considered to give a measure of the total acidity or basicity, as they do not discriminate between Lewis and Brønsted sites. However, when differential heats of adsorption are combined with other techniques, such as IR spectroscopy, the nature and distribution of the acid–base adsorption sites can be obtained. [ 19 ]

Zirconia exists in the monoclinic , tetragonal or cubic crystal system depending on the temperature. The surface acidity and basicity of the oxide depend on the crystal structure and surface orientation. [ 20 ] The surfaces of zirconia have hydroxyl groups, which can act as Brønsted acids or bases, and coordinatively unsaturated Zr 4+ –O 2− acid–base pairs, which contribute to its overall acid–base properties. [ 20 ] Adsorption studies have shown that monoclinic zirconia is more basic than tetragonal zirconia, as it forms stronger bonds with CO 2 . Adsorption of CO shows that the tetragonal phase has stronger Lewis acid sites than the monoclinic phase, but a lower concentration of them. [ 20 ]

The bulk electronic band structure of transition metal oxides consists of overlapping 2p orbitals from oxygen atoms, forming the lower-energy, highly populated valence band, while the sparsely populated, higher-energy conduction band consists of overlapping d orbitals of the transition metal cation. [ 21 ] In contrast to metals, which have a continuous band of electronic states, semiconductors have a band gap that prevents the recombination of electron/hole pairs that have been separated into the conduction band and valence band. The nanosecond-scale lifetimes of these electron/hole separations allow charge transfer to occur with a species adsorbed on the semiconductor surface. The potential of an acceptor must be more positive than the conduction band potential of the semiconductor in order for reduction of the species to take place. Conversely, the potential of the donor species must be more negative than that of the valence band of the semiconductor for oxidation of the species to occur. [ 22 ] Near the surface of a semiconducting metal oxide, the valence and conduction bands are of higher energy, causing the upward bending of the band energy shown in the band energy diagram, such that promotion of an electron from the valence band to the conduction band by light of energy greater than the band gap results in migration of the electron towards the bulk of the solid or to a counter electrode, while the hole left in the valence band moves towards the surface.
The increased concentration of holes near the surface facilitates electron transfer to the solid, as in the example shown in the figure of the oxidation of the redox couple D−/D. [ 2 ] In the absence of any mechanism to remove electrons from the bulk of the solid, irradiation continues to excite electrons to the conduction band, producing holes in the valence band. This leads to a reduction of the upward bending of the band energies near the surface and a subsequent increase in the availability of excited electrons for reduction reactions. [ 2 ]

The following equations are useful in describing the populations of the valence and conduction bands in terms of holes and electrons for the bulk solid:

n = N_c \exp\!\left(-\frac{E_c - E_f}{kT}\right), \qquad p = N_v \exp\!\left(-\frac{E_f - E_v}{kT}\right)

where n is the density of electrons in the bulk conduction band and p is the density of holes in the bulk valence band; E c is the lowest energy of the conduction band, E f is the Fermi energy (the electrochemical energy of the electrons), E v is the highest energy of the valence band, N c is a constant determined by the effective mass and mobility of an electron in the conduction band, and N v is the corresponding constant for a valence band hole, [ 2 ] while k is the Boltzmann constant and T is the absolute temperature in kelvins .

The use of quantum mechanical perturbation theory can aid in calculating the probability of an electronic transition taking place. The probability is proportional to the square of the amplitude of the radiation field, E 0 , and the square of the transition dipole moment |μ if |. [ 22 ] The quantum yield for an ideal system undergoing photocatalytic events is measured as the number of events occurring per photon absorbed. The typical assumption in determining the quantum yield is that all photons are absorbed on the semiconductor surface, and the quantum yield is then referred to as the apparent quantum yield. This assumption is necessary because of the difficulty of measuring the actual number of photons absorbed by the solid surface. The relation between the quantum yield, the rate of charge transfer, k CT , and the electron/hole recombination rate, k R , is given by [ 22 ]

\phi = \frac{k_{CT}}{k_{CT} + k_{R}}.

Photoinduced molecular transformations at transition metal oxide surfaces can be organized into two general classes. Photoexcitation of the adsorbate, which then reacts with the catalyst substrate, is classified as a catalyzed photoreaction. Photoexcitation of the catalyst followed by interaction of the catalyst with a ground-state reactant is classified as a sensitized photoreaction. [ 22 ] Both adsorption and desorption can be promoted by exposure of transition metal oxides to light, with the predominant process controlled by the experimental conditions. Illumination of TiO 2 or ZnO at room temperature and low pressure results in the adsorption of oxygen, while at high pressures illumination leads to photo-assisted desorption. At high temperatures the opposite effect is observed, with low pressure leading to desorption and high pressure causing adsorption. [ 2 ] [ 23 ] Kase et al. studied the photo-assisted chemisorption of NO on ZnO and found that under dark conditions a negligible amount of NO was adsorbed on the metal oxide, whereas under illumination ZnO irreversibly adsorbs NO, their sample showing no desorption after the irradiation was stopped. [ 23 ]
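A minimal numerical sketch of the carrier-density and quantum-yield expressions above follows; every parameter value is an illustrative assumption rather than data for any particular oxide:

```python
import numpy as np

k_B = 8.617e-5        # Boltzmann constant, eV/K
T = 298.0             # temperature, K

# Illustrative band parameters (not for any specific oxide).
N_c = 1e19            # effective density-of-states constant, conduction band, cm^-3
N_v = 1e19            # effective density-of-states constant, valence band, cm^-3
E_c, E_v = 3.0, 0.0   # band edges, eV (a 3 eV band gap)
E_f = 2.5             # Fermi level, eV (an n-type material)

# n = Nc*exp(-(Ec - Ef)/kT), p = Nv*exp(-(Ef - Ev)/kT)
n = N_c * np.exp(-(E_c - E_f) / (k_B * T))
p = N_v * np.exp(-(E_f - E_v) / (k_B * T))
print(f"n ~ {n:.2e} cm^-3, p ~ {p:.2e} cm^-3")

# Apparent quantum yield: phi = k_CT / (k_CT + k_R)
k_CT = 1e7            # charge-transfer rate, 1/s (assumed)
k_R = 9e7             # electron/hole recombination rate, 1/s (assumed)
phi = k_CT / (k_CT + k_R)
print(f"apparent quantum yield ~ {phi:.2f}")
```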
The process by which adsorption and desorption on metal oxide surfaces take place is related to the photogeneration of holes at the solid surface, which are believed to be trapped by hydroxyl groups on the surface of transition metal oxides. [ 22 ] [ 23 ] These trapped holes allow photo-excited electrons to be available for chemisorption. [ 23 ] Doping with a cation of either higher or lower valence can change the electronic properties of the metal oxide. Doping with a higher-valence cation typically results in an increase in n -type semi-conductivity, or raises the Fermi energy, while doping with a lower-valence cation should lower the Fermi energy and reduce the metal oxide's n -type semi-conductivity. [ 24 ] Doping means that a cation other than the transition metal cation that makes up the majority of the bulk is incorporated into the crystal structure of the semiconductor, either by replacing that cation or by occupying interstitial sites in the matrix. [ 24 ] Doping of ZnO with Li leads to greater photo-adsorption of oxygen, while doping with Ga or Al suppresses photo-adsorption of oxygen. Trends in photo-adsorption tend to follow trends in photo-oxidative catalysis, as shown by the high degree of photo-oxidative catalytic activity of TiO 2 and ZnO, whereas other transition metal oxides such as V 2 O 5 show neither a photo-oxidative catalytic response nor photo-activated adsorption of oxygen. [ 2 ] One of the most exciting and most studied uses of photocatalysis is the photo-oxidation of organics as it applies to environmental decontamination. [ 21 ] In contrast to gas-phase interactions with the solid surface, the large number of variables associated with the liquid–solid interface (i.e. solution pH, photocatalyst concentration, solvent effects, diffusion rate, etc.) calls for greater care to be taken to control these variables in order to produce consistent experimental results. [ 21 ] [ 22 ] A greater variety of reactions also becomes possible because solutions can stabilize charged species, making it possible for an electron from the metal oxide to be added to a neutral species, producing an anion that can react further, or for a hole to remove an electron, producing a cation that goes on to react further in solution. [ 2 ] One mechanism proposed for the oxidation of adsorbed organics from solution is the production of hydroxyl radicals by valence band holes that migrate to the surface and react with adsorbed hydroxyl groups, resulting in a very strongly oxidizing radical. The identification of hydroxylated oxidation intermediates and of hydroxyl radicals supports this proposed mechanism; however, it does not rule out direct oxidation of the organic reactant by the valence band holes, because similar intermediates would be expected in either case. [ 22 ] Some photo-oxidation reactions are shown below. In photoreduction the promoted electron of the metal oxide is accepted by an acceptor species. In the case of CO 2 reduction, shown in the table below, the absence of dissolved oxygen in the aqueous system favors the reduction of protons to form hydrogen radicals, which then go on to reduce CO 2 to HCOOH. HCOOH can then be further reduced to HCOH and water. Further reduction leads to the production of CH 3 • radicals, which can combine in a number of ways to produce CH 4 or C 2 H 6 , etc. [ 25 ] Metal oxides excel at catalyzing gas-phase reactions by photo-activation as well as by thermal activation of the catalyst.
Oxidation of hydrocarbons, alcohols, carbon monoxide, and ammonia occurs when the oxide is illuminated with light of energy greater than its band gap. [ 22 ] [ 2 ] Homophasic and heterophasic light-induced oxygen isotope exchange has also been observed over TiO 2 and ZnO. Homophasic isotope exchange is the production of 2 16 O 18 O (g) from 16 O 2 (g) and 18 O 2 (g) . Heterophasic isotope exchange is the chemisorption of an oxygen isotope onto the lattice of the metal oxide (lat) and the replacement of one of the oxygens in the gas phase with a lattice oxygen, as in the reaction 18 O 2 (g) + 16 O (lat) → 16 O 18 O (g) + 18 O (lat) . [ 2 ]
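As a quick numerical aside, the band-gap criterion for photoactivation can be turned into a threshold excitation wavelength via λ = hc/E g . The band-gap values in the sketch below are commonly quoted approximate figures used purely for illustration, not numbers taken from the sources cited above.

```python
H_C_EV_NM = 1239.84  # h*c expressed in eV*nm

def threshold_wavelength_nm(band_gap_ev):
    """Longest wavelength able to excite an electron across the band gap."""
    return H_C_EV_NM / band_gap_ev

# Approximate, commonly quoted band gaps (illustrative values only).
for oxide, gap in (("TiO2 (anatase)", 3.2), ("ZnO", 3.3), ("V2O5", 2.3)):
    print(f"{oxide}: Eg ~ {gap} eV -> threshold ~ {threshold_wavelength_nm(gap):.0f} nm")
```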
https://en.wikipedia.org/wiki/Surface_properties_of_transition_metal_oxides
Surface reconstruction refers to the process by which atoms at the surface of a crystal assume a different structure than that of the bulk. Surface reconstructions are important in that they help in the understanding of surface chemistry for various materials, especially in the case where another material is adsorbed onto the surface. In an ideal infinite crystal, the equilibrium position of each individual atom is determined by the forces exerted by all the other atoms in the crystal, resulting in a periodic structure. If a surface is introduced to the surroundings by terminating the crystal along a given plane, then these forces are altered, changing the equilibrium positions of the remaining atoms. This is most noticeable for the atoms at or near the surface plane, as they now only experience inter-atomic forces from one direction. This imbalance results in the atoms near the surface assuming positions with different spacing and/or symmetry from the bulk atoms, creating a different surface structure. This change in equilibrium positions near the surface can be categorized as either a relaxation or a reconstruction. Relaxation refers to a change in the position of surface atoms relative to the bulk positions, while the bulk unit cell is preserved at the surface. Often this is a purely normal relaxation: that is, the surface atoms move in a direction normal to the surface plane, usually resulting in a smaller-than-usual inter-layer spacing. This makes intuitive sense, as a surface layer that experiences no forces from the open region can be expected to contract towards the bulk. Most metals experience this type of relaxation. [ 1 ] Some surfaces also experience relaxations in the lateral direction as well as the normal, so that the upper layers become shifted relative to layers further in, in order to minimize the positional energy. Reconstruction refers to a change in the two-dimensional structure of the surface layers, in addition to changes in the position of the entire layer. For example, in a cubic material the surface layer might re-structure itself to assume a smaller two-dimensional spacing between the atoms, as lateral forces from adjacent layers are reduced. The general symmetry of a layer might also change, as in the case of the Pt ( 100 ) surface, which reconstructs from a cubic to a hexagonal structure. [ 2 ] A reconstruction can affect one or more layers at the surface and can either conserve the total number of atoms in a layer (a conservative reconstruction) or have a greater or lesser number than in the bulk (a non-conservative reconstruction). The relaxations and reconstructions considered above would describe the ideal case of atomically clean surfaces in vacuum, in which the interaction with another medium is not considered. However, reconstructions can also be induced or affected by the adsorption of other atoms onto the surface, as the interatomic forces are changed. These reconstructions can assume a variety of forms when the detailed interactions between different types of atoms are taken into account, but some general principles can be identified. The reconstruction of a surface with adsorption will depend on the following factors: Composition plays an important role in that it determines the form that the adsorption process takes, whether by relatively weak physisorption through van der Waals interactions or stronger chemisorption through the formation of chemical bonds between the substrate and adsorbate atoms. 
Surfaces that undergo chemisorption generally show more extensive reconstructions than those that undergo physisorption, as the breaking and formation of bonds between the surface atoms alter the interaction of the substrate atoms as well as the adsorbate. Different reconstructions can also occur depending on the substrate and adsorbate coverages and the ambient conditions, as the equilibrium positions of the atoms are changed depending on the forces exerted. One example of this occurs in the case of In adsorbed on the Si (111) surface, in which the two differently reconstructed phases Si(111)√3×√3-In and Si(111)√31×√31-In (in Wood's notation, see below) can actually coexist under certain conditions. These phases are distinguished by the In coverage in the different regions and occur for certain ranges of the average In coverage. [ 3 ] In general, the change in a surface layer's structure due to a reconstruction can be completely specified by a matrix notation proposed by Park and Madden. [ 4 ] If a and b are the basic translation vectors of the two-dimensional structure in the bulk, and a s and b s are the basic translation vectors of the superstructure or reconstructed plane, then the relationship between the two sets of vectors can be described by a s = G 11 a + G 12 b and b s = G 21 a + G 22 b , so that the two-dimensional reconstruction can be described by the 2×2 matrix G formed from the coefficients G 11 , G 12 , G 21 and G 22 . [ 4 ] Note that this system does not describe any relaxation of the surface layers relative to the bulk inter-layer spacing, but only describes the change in the individual layer's structure. Surface reconstructions are more commonly given in Wood's notation, which reduces the matrix above to a more compact form such as X( hkl ) ( m × n )R φ , [ 5 ] describing the reconstruction of the ( hkl ) plane (given by its Miller indices ) of a material X. In this notation, the surface unit cell is given as multiples of the nonreconstructed surface unit cell with the unit cell vectors a and b . For example, a calcite(104) (2×1) reconstruction means that the unit cell is twice as long in direction a and has the same length in direction b . If the unit cell is rotated with respect to the unit cell of the nonreconstructed surface, the angle φ is given in addition (usually in degrees). This notation is often used to describe reconstructions concisely, but it does not directly indicate changes in the layer symmetry (for example, square to hexagonal). Determination of a material's surface reconstruction requires a measurement of the positions of the surface atoms that can be compared to a measurement of the bulk structure. While the bulk structure of crystalline materials can usually be determined by using a diffraction experiment to locate the Bragg peaks , any signal from a reconstructed surface is obscured due to the relatively tiny number of atoms involved. Special techniques are thus required to measure the positions of the surface atoms, and these generally fall into two categories: diffraction-based methods adapted for surface science, such as low-energy electron diffraction (LEED) or Rutherford backscattering spectroscopy , and atomic-scale probe techniques such as scanning tunneling microscopy (STM) or atomic force microscopy . Of these, STM has been most commonly used in recent history due to its very high resolution and ability to resolve aperiodic features.
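To make the matrix and Wood's notations above concrete, the short Python sketch below applies a superstructure matrix to bulk translation vectors; the lattices and matrices are generic textbook examples (a (2×1) cell on a square lattice and a (√3×√3)R30° cell on a hexagonal lattice), not data for any specific material discussed here.

```python
import math

def superstructure_vectors(G, a, b):
    """Park-Madden style transformation: the rows of G give the reconstructed
    translation vectors as combinations of the bulk vectors a and b."""
    a_s = (G[0][0] * a[0] + G[0][1] * b[0], G[0][0] * a[1] + G[0][1] * b[1])
    b_s = (G[1][0] * a[0] + G[1][1] * b[0], G[1][0] * a[1] + G[1][1] * b[1])
    return a_s, b_s

def length(v):
    return math.hypot(v[0], v[1])

# Square bulk lattice with a (2x1) reconstruction: a_s = 2a, b_s = b.
a, b = (1.0, 0.0), (0.0, 1.0)
a_s, b_s = superstructure_vectors([[2, 0], [0, 1]], a, b)
print("(2x1) cell lengths:", length(a_s), length(b_s))

# Hexagonal bulk lattice (120 deg between a and b); G = [[2, 1], [-1, 1]] gives the
# (sqrt(3) x sqrt(3))R30 cell: both new vectors are sqrt(3) times longer, rotated 30 deg.
a = (1.0, 0.0)
b = (-0.5, math.sqrt(3) / 2)
a_s, b_s = superstructure_vectors([[2, 1], [-1, 1]], a, b)
print("sqrt3 cell lengths:", round(length(a_s), 3), round(length(b_s), 3))
print("rotation of a_s (deg):", round(math.degrees(math.atan2(a_s[1], a_s[0])), 1))
```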
To allow a better understanding of the variety of reconstructions in different systems, examine the following examples of reconstructions in metallic, semiconducting and insulating materials. A very well known example of surface reconstruction occurs in silicon , a semiconductor commonly used in a variety of computing and microelectronics applications. With a diamond-like face-centered cubic (fcc) lattice, it exhibits several different well-ordered reconstructions depending on temperature and on which crystal face is exposed. When Si is cleaved along the (100) surface, the ideal diamond-like structure is interrupted and results in a 1×1 square array of surface Si atoms. Each of these has two dangling bonds remaining from the diamond structure, creating a surface that can obviously be reconstructed into a lower-energy structure. The observed reconstruction is a 2×1 periodicity, explained by the formation of dimers , which consist of paired surface atoms, decreasing the number of dangling bonds by a factor of two. These dimers reconstruct in rows with a high long-range order, resulting in a surface of filled and empty rows. LEED studies and calculations also indicate that relaxations as deep as five layers into the bulk are also likely to occur. [ 6 ] The Si (111) structure, by comparison, exhibits a much more complex reconstruction. Cleavage along the (111) surface at low temperatures results in another 2×1 reconstruction, differing from the (100) surface by forming long π-bonded chains in the first and second surface layers. However, when heated above 400 °C, this structure converts irreversibly to the more complicated 7×7 reconstruction. In addition, a disordered 1×1 structure is regained at temperatures above 850 °C, which can be converted back to the 7×7 reconstruction by slow cooling. The 7×7 reconstruction is modeled according to a dimer-adatom-stacking fault (DAS) model constructed by many research groups over a period of 25 years. Extending through the five top layers of the surface, the unit cell of the reconstruction contains 12 adatoms and 2 triangular subunits, 9 dimers, and a deep corner hole that extends to the fourth and fifth layers. This structure was gradually inferred from LEED and RHEED measurements and calculation, and was finally resolved in real space by Gerd Binnig , Heinrich Rohrer , Ch. Gerber and E. Weibel as a demonstration of the STM, which was developed by Binnig and Rohrer at IBM's Zurich Research Laboratory. [ 7 ] The full structure with positions of all reconstructed atoms has also been confirmed by massively parallel computation. [ 8 ] A number of similar DAS reconstructions have also been observed on Si (111) in non-equilibrium conditions in a (2 n + 1)×(2 n + 1) pattern and include 3×3, 5×5 and 9×9 reconstructions. The preference for the 7×7 reconstruction is attributed to an optimal balance of charge transfer and stress, but the other DAS-type reconstructions can be obtained under conditions such as rapid quenching from the disordered 1×1 structure. [ 9 ] The structure of the Au (100) surface is an interesting example of how a cubic structure can be reconstructed into a different symmetry, as well as the temperature dependence of a reconstruction. In the bulk gold is an (fcc) metal, with a surface structure reconstructed into a distorted hexagonal phase. This hexagonal phase is often referred to as a (28×5) structure, distorted and rotated by about 0.81° relative to the [011] crystal direction. 
Molecular-dynamics simulations indicate that this rotation occurs to partly relieve a compressive strain that develops in the formation of the hexagonal reconstruction, which is nevertheless favored thermodynamically over the unreconstructed structure. However, this rotation disappears in a phase transition at approximately T = 970 K, above which an unrotated hexagonal structure is observed. [ 10 ] A second phase transition is observed at T = 1170 K, at which an order–disorder transition occurs as entropic effects dominate at high temperature. The high-temperature disordered phase is explained as a quasi-melted phase in which only the surface becomes disordered, between 1170 K and the bulk melting temperature of 1337 K. This phase is not completely disordered, however, as the melting process allows the effects of the substrate interactions to become important again in determining the surface structure. This results in a recovery of the square (1×1) structure within the disordered phase, which makes sense because at high temperatures the energy reduction provided by the hexagonal reconstruction can be presumed to be less significant. [ 10 ]
https://en.wikipedia.org/wiki/Surface_reconstruction
Surface roughness or simply roughness is the quality of a surface of not being smooth and it is hence linked to human ( haptic ) perception of the surface texture. From a mathematical perspective it is related to the spatial variability structure of surfaces, and inherently it is a multiscale property. It has different interpretations and definitions depending on the disciplines considered. In surface metrology , surface roughness is a component of surface finish (surface texture) . It is quantified by the deviations in the direction of the normal vector of a real surface from its ideal form. If these deviations are large, the surface is rough; if they are small, the surface is smooth. Roughness is typically assumed to be the high-frequency, short-wavelength component of a measured surface. However, in practice it is often necessary to know both the amplitude and frequency to ensure that a surface is fit for a purpose. Roughness plays an important role in determining how a real object will interact with its environment. In tribology , rough surfaces usually wear more quickly and have higher friction coefficients than smooth surfaces. Roughness is often a good predictor of the performance of a mechanical component, since irregularities on the surface may form nucleation sites for cracks or corrosion. On the other hand, roughness may promote adhesion . Generally speaking, rather than scale specific descriptors, cross-scale descriptors such as surface fractality provide more meaningful predictions of mechanical interactions at surfaces including contact stiffness [ 1 ] and static friction . [ 2 ] Although a high roughness value is often undesirable, it can be difficult and expensive to control in manufacturing . For example, it is difficult and expensive to control surface roughness of fused deposition modelling (FDM) manufactured parts. [ 3 ] Decreasing the roughness of a surface usually increases its manufacturing cost. This often results in a trade-off between the manufacturing cost of a component and its performance in application. Roughness can be measured by manual comparison against a "surface roughness comparator" (a sample of known surface roughness), but more generally a surface profile measurement is made with a profilometer . These can be of the contact variety (typically a diamond stylus) or optical (e.g.: a white light interferometer or laser scanning confocal microscope ). However, controlled roughness can often be desirable. For example, a gloss surface can be too shiny to the eye and too slippery to the finger (a touchpad is a good example) so a controlled roughness is required. This is a case where both amplitude and frequency are very important. Surface structure plays a key role in governing contact mechanics , [ 1 ] that is to say the mechanical behavior exhibited at an interface between two solid objects as they approach each other and transition from conditions of non-contact to full contact. In particular, normal contact stiffness is governed predominantly by asperity structures (roughness, surface slope and fractality) and material properties. In terms of engineering surfaces, roughness is considered to be detrimental to part performance. As a consequence, most manufacturing prints establish an upper limit on roughness, but not a lower limit. An exception is in cylinder bores where oil is retained in the surface profile and a minimum roughness is required. [ 4 ] Surface structure is often closely related to the friction and wear properties of a surface. 
[ 2 ] A surface with a higher fractal dimension , a large R a value, or a positive R sk will usually have somewhat higher friction and wear quickly. The peaks in the roughness profile are not always the points of contact. The form and waviness (i.e. both amplitude and frequency) must also be considered. A roughness value can be calculated either on a profile (line) or on a surface (area). The profile roughness parameters ( R a , R q , ...) are more common. The area roughness parameters ( S a , S q , ...) give more significant values. The profile roughness parameters [ 5 ] are included in the British standard BS EN ISO 4287:2000, identical to the ISO 4287:1997 standard. [ 6 ] The standard is based on the ″M″ (mean line) system. There are many different roughness parameters in use, but R a is by far the most common, though this is often for historical reasons and not for particular merit, as early roughness meters could only measure R a . Other common parameters include R z , R q , and R sk . Some parameters are used only in certain industries or within certain countries. For example, the R k family of parameters is used mainly for cylinder bore linings, and the Motif parameters are used primarily in the French automotive industry. [ 7 ] The MOTIF method provides a graphical evaluation of a surface profile without filtering waviness from roughness. A motif consists of the portion of a profile between two peaks, and the final combinations of these motifs eliminate ″insignificant″ peaks and retain ″significant″ ones. Note that R a is a dimensional quantity, usually expressed in micrometres or microinches . Since these parameters reduce all of the information in a profile to a single number, great care must be taken in applying and interpreting them. Small changes in how the raw profile data is filtered, how the mean line is calculated, and the physics of the measurement can greatly affect the calculated parameter. With modern digital equipment, the scan can be evaluated to make sure there are no obvious glitches that skew the values. Because it may not be obvious to many users what each of the measurements really means, a simulation tool allows a user to adjust key parameters, visualizing how surfaces which are obviously different to the human eye are differentiated by the measurements. For example, R a fails to distinguish between two surfaces where one is composed of peaks on an otherwise smooth surface and the other is composed of troughs of the same amplitude. Such tools can be found in app format. [ 8 ] By convention every 2D roughness parameter is a capital R followed by additional characters in the subscript. The subscript identifies the formula that was used, and the R means that the formula was applied to a 2D roughness profile. Different capital letters imply that the formula was applied to a different profile. For example, R a is the arithmetic average of the roughness profile, P a is the arithmetic average of the unfiltered raw profile, and S a is the arithmetic average of the 3D roughness.
Each of the formulas listed in the tables assumes that the roughness profile has been filtered from the raw profile data and that the mean line has been calculated. The roughness profile contains n ordered, equally spaced points along the trace, and y i is the vertical distance from the mean line to the i th data point. Height is assumed to be positive in the up direction, away from the bulk material. Amplitude parameters characterize the surface based on the vertical deviations of the roughness profile from the mean line. Many of them are closely related to the parameters found in statistics for characterizing population samples. For example, R a is the arithmetic average of the filtered roughness profile, determined from deviations about the center line within the evaluation length, and R t is the range of the collected roughness data points. The arithmetic average roughness, R a , is the most widely used one-dimensional roughness parameter. R a values are also commonly specified by roughness grade numbers (N1 to N12), each grade corresponding to a fixed R a value. Slope parameters describe characteristics of the slope of the roughness profile. Spacing and counting parameters describe how often the profile crosses certain thresholds. These parameters are often used to describe repetitive roughness profiles, such as those produced by turning on a lathe . Other "frequency" parameters are S m , λ a and λ q . S m is the mean spacing between peaks. Just as with real mountains, it is important to define a "peak". For S m the surface must have dipped below the mean surface before rising again to a new peak. The average wavelength λ a and the root mean square wavelength λ q are derived from Δ a . When trying to understand a surface that depends on both amplitude and frequency, it is not obvious which pair of metrics optimally describes the balance, so a statistical analysis of pairs of measurements can be performed (e.g. R z and λ a , or R a and S m ) to find the strongest correlation. Some parameters are based on the bearing ratio curve (also known as the Abbott–Firestone curve); these include the R k family of parameters. The mathematician Benoît Mandelbrot has pointed out the connection between surface roughness and fractal dimension . [ 11 ] The description provided by a fractal at the microroughness level may allow control of the material properties and of the type of chip formation that occurs. But fractals cannot provide a full-scale representation of a typical machined surface affected by tool feed marks; they ignore the geometry of the cutting edge (J. Paulo Davim, 2010, op. cit. ). Fractal descriptors of surfaces have an important role to play in correlating physical surface properties with surface structure. Across multiple fields, connecting physical, electrical and mechanical behavior with conventional surface descriptors of roughness or slope has been challenging. By employing measures of surface fractality together with measures of roughness or surface shape, certain interfacial phenomena, including contact mechanics, friction and electrical contact resistance, can be better interpreted with respect to surface structure. [ 12 ] Areal roughness parameters are defined in the ISO 25178 series. The resulting values are Sa, Sq, Sz, ...
Many optical measurement instruments are able to measure the surface roughness over an area. Area measurements are also possible with contact measurement systems. Multiple, closely spaced 2D scans are taken of the target area. These are then digitally stitched together using relevant software, resulting in a 3D image and accompanying areal roughness parameters.
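As a minimal sketch of how the profile amplitude parameters defined above are evaluated, the following Python snippet computes R a , R q , R t and the skewness R sk from an equally spaced, mean-line-referenced trace; the two example profiles are invented data chosen to show that surfaces built from peaks and from troughs of the same amplitude share the same R a but are separated by the sign of R sk .

```python
import math

def roughness_params(y):
    """Amplitude parameters from a filtered roughness profile y (heights
    measured from the mean line at equally spaced points)."""
    n = len(y)
    mean = sum(y) / n
    y = [v - mean for v in y]                        # re-reference to the mean line
    ra = sum(abs(v) for v in y) / n                  # arithmetic average roughness
    rq = math.sqrt(sum(v * v for v in y) / n)        # root mean square roughness
    rt = max(y) - min(y)                             # total height of the profile
    rsk = (sum(v ** 3 for v in y) / n) / rq ** 3     # skewness of the height distribution
    return ra, rq, rt, rsk

# Two example profiles with the same Ra: isolated peaks vs. isolated troughs.
peaks   = [0, 0, 0, 2.0, 0, 0, 0, 2.0, 0, 0]
troughs = [0, 0, 0, -2.0, 0, 0, 0, -2.0, 0, 0]
for name, profile in (("peaks", peaks), ("troughs", troughs)):
    ra, rq, rt, rsk = roughness_params(profile)
    print(f"{name}: Ra={ra:.2f} Rq={rq:.2f} Rt={rt:.2f} Rsk={rsk:+.2f}")
```

Both profiles return identical R a , R q and R t , while the sign of R sk separates the peak-dominated surface from the trough-dominated one.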
https://en.wikipedia.org/wiki/Surface_roughness
Surface runoff (also known as overland flow or terrestrial runoff ) is the unconfined flow of water over the ground surface, in contrast to channel runoff (or stream flow ). It occurs when excess rainwater , stormwater , meltwater , or other sources, can no longer sufficiently rapidly infiltrate in the soil . This can occur when the soil is saturated by water to its full capacity, and the rain arrives more quickly than the soil can absorb it. Surface runoff often occurs because impervious areas (such as roofs and pavement ) do not allow water to soak into the ground. Furthermore, runoff can occur either through natural or human-made processes. [ 1 ] Surface runoff is a major component of the water cycle . It is the primary agent of soil erosion by water . [ 2 ] [ 3 ] The land area producing runoff that drains to a common point is called a drainage basin . Runoff that occurs on the ground surface before reaching a channel can be a nonpoint source of pollution , as it can carry human-made contaminants or natural forms of pollution (such as rotting leaves). Human-made contaminants in runoff include petroleum , pesticides , fertilizers and others. [ 4 ] Much agricultural pollution is exacerbated by surface runoff, leading to a number of down stream impacts, including nutrient pollution that causes eutrophication . In addition to causing water erosion and pollution, surface runoff in urban areas is a primary cause of urban flooding , which can result in property damage, damp and mold in basements , and street flooding. Surface runoff is defined as precipitation (rain, snow, sleet, or hail [ 5 ] ) that reaches a surface stream without ever passing below the soil surface. [ 6 ] It is distinct from direct runoff , which is runoff that reaches surface streams immediately after rainfall or melting snowfall and excludes runoff generated by the melting of snowpack or glaciers. [ 7 ] Snow and glacier melt occur only in areas cold enough for these to form permanently. Typically snowmelt will peak in the spring [ 8 ] and glacier melt in the summer, [ 9 ] leading to pronounced flow maxima in rivers affected by them. [ 10 ] The determining factor of the rate of melting of snow or glaciers is both air temperature and the duration of sunlight. [ 11 ] In high mountain regions, streams frequently rise on sunny days and fall on cloudy ones for this reason. In areas where there is no snow, runoff will come from rainfall. However, not all rainfall will produce runoff because storage from soils can absorb light showers. On the extremely ancient soils of Australia and Southern Africa , [ 12 ] proteoid roots with their extremely dense networks of root hairs can absorb so much rainwater as to prevent runoff even with substantial amounts of rainfall. In these regions, even on less infertile cracking clay soils , high amounts of rainfall and potential evaporation are needed to generate any surface runoff, leading to specialised adaptations to extremely variable (usually ephemeral) streams. This occurs when the rate of rainfall on a surface exceeds the rate at which water can infiltrate the ground, and any depression storage has already been filled. This is also called Hortonian overland flow (after Robert E. Horton ), [ 13 ] or unsaturated overland flow. [ 14 ] This more commonly occurs in arid and semi-arid regions, where rainfall intensities are high and the soil infiltration capacity is reduced because of surface sealing , or in urban areas where pavements prevent water from infiltrating. 
[ 15 ] When the soil is saturated and the depression storage filled, and rain continues to fall, the rainfall will immediately produce surface runoff. The level of antecedent soil moisture is one factor affecting the time until the soil becomes saturated. This runoff is called saturation excess overland flow, [ 15 ] saturated overland flow, [ 16 ] or Dunne runoff. [ 17 ] Soil retains a degree of moisture after a rainfall . This residual moisture affects the soil's infiltration capacity . During the next rainfall event, the infiltration capacity will cause the soil to be saturated at a different rate. The higher the level of antecedent soil moisture, the more quickly the soil becomes saturated. Once the soil is saturated, runoff occurs. Therefore, surface runoff is a significant factor in controlling soil moisture after medium- and low-intensity storms. [ 18 ] After water infiltrates the soil on an up-slope portion of a hill, the water may flow laterally through the soil, and exfiltrate (flow out of the soil) closer to a channel. This is called subsurface return flow or throughflow . As it flows, the amount of runoff may be reduced in a number of possible ways: a small portion of it may evapotranspire ; water may become temporarily stored in microtopographic depressions; and a portion of it may infiltrate as it flows overland. Any remaining surface water eventually flows into a receiving water body such as a river , lake , estuary or ocean . [ 19 ] Urbanization increases surface runoff by creating more impervious surfaces such as pavement and buildings that do not allow percolation of the water down through the soil to the aquifer . Water is instead forced directly into streams or storm water runoff drains , where erosion and siltation can be major problems, even when flooding is not. Increased runoff reduces groundwater recharge, thus lowering the water table and making droughts worse, especially for farmers and others who depend on water wells . [ 20 ] When anthropogenic contaminants are dissolved or suspended in runoff, the human impact is expanded to create water pollution . This pollutant load can reach various receiving waters such as streams, rivers, lakes, estuaries and oceans, with resultant water chemistry changes to these water systems and their related ecosystems. [ 21 ] As humans continue to alter the climate through the addition of greenhouse gases to the atmosphere, precipitation patterns are expected to change as the atmospheric capacity for water vapor increases. This will have direct consequences on runoff amounts. [ 22 ] Urban runoff is surface runoff of rainwater, landscape irrigation, and car washing [ 23 ] created by urbanization . Impervious surfaces ( roads , parking lots and sidewalks ) are constructed during land development . During rain , storms, and other precipitation events, these surfaces (built from materials such as asphalt and concrete ), along with rooftops , carry polluted stormwater to storm drains , instead of allowing the water to percolate through soil . [ 24 ] This causes lowering of the water table (because groundwater recharge is lessened) and flooding, since the amount of water that remains on the surface is greater. [ 25 ] [ 26 ] Most municipal storm sewer systems discharge untreated stormwater to streams , rivers , and bays . This excess water can also make its way into people's properties through basement backups and seepage through building walls and floors.
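The infiltration-excess (Hortonian) mechanism described earlier, whose extreme case is the nearly impervious urban surface, can be illustrated with Horton's exponential infiltration-capacity equation. The parameter values below (initial and final infiltration capacities, decay constant, rainfall intensities) are assumptions chosen for illustration, not measured data.

```python
import math

def horton_capacity(t_hr, f0=60.0, fc=10.0, k=2.0):
    """Horton infiltration capacity (mm/h) at time t_hr after rainfall begins:
    f(t) = fc + (f0 - fc) * exp(-k * t)."""
    return fc + (f0 - fc) * math.exp(-k * t_hr)

def infiltration_excess(rain_mm_per_hr, duration_hr=3.0, dt=0.05):
    """Accumulate Hortonian runoff: whenever rainfall intensity exceeds the
    infiltration capacity, the excess becomes surface runoff."""
    runoff = 0.0
    t = 0.0
    while t < duration_hr:
        excess = max(rain_mm_per_hr - horton_capacity(t), 0.0)
        runoff += excess * dt
        t += dt
    return runoff  # total runoff depth in mm

# A light shower produces no runoff, while intense storms exceed the capacity.
for intensity in (8.0, 25.0, 50.0):
    print(f"{intensity:5.1f} mm/h rain -> {infiltration_excess(intensity):6.1f} mm runoff")
```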
Industrial stormwater is runoff from precipitation (rain, snow, sleet, freezing rain, or hail) that lands on industrial sites (e.g. manufacturing facilities, mines, airports). This runoff is often polluted by materials that are handled or stored on the sites, and the facilities are subject to regulations to control the discharges. [ 27 ] [ 28 ] To manage industrial stormwater effectively, facilities use best management practices (BMPs) that aim both to prevent pollutants from entering the runoff and to treat water before it is released from the site. Common preventive steps include maintaining clean workspaces, conducting routine equipment checks, storing materials properly, preventing spills, and training staff on pollution prevention techniques. [ 29 ] Surface runoff can cause erosion of the Earth's surface; eroded material may be deposited a considerable distance away. There are four main types of soil erosion by water : splash erosion, sheet erosion, rill erosion and gully erosion. Splash erosion is the result of mechanical collision of raindrops with the soil surface: soil particles which are dislodged by the impact then move with the surface runoff. Sheet erosion is the overland transport of sediment by runoff without a well-defined channel. Soil surface roughness may cause runoff to become concentrated into narrower flow paths: as these incise, the small but well-defined channels which are formed are known as rills. These channels can be as small as one centimeter wide or as large as several meters. If runoff continues to incise and enlarge rills, they may eventually grow to become gullies. Gully erosion can transport large amounts of eroded material in a short time period. Reduced crop productivity usually results from erosion, and these effects are studied in the field of soil conservation . The soil particles carried in runoff vary in size from about 0.001 millimeter to 1.0 millimeter in diameter. Larger particles settle over short transport distances, whereas small particles can be carried over long distances suspended in the water column . Erosion of silty soils that contain smaller particles generates turbidity and diminishes light transmission, which disrupts aquatic ecosystems . Entire sections of countries have been rendered unproductive by erosion. On the high central plateau of Madagascar , approximately ten percent of that country's land area, virtually the entire landscape is devoid of vegetation , with erosive gully furrows typically in excess of 50 meters deep and one kilometer wide. Shifting cultivation is a farming system which sometimes incorporates the slash and burn method in some regions of the world. Erosion causes loss of the fertile topsoil, reducing soil fertility and the quality of agricultural produce. Modern industrial farming is another major cause of erosion. Over a third of the U.S. Corn Belt has completely lost its topsoil . [ 31 ] Switching to no-till practices would reduce soil erosion from U.S. agricultural fields by more than 70 percent. [ 32 ] The principal environmental issues associated with runoff are the impacts to surface water, groundwater and soil through transport of water pollutants to these systems. Ultimately these consequences translate into human health risk, ecosystem disturbance and aesthetic impact to water resources. Some of the contaminants that create the greatest impact to surface waters arising from runoff are petroleum substances, herbicides and fertilizers .
Quantitative uptake by surface runoff of pesticides and other contaminants has been studied since the 1960s, and early on contact of pesticides with water was known to enhance phytotoxicity . [ 33 ] In the case of surface waters, the impacts translate to water pollution , since the streams and rivers have received runoff carrying various chemicals or sediments. When surface waters are used as potable water supplies, they can be compromised regarding health risks and drinking water aesthetics (that is, odor, color and turbidity effects). Contaminated surface waters risk altering the metabolic processes of the aquatic species that they host; these alterations can lead to death, such as fish kills , or alter the balance of populations present. Other specific impacts are on animal mating, spawning, egg and larvae viability, juvenile survival and plant productivity. Some research shows that surface runoff of pesticides, such as DDT , can alter the sex of fish species, transforming male into female fish. [ 34 ] Surface runoff occurring within forests can supply lakes with high loads of mineral nitrogen and phosphorus, leading to eutrophication . Runoff waters within coniferous forests are also enriched with humic acids and can lead to humification of water bodies. [ 35 ] Additionally, high-standing, young islands in the tropics and subtropics can undergo high soil erosion rates and also contribute large material fluxes to the coastal ocean. Such land-derived runoff of sediment, nutrients, carbon, and contaminants can have large impacts on global biogeochemical cycles and marine and coastal ecosystems. [ 36 ] In the case of groundwater, the main issue is contamination of drinking water, if the aquifer is abstracted for human use. Regarding soil contamination , runoff waters can have two important pathways of concern. Firstly, runoff water can extract soil contaminants and carry them in the form of water pollution to even more sensitive aquatic habitats. Secondly, runoff can deposit contaminants on pristine soils, creating health or ecological consequences. The other context of agricultural issues involves the transport of agricultural chemicals (nitrates, phosphates, pesticides , herbicides, etc.) via surface runoff. This occurs when chemical use is excessive or poorly timed with respect to high precipitation. The resulting contaminated runoff represents not only a waste of agricultural chemicals, but also an environmental threat to downstream ecosystems. Pine straw is often used to protect soil from erosion and suppress weed growth. [ 37 ] However, harvesting these crops may increase soil erosion. Surface run-off has significant economic effects, and pine straw is a cost-effective way of dealing with it. Moreover, surface run-off can be reused to support the growth of elephant grass; in Nigeria , elephant grass is considered an economical way in which surface run-off and erosion can be reduced. [ 38 ] China has also suffered significant impacts of surface run-off on many of its economically important crops, such as vegetables, and has therefore implemented systems that reduce the loss of nutrients (nitrogen and phosphorus) from the soil. [ 39 ] Flooding occurs when a watercourse is unable to convey the quantity of runoff flowing downstream. The frequency with which this occurs is described by a return period .
Flooding is a natural process, which maintains ecosystem composition and processes, but it can also be altered by land use changes such as river engineering. Floods can be both beneficial to societies or cause damage. Agriculture along the Nile floodplain took advantage of the seasonal flooding that deposited nutrients beneficial for crops. However, as the number and susceptibility of settlements increase, flooding increasingly becomes a natural hazard. In urban areas, surface runoff is the primary cause of urban flooding , known for its repetitive and costly impact on communities. [ 40 ] Adverse impacts span loss of life, property damage, contamination of water supplies, loss of crops, and social dislocation and temporary homelessness. Floods are among the most devastating of natural disasters. The use of supplemental irrigation is also recognized as a significant way in which crops such as maize can retain nitrogen fertilizers in soil, resulting in improvement of crop water availability. [ 41 ] Mitigation of adverse impacts of runoff can take several forms: Land use controls. Many world regulatory agencies have encouraged research on methods of minimizing total surface runoff by avoiding unnecessary hardscape . [ 42 ] Many municipalities have produced guidelines and codes ( zoning and related ordinances ) for land developers that encourage minimum width sidewalks, use of pavers set in earth for driveways and walkways and other design techniques to allow maximum water infiltration in urban settings. An example of a local program specifying design requirements, construction practices and maintenance requirements for buildings and properties is in Santa Monica, California . [ 43 ] Erosion controls have appeared since medieval times when farmers realized the importance of contour farming to protect soil resources. Beginning in the 1950s these agricultural methods became increasingly more sophisticated. In the 1960s some state and local governments began to focus their efforts on mitigation of construction runoff by requiring builders to implement erosion and sediment controls (ESCs). This included such techniques as: use of straw bales and barriers to slow runoff on slopes, installation of silt fences , programming construction for months that have less rainfall and minimizing extent and duration of exposed graded areas. Montgomery County , Maryland implemented the first local government sediment control program in 1965, and this was followed by a statewide program in Maryland in 1970. [ 44 ] Flood control programs as early as the first half of the twentieth century became quantitative in predicting peak flows of riverine systems. Progressively strategies have been developed to minimize peak flows and also to reduce channel velocities. Some of the techniques commonly applied are: provision of holding ponds (also called detention basins or balancing lakes ) to buffer riverine peak flows, use of energy dissipators in channels to reduce stream velocity and land use controls to minimize runoff. [ 45 ] Chemical use and handling. Following enactment of the U.S. Resource Conservation and Recovery Act (RCRA) in 1976, and later the Water Quality Act of 1987 , states and cities have become more vigilant in controlling the containment and storage of toxic chemicals, thus preventing releases and leakage. 
Methods commonly applied are: requirements for double containment of underground storage tanks , registration of hazardous materials usage, reduction in the number of allowed pesticides, and more stringent regulation of fertilizers and herbicides in landscape maintenance. In many industrial cases, pretreatment of wastes is required to minimize the escape of pollutants into sanitary or stormwater sewers . The U.S. Clean Water Act (CWA) requires that local governments in urbanized areas (as defined by the Census Bureau ) obtain stormwater discharge permits for their drainage systems. [ 46 ] [ 47 ] Essentially this means that the locality must operate a stormwater management program for all surface runoff that enters the municipal separate storm sewer system ("MS4"). EPA and state regulations and related publications outline six basic components that each local program must contain. Other property owners that operate storm drain systems similar to municipalities, such as state highway systems, universities, military bases and prisons, are also subject to the MS4 permit requirements. Runoff is analyzed by using mathematical models in combination with various water quality sampling methods. Measurements can be made using continuous automated water quality analysis instruments targeted on pollutants such as specific organic or inorganic chemicals , pH , turbidity, etc., or targeted on secondary indicators such as dissolved oxygen . Measurements can also be made in batch form by extracting a single water sample and conducting chemical or physical tests on that sample. In the 1950s or earlier, hydrology transport models were developed to calculate quantities of runoff, primarily for flood forecasting . Beginning in the early 1970s, computer models were developed to analyze the transport of runoff carrying water pollutants. These models considered dissolution rates of various chemicals, infiltration into soils, and the ultimate pollutant load delivered to receiving waters . One of the earliest models addressing chemical dissolution in runoff and resulting transport was developed in the early 1970s under contract to the United States Environmental Protection Agency (EPA). [ 48 ] This computer model formed the basis of much of the mitigation study that led to strategies for land use and chemical handling controls. Increasingly, stormwater practitioners have recognized the need for Monte Carlo models to simulate stormwater processes because of natural variations in the multiple variables affecting runoff quality and quantity. The benefit of the Monte Carlo analysis is not to decrease uncertainty in the input statistics but to represent the different combinations of the variables that determine potential risks of water-quality excursions. One example of this type of stormwater model is the stochastic empirical loading and dilution model (SELDM), [ 49 ] [ 50 ] a stormwater quality model. SELDM is designed to transform complex scientific data into meaningful information about the risk of adverse effects of runoff on receiving waters, the potential need for mitigation measures, and the effectiveness of such management measures for reducing these risks. SELDM provides a method for rapid assessment of information that is otherwise difficult or impossible to obtain because it models the interactions among hydrologic variables (with different probability distributions), resulting in a population of values representing likely long-term outcomes from runoff processes and the potential effects of various mitigation measures.
SELDM also provides the means for rapidly doing sensitivity analyses to determine the possible effects of varying input assumptions on the risks for water-quality excursions. Other computer models have been developed (such as the DSSAM Model ) that allow surface runoff to be tracked through a river course as reactive water pollutants. In this case, the surface runoff may be considered to be a line source of water pollution to the receiving waters. [ 51 ]
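As a conceptual illustration of the Monte Carlo idea described above, and not a reproduction of SELDM itself, the following sketch samples hypothetical runoff and upstream flows and concentrations from lognormal distributions and estimates how often the mixed downstream concentration would exceed a water-quality criterion; all distribution parameters and the criterion are invented for the example.

```python
import random

def simulate_excursions(n_storms=10_000, criterion=50.0, seed=1):
    """Crude Monte Carlo mixing model: the downstream concentration is the
    flow-weighted mix of stochastic runoff and upstream contributions."""
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n_storms):
        q_runoff = rng.lognormvariate(0.0, 1.0)   # runoff volume (arbitrary units)
        q_stream = rng.lognormvariate(1.5, 0.8)   # upstream flow available for dilution
        c_runoff = rng.lognormvariate(4.5, 0.9)   # runoff concentration (e.g. ug/L)
        c_stream = rng.lognormvariate(2.0, 0.5)   # upstream background concentration
        c_mix = (q_runoff * c_runoff + q_stream * c_stream) / (q_runoff + q_stream)
        if c_mix > criterion:
            exceed += 1
    return exceed / n_storms

print(f"Estimated fraction of storms exceeding the criterion: {simulate_excursions():.3f}")
```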
https://en.wikipedia.org/wiki/Surface_runoff
Surface science is the study of physical and chemical phenomena that occur at the interface of two phases , including solid – liquid interfaces, solid– gas interfaces, solid– vacuum interfaces, and liquid – gas interfaces. It includes the fields of surface chemistry and surface physics . [ 1 ] Some related practical applications are classed as surface engineering . The science encompasses concepts such as heterogeneous catalysis , semiconductor device fabrication , fuel cells , self-assembled monolayers , and adhesives . Surface science is closely related to interface and colloid science . [ 2 ] Interfacial chemistry and physics are common subjects for both. The methods are different. In addition, interface and colloid science studies macroscopic phenomena that occur in heterogeneous systems due to peculiarities of interfaces. The field of surface chemistry started with heterogeneous catalysis pioneered by Paul Sabatier on hydrogenation and Fritz Haber on the Haber process . [ 3 ] Irving Langmuir was also one of the founders of this field, and the scientific journal on surface science, Langmuir , bears his name. The Langmuir adsorption equation is used to model monolayer adsorption where all surface adsorption sites have the same affinity for the adsorbing species and do not interact with each other. Gerhard Ertl in 1974 described for the first time the adsorption of hydrogen on a palladium surface using a novel technique called LEED . [ 4 ] Similar studies with platinum , [ 5 ] nickel , [ 6 ] [ 7 ] and iron [ 8 ] followed. Most recent developments in surface sciences include the 2007 Nobel prize of Chemistry winner Gerhard Ertl 's advancements in surface chemistry, specifically his investigation of the interaction between carbon monoxide molecules and platinum surfaces. Surface chemistry can be roughly defined as the study of chemical reactions at interfaces. It is closely related to surface engineering , which aims at modifying the chemical composition of a surface by incorporation of selected elements or functional groups that produce various desired effects or improvements in the properties of the surface or interface. Surface science is of particular importance to the fields of heterogeneous catalysis , electrochemistry , and geochemistry . The adhesion of gas or liquid molecules to the surface is known as adsorption . This can be due to either chemisorption or physisorption , and the strength of molecular adsorption to a catalyst surface is critically important to the catalyst's performance (see Sabatier principle ). However, it is difficult to study these phenomena in real catalyst particles, which have complex structures. Instead, well-defined single crystal surfaces of catalytically active materials such as platinum are often used as model catalysts. Multi-component materials systems are used to study interactions between catalytically active metal particles and supporting oxides; these are produced by growing ultra-thin films or particles on a single crystal surface. [ 9 ] Relationships between the composition, structure, and chemical behavior of these surfaces are studied using ultra-high vacuum techniques, including adsorption and temperature-programmed desorption of molecules, scanning tunneling microscopy , low energy electron diffraction , and Auger electron spectroscopy . Results can be fed into chemical models or used toward the rational design of new catalysts. Reaction mechanisms can also be clarified due to the atomic-scale precision of surface science measurements. 
[ 10 ] Electrochemistry is the study of processes driven by an applied potential at a solid–liquid or liquid–liquid interface. The behavior of an electrode–electrolyte interface is affected by the distribution of ions in the liquid phase next to the interface, which forms the electrical double layer . Adsorption and desorption events can be studied at atomically flat single-crystal surfaces as a function of applied potential, time and solution conditions using spectroscopy, scanning probe microscopy [ 11 ] and surface X-ray scattering . [ 12 ] [ 13 ] These studies link traditional electrochemical techniques such as cyclic voltammetry to direct observations of interfacial processes. Geological phenomena such as iron cycling and soil contamination are controlled by the interfaces between minerals and their environment. The atomic-scale structure and chemical properties of mineral–solution interfaces are studied using in situ synchrotron X-ray techniques such as X-ray reflectivity , X-ray standing waves , and X-ray absorption spectroscopy as well as scanning probe microscopy. For example, studies of heavy metal or actinide adsorption onto mineral surfaces reveal molecular-scale details of adsorption, enabling more accurate predictions of how these contaminants travel through soils [ 14 ] or disrupt natural dissolution–precipitation cycles. [ 15 ] Surface physics can be roughly defined as the study of physical interactions that occur at interfaces. It overlaps with surface chemistry. Some of the topics investigated in surface physics include friction , surface states , surface diffusion , surface reconstruction , surface phonons and plasmons , epitaxy , the emission and tunneling of electrons, spintronics , and the self-assembly of nanostructures on surfaces. Techniques to investigate processes at surfaces include surface X-ray scattering , scanning probe microscopy , surface-enhanced Raman spectroscopy and X-ray photoelectron spectroscopy . The study and analysis of surfaces involves both physical and chemical analysis techniques. Several modern methods probe the topmost 1–10 nm of surfaces exposed to vacuum. These include angle-resolved photoemission spectroscopy (ARPES), X-ray photoelectron spectroscopy (XPS), Auger electron spectroscopy (AES), low-energy electron diffraction (LEED), electron energy loss spectroscopy (EELS), temperature-programmed desorption (TPD), ion scattering spectroscopy (ISS), secondary ion mass spectrometry , dual-polarization interferometry , and other surface analysis methods included in the list of materials analysis methods . Many of these techniques require vacuum, as they rely on the detection of electrons or ions emitted from the surface under study. Moreover, ultra-high vacuum , with pressures in the range of 10 −7 pascal or better, is generally necessary to reduce surface contamination by residual gas by reducing the number of molecules reaching the sample over a given time period. At 0.1 mPa (10 −6 torr) partial pressure of a contaminant and standard temperature , it takes only on the order of 1 second to cover a surface with a one-to-one monolayer of contaminant to surface atoms, so much lower pressures are needed for measurements. This estimate follows from an order-of-magnitude value for the (number) specific surface area of materials and the impingement rate formula from the kinetic theory of gases .
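The order-of-magnitude estimate above can be reproduced with the Hertz–Knudsen impingement rate from the kinetic theory of gases. In the sketch below the unit sticking coefficient, the surface site density and the choice of nitrogen as the contaminant are illustrative assumptions.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def impingement_rate(pressure_pa, molar_mass_kg, temperature=300.0):
    """Hertz-Knudsen flux (molecules per m^2 per s): p / sqrt(2 pi m k_B T)."""
    m = molar_mass_kg / 6.02214076e23           # mass of one molecule, kg
    return pressure_pa / math.sqrt(2.0 * math.pi * m * K_B * temperature)

def monolayer_time(pressure_pa, molar_mass_kg=0.028, sites_per_m2=1e19, sticking=1.0):
    """Time to build one monolayer, assuming every sticking molecule occupies
    one surface site (site density and sticking coefficient are rough guesses)."""
    return sites_per_m2 / (sticking * impingement_rate(pressure_pa, molar_mass_kg))

for p in (1e-4, 1e-7, 1e-9):   # Pa: roughly 10^-6, 10^-9 and 10^-11 torr
    print(f"p = {p:.0e} Pa -> ~{monolayer_time(p):.0f} s per monolayer")
```

At 0.1 mPa this gives a monolayer time of a few seconds, consistent with the order of magnitude quoted above, and shows why ultra-high vacuum is needed for measurements lasting minutes to hours.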
Purely optical techniques can be used to study interfaces under a wide variety of conditions. Reflection-absorption infrared spectroscopy, dual-polarisation interferometry, surface-enhanced Raman spectroscopy and sum frequency generation spectroscopy can be used to probe solid–vacuum as well as solid–gas, solid–liquid, and liquid–gas surfaces. Multi-parametric surface plasmon resonance works at solid–gas, solid–liquid and liquid–gas interfaces and can detect even sub-nanometer layers. [ 16 ] It probes the interaction kinetics as well as dynamic structural changes such as liposome collapse [ 17 ] or swelling of layers at different pH. Dual-polarization interferometry is used to quantify the order and disruption in birefringent thin films. [ 18 ] This has been used, for example, to study the formation of lipid bilayers and their interaction with membrane proteins. Acoustic techniques, such as the quartz crystal microbalance with dissipation monitoring , are used for time-resolved measurements of solid–vacuum, solid–gas and solid–liquid interfaces. The method allows for analysis of molecule–surface interactions as well as structural changes and viscoelastic properties of the adlayer. X-ray scattering and spectroscopy techniques are also used to characterize surfaces and interfaces. While some of these measurements can be performed using laboratory X-ray sources , many require the high intensity and energy tunability of synchrotron radiation . X-ray crystal truncation rod (CTR) and X-ray standing wave (XSW) measurements probe changes in surface and adsorbate structures with sub-Ångström resolution. Surface-extended X-ray absorption fine structure (SEXAFS) measurements reveal the coordination structure and chemical state of adsorbates. Grazing-incidence small-angle X-ray scattering (GISAXS) yields the size, shape, and orientation of nanoparticles on surfaces. [ 19 ] The crystal structure and texture of thin films can be investigated using grazing-incidence X-ray diffraction (GIXD, GIXRD). X-ray photoelectron spectroscopy (XPS) is a standard tool for measuring the chemical states of surface species and for detecting the presence of surface contamination. Surface sensitivity is achieved by detecting photoelectrons with kinetic energies of about 10–1000 eV , which have corresponding inelastic mean free paths of only a few nanometers. This technique has been extended to operate at near-ambient pressures (ambient-pressure XPS, AP-XPS) to probe more realistic gas–solid and liquid–solid interfaces. [ 20 ] Performing XPS with hard X-rays at synchrotron light sources yields photoelectrons with kinetic energies of several keV (hard X-ray photoelectron spectroscopy, HAXPES), enabling access to chemical information from buried interfaces. [ 21 ] Modern physical analysis methods include scanning tunneling microscopy (STM) and a family of methods descended from it, including atomic force microscopy (AFM). These microscopies have considerably increased the ability of surface scientists to measure the physical structure of many surfaces. For example, they make it possible to follow reactions at the solid–gas interface in real space, provided they proceed on a time scale accessible to the instrument. [ 22 ] [ 23 ]
https://en.wikipedia.org/wiki/Surface_science
Surface states are electronic states found at the surface of materials. They are formed due to the sharp transition from solid material that ends with a surface and are found only at the atom layers closest to the surface. The termination of a material with a surface leads to a change of the electronic band structure from the bulk material to the vacuum . In the weakened potential at the surface, new electronic states can be formed, so called surface states. [ 1 ] As stated by Bloch's theorem , eigenstates of the single-electron Schrödinger equation with a perfectly periodic potential, a crystal, are Bloch waves [ 2 ] Here u n k ( r ) {\displaystyle u_{n{\boldsymbol {k}}}({\boldsymbol {r}})} is a function with the same periodicity as the crystal, n is the band index and k is the wave number. The allowed wave numbers for a given potential are found by applying the usual Born–von Karman cyclic boundary conditions. [ 2 ] The termination of a crystal, i.e. the formation of a surface, obviously causes deviation from perfect periodicity. Consequently, if the cyclic boundary conditions are abandoned in the direction normal to the surface the behavior of electrons will deviate from the behavior in the bulk and some modifications of the electronic structure has to be expected. A simplified model of the crystal potential in one dimension can be sketched as shown in Figure 1 . [ 3 ] In the crystal, the potential has the periodicity, a , of the lattice while close to the surface it has to somehow attain the value of the vacuum level. The step potential (solid line) shown in Figure 1 is an oversimplification which is mostly convenient for simple model calculations. At a real surface the potential is influenced by image charges and the formation of surface dipoles and it rather looks as indicated by the dashed line. Given the potential in Figure 1 , it can be shown that the one-dimensional single-electron Schrödinger equation gives two qualitatively different types of solutions. [ 4 ] The first type of solution can be obtained for both metals and semiconductors . In semiconductors though, the associated eigenenergies have to belong to one of the allowed energy bands. The second type of solution exists in forbidden energy gap of semiconductors as well as in local gaps of the projected band structure of metals. It can be shown that the energies of these states all lie within the band gap. As a consequence, in the crystal these states are characterized by an imaginary wavenumber leading to an exponential decay into the bulk. In the discussion of surface states, one generally distinguishes between Shockley states [ 5 ] and Tamm states, [ 6 ] named after the American physicist William Shockley and the Russian physicist Igor Tamm . There is no strict physical distinction between the two types of states, but the qualitative character and the mathematical approach used in describing them is different. All materials can be classified by a single number, a topological invariant; this is constructed out of the bulk electronic wave functions, which are integrated in over the Brillouin zone, in a similar way that the genus is calculated in geometric topology . In certain materials the topological invariant can be changed when certain bulk energy bands invert due to strong spin-orbital coupling. At the interface between an insulator with non-trivial topology, a so-called topological insulator , and one with a trivial topology, the interface must become metallic. 
More over, the surface state must have linear Dirac-like dispersion with a crossing point which is protected by time reversal symmetry. Such a state is predicted to be robust under disorder, and therefore cannot be easily localized. [ 7 ] A simple model for the derivation of the basic properties of states at a metal surface is a semi-infinite periodic chain of identical atoms. [ 1 ] In this model, the termination of the chain represents the surface, where the potential attains the value V 0 of the vacuum in the form of a step function , figure 1 . Within the crystal the potential is assumed periodic with the periodicity a of the lattice. The Shockley states are then found as solutions to the one-dimensional single electron Schrödinger equation with the periodic potential where l is an integer, and P is the normalization factor. The solution must be obtained independently for the two domains z <0 and z>0 , where at the domain boundary (z=0) the usual conditions on continuity of the wave function and its derivatives are applied. Since the potential is periodic deep inside the crystal, the electronic wave functions must be Bloch waves here. The solution in the crystal is then a linear combination of an incoming wave and a wave reflected from the surface. For z >0 the solution will be required to decrease exponentially into the vacuum The wave function for a state at a metal surface is qualitatively shown in figure 2 . It is an extended Bloch wave within the crystal with an exponentially decaying tail outside the surface. The consequence of the tail is a deficiency of negative charge density just inside the crystal and an increased negative charge density just outside the surface, leading to the formation of a dipole double layer . The dipole perturbs the potential at the surface leading, for example, to a change of the metal work function . The nearly free electron approximation can be used to derive the basic properties of surface states for narrow gap semiconductors. The semi-infinite linear chain model is also useful in this case. [ 4 ] However, now the potential along the atomic chain is assumed to vary as a cosine function V ( z ) = V [ exp ⁡ ( i 2 π z a ) + exp ⁡ ( − i 2 π z a ) ] = 2 V cos ⁡ ( 2 π z a ) , {\displaystyle {\begin{alignedat}{2}V(z)&=V\left[\exp \left(i{\frac {2\pi z}{a}}\right)+\exp \left(-i{\frac {2\pi z}{a}}\right)\right]\\&=2V\cos \left({\frac {2\pi z}{a}}\right),\\\end{alignedat}}} whereas at the surface the potential is modeled as a step function of height V 0 . The solutions to the Schrödinger equation must be obtained separately for the two domains z < 0 and z > 0. In the sense of the nearly free electron approximation, the solutions obtained for z < 0 will have plane wave character for wave vectors away from the Brillouin zone boundary k = ± π / a {\displaystyle k=\pm \pi /a} , where the dispersion relation will be parabolic, as shown in figure 4 . At the Brillouin zone boundaries, Bragg reflection occurs resulting in a standing wave consisting of a wave with wave vector k = π / a {\displaystyle k=\pi /a} and wave vector k = − π / a {\displaystyle k=-\pi /a} . Here G = 2 π / a {\displaystyle G=2\pi /a} is a lattice vector of the reciprocal lattice (see figure 4 ). Since the solutions of interest are close to the Brillouin zone boundary, we set k ⊥ = ( π / a ) + κ {\displaystyle k_{\perp }={\bigl (}\pi /a{\bigr )}+\kappa } , where κ is a small quantity. The arbitrary constants A , B are found by substitution into the Schrödinger equation. 
This leads to the following eigenvalues demonstrating the band splitting at the edges of the Brillouin zone , where the width of the forbidden gap is given by 2V. The electronic wave functions deep inside the crystal, attributed to the different bands are given by Where C is a normalization constant. Near the surface at z = 0 , the bulk solution has to be fitted to an exponentially decaying solution, which is compatible with the constant potential V 0 . It can be shown that the matching conditions can be fulfilled for every possible energy eigenvalue which lies in the allowed band. As in the case for metals, this type of solution represents standing Bloch waves extending into the crystal which spill over into the vacuum at the surface. A qualitative plot of the wave function is shown in figure 2. If imaginary values of κ are considered, i.e. κ = - i·q for z ≤ 0 and one defines one obtains solutions with a decaying amplitude into the crystal The energy eigenvalues are given by E is real for large negative z, as required. Also in the range 0 ≤ q ≤ q m a x = m a V ℏ 2 π {\displaystyle 0\leq q\leq q_{max}={\frac {maV}{\hbar ^{2}\pi }}} all energies of the surface states fall into the forbidden gap. The complete solution is again found by matching the bulk solution to the exponentially decaying vacuum solution. The result is a state localized at the surface decaying both into the crystal and the vacuum. A qualitative plot is shown in figure 3 . The results for surface states of a monatomic linear chain can readily be generalized to the case of a three-dimensional crystal. Because of the two-dimensional periodicity of the surface lattice, Bloch's theorem must hold for translations parallel to the surface. As a result, the surface states can be written as the product of a Bloch waves with k-values k | | = ( k x , k y ) {\displaystyle {\textbf {k}}_{||}=(k_{x},k_{y})} parallel to the surface and a function representing a one-dimensional surface state The energy of this state is increased by a term E | | {\displaystyle E_{||}} so that we have where m * is the effective mass of the electron. The matching conditions at the crystal surface, i.e. at z=0, have to be satisfied for each k | | {\displaystyle {\textbf {k}}_{||}} separately and for each k | | {\displaystyle {\textbf {k}}_{||}} a single, but generally different energy level for the surface state is obtained. A surface state is described by the energy E s {\displaystyle E_{s}} and its wave vector k | | {\displaystyle {\textbf {k}}_{||}} parallel to the surface, while a bulk state is characterized by both k | | {\displaystyle \mathbf {k} _{||}} and k ⊥ {\displaystyle \mathbf {k} _{\perp }} wave numbers. In the two-dimensional Brillouin zone of the surface, for each value of k | | {\displaystyle \mathbf {k} _{||}} therefore a rod of k ⊥ {\displaystyle \mathbf {k} _{\perp }} is extending into the three-dimensional Brillouin zone of the Bulk. Bulk energy bands that are being cut by these rods allow states that penetrate deep into the crystal. One therefore generally distinguishes between true surface states and surface resonances. True surface states are characterized by energy bands that are not degenerate with bulk energy bands. These states exist in the forbidden energy gap only and are therefore localized at the surface, similar to the picture given in figure 3 . At energies where a surface and a bulk state are degenerate, the surface and the bulk state can mix, forming a surface resonance . 
Such a state can propagate deep into the bulk, similar to Bloch waves , while retaining an enhanced amplitude close to the surface. Surface states that are calculated in the framework of a tight-binding model are often called Tamm states. In the tight binding approach, the electronic wave functions are usually expressed as a linear combination of atomic orbitals (LCAO), see figure 5. In this picture, it is easy to comprehend that the existence of a surface will give rise to surface states with energies different from the energies of the bulk states: Since the atoms residing in the topmost surface layer are missing their bonding partners on one side, their orbitals have less overlap with the orbitals of neighboring atoms. The splitting and shifting of energy levels of the atoms forming the crystal is therefore smaller at the surface than in the bulk. If a particular orbital is responsible for the chemical bonding, e.g. the sp 3 hybrid in Si or Ge, it is strongly affected by the presence of the surface, bonds are broken, and the remaining lobes of the orbital stick out from the surface. They are called dangling bonds . The energy levels of such states are expected to significantly shift from the bulk values. In contrast to the nearly free electron model used to describe the Shockley states, the Tamm states are suitable to describe also transition metals and wide-bandgap semiconductors . Surface states originating from clean and well ordered surfaces are usually called intrinsic . These states include states originating from reconstructed surfaces, where the two-dimensional translational symmetry gives rise to the band structure in the k space of the surface. Extrinsic surface states are usually defined as states not originating from a clean and well ordered surface. Surfaces that fit into the category extrinsic are: [ 8 ] Generally, extrinsic surface states cannot easily be characterized in terms of their chemical, physical or structural properties. An experimental technique to measure the dispersion of surface states is angle resolved photoemission spectroscopy ( ARPES ) or angle resolved ultraviolet photoelectron spectroscopy (ARUPS). The surface state dispersion can be measured using a scanning tunneling microscope ; in these experiments, periodic modulations in the surface state density, which arise from scattering off of surface impurities or step edges, are measured by an STM tip at a given bias voltage. The wavevector versus bias (energy) of the surface state electrons can be fit to a free-electron model with effective mass and surface state onset energy. [ 9 ] A naturally simple but fundamental question is how many surface states are in a band gap in a one-dimensional crystal of length N a {\displaystyle Na} ( a {\displaystyle a} is the potential period, and N {\displaystyle N} is a positive integer)? A well-accepted concept proposed by Fowler [ 10 ] first in 1933, then written in Seitz's classic book [ 11 ] that "in a finite one-dimensional crystal the surface states occur in pairs, one state being associated with each end of the crystal." Such a concept seemly was never doubted since then for nearly a century, as shown, for example, in. [ 12 ] However, a recent new investigation [ 13 ] [ 14 ] [ 15 ] gives an entirely different answer. The investigation tries to understand electronic states in ideal crystals of finite size based on the mathematical theory of periodic differential equations. 
[ 16 ] This theory provides some fundamentally new understanding of those electronic states, including surface states. It finds that, for each band gap, a one-dimensional finite crystal with two ends at τ {\displaystyle \tau } and N a + τ {\displaystyle Na+\tau } always has one and only one state whose energy and properties depend on τ {\displaystyle \tau } but not on N {\displaystyle N} . This state is either a band-edge state or a surface state in the band gap (see Particle in a one-dimensional lattice and Particle in a box ). Numerical calculations have confirmed these findings, [ 14 ] [ 15 ] and the same behavior has been seen in a variety of other one-dimensional systems. [ 17 ] [ 18 ] [ 19 ] [ 20 ] [ 21 ] [ 22 ] [ 23 ] Further investigations have extended this analysis to multi-dimensional cases. 
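A toy numerical illustration of gap states localized at the ends of a finite one-dimensional crystal is given below. It uses a dimerized, SSH-like tight-binding chain purely for illustration; it is not the specific model analysed in the works cited above.

```python
import numpy as np

# Toy illustration: a finite dimerized tight-binding chain with alternating
# hoppings t1, t2 and open ends. For |t1| < |t2| two states appear inside the
# bulk gap of width ~2|t2 - t1|, exponentially localized at the two chain ends.
t1, t2 = 0.5, 1.0          # assumed hopping amplitudes (arbitrary units)
n_cells = 40               # number of two-site unit cells
n = 2 * n_cells            # total number of sites

H = np.zeros((n, n))
for i in range(n - 1):
    t = t1 if i % 2 == 0 else t2     # alternate weak/strong bonds
    H[i, i + 1] = H[i + 1, i] = -t

energies, states = np.linalg.eigh(H)

# Bulk band edges of the infinite chain: |t2 - t1| <= |E| <= t1 + t2
gap_states = np.where(np.abs(energies) < abs(t2 - t1) - 1e-6)[0]
print("energies inside the bulk gap:", energies[gap_states])

# Check that such a state is concentrated near the chain ends
psi = states[:, gap_states[0]] ** 2
print("weight on the 4 outermost sites:", psi[:2].sum() + psi[-2:].sum())
```

For forty unit cells the two gap states sit essentially at the centre of the gap and carry most of their weight on the outermost sites, which is the hallmark of a surface (end) state.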
https://en.wikipedia.org/wiki/Surface_states
Surface stress was first defined by Josiah Willard Gibbs [ 1 ] (1839–1903) as the amount of the reversible work per unit area needed to elastically stretch a pre-existing surface . Depending upon the convention used, the area is either the original, unstretched one which represents a constant number of atoms, or sometimes is the final area; these are atomistic versus continuum definitions. Some care is needed to ensure that the definition used is also consistent with the elastic strain energy , and misinterpretations and disagreements have occurred in the literature. A similar term called "surface free energy", the excess free energy per unit area needed to create a new surface, is sometimes confused with "surface stress". Although surface stress and surface free energy of liquid–gas or liquid–liquid interface are the same, [ 2 ] they are very different in solid–gas or solid–solid interface. Both terms represent an energy per unit area, equivalent to a force per unit length , so are sometimes referred to as " surface tension ", which contributes further to the confusion in the literature. The continuum definition of surface free energy is the amount of reversible work d w {\displaystyle dw} performed to create new area d A {\displaystyle dA} of surface, expressed as: In this definition the number of atoms at the surface is proportional to the area. Gibbs was the first to define another surface quantity, different from the surface free energy γ {\displaystyle \gamma } , that is associated with the reversible work per unit area needed to elastically stretch a pre-existing surface. In a continuum approach one can define a surface stress tensor f i j {\displaystyle f_{ij}} that relates the work associated with the variation in γ A {\displaystyle \gamma A} , the total excess free energy of the surface due to a strain tensor e i j {\displaystyle e_{ij}} [ 3 ] [ 4 ] In general there is no change in area for shear, which means that for the second term on the right i = j {\displaystyle i=j} and d A / d e i j = A δ i j {\displaystyle dA/de_{ij}=A\delta _{ij}} , using the Kronecker delta . Cancelling the area then gives called the Shuttleworth equation. [ 5 ] [ 2 ] An alternative approach is an atomistic one, which defines all quantities in terms of the number of atoms, not continuum measures such as areas. This is related to the ideal of using Gibb's equimolar quantities rather than continuum numbers such as area, that is keeping the number of surface atoms constant. In this case the surface stress is defined as the derivative of the surface energy with strain, that is (deliberately using a different symbol) This second definition is more convenient in many cases. [ 6 ] A conventional liquid cannot sustain strains, [ 2 ] so in the continuum definition the surface stress and surface energies are the same, whereas in the atomistic approach the surface stress is zero for a liquid. So long as care is taken [ 6 ] the choice of the two does not matter, although this has been a little contentious in the literature. [ 7 ] [ 8 ] [ 9 ] The origin of surface stress is the difference between bonding in the bulk and at a surface. The bulk spacings set the values of the in-plane surface spacings, and consequently the in-plane distance between atoms. However, the atoms at the surface have a different bonding, so would prefer to be at a different spacing, often (but not always) closer together. 
If they want to be closer, then d γ / d e i j {\displaystyle d\gamma /de_{ij}} will be positive—a tensile or expansive strain will increase the surface energy. For many metals the derivative is positive, but in other cases it is negative, for instance solid argon and some semiconductors. The sign can also strongly depend upon molecules adsorbed on the surface. If these want to be further apart that will introduce a negative component. [ 10 ] The most common method to calculate the surface stresses is by calculating the surface free energy and its derivative with respect to elastic strain. Different methods have been used such as first principles , atomistic potential calculations and molecular dynamics simulations, with density functional theory most common. [ 11 ] [ 12 ] [ 13 ] A large tabulation of calculated values for metals has been given by Lee et al. [ 14 ] Typical values of the surface energies are 1-2 Joule per metre squared ( J m − 2 {\displaystyle Jm^{-2}} ), with the trace of the surface stress tensor g i j {\displaystyle g_{ij}} in the range of -1 to 1 J m − 2 {\displaystyle Jm^{-2}} . Some metals such as aluminum are calculated to have fairly high, positive values (e.g. 0.82) indicating a strong propensity to contract, whereas others such as calcium are quite negative at -1.25, and others are close to zero such as cesium (-0.02). [ 13 ] Whenever there is a balance between a bulk elastic energy contribution and a surface energy term, surface stresses can be important. Surface contributions are more important at small sizes, so surface stress effects are often important at the nanoscale. As mentioned above, often the atoms at a surface would like to be either closer together or further apart. Countering this, the atoms below (substrate) have a fixed in-plane spacing onto which the surface has to register. One way to reduce the total energy is to have extra atoms in the surface, or remove some. [ 3 ] This occurs for the gold (111) surface where there is approximately a 5% higher surface density when it has reconstructed. [ 15 ] The misregistry with the underlying bulk is accommodated by having partial partial dislocations between the first two layers. The silicon (111) is similar, with a 7x7 reconstruction with both more atoms in the plane and some added atoms (called adatoms) on top. [ 16 ] [ 17 ] Different is the case for anatase (001) surfaces. [ 18 ] Here the atoms want to be further apart, so one row "pops out" and sits further from the bulk. When atoms or molecules are adsorbed on a surface, two phenomena can lead to a change in the surface stress. One is a change in the electron density of the atoms in the surface, which changes the in-plane bonding and thus the surface stress. A second is due to interactions between the adsorbed atoms or molecules themselves, which may want to be further apart (or closer) than is possible with the atomic spacings in the surface. Note that since adsorption often depends strongly upon the environment, for instance gas pressure and temperature, the surface stress tensor will show a similar dependence. [ 10 ] For a spherical particle the surface area will scale as the square of the size, while the volume scales as the cube. Therefore surface contributions to the energy can become important at small sizes in nanoparticles . If the energy of the surface atoms is lower when they are closer, this can be accomplished by shrinking the whole particle. 
The gain in energy from the surface stress scales as the area, balanced by an energy cost for the shrinking (deformation) that scales as the volume. Combined, these lead to a change in the lattice parameter that scales inversely with size. This has been measured for many materials using either electron diffraction [ 19 ] [ 20 ] or x-ray diffraction. [ 21 ] [ 22 ] The phenomenon has sometimes been written as equivalent to the Laplace pressure , also called the capillary pressure , in both cases with a surface tension. This is not correct, since those terms apply to liquids. One complication is that the changes in lattice parameter lead to more involved forms for nanoparticles with more complex shapes or when surface segregation can occur. [ 23 ] Also in the area of nanoparticles, surface stress can play a significant role in the stabilization of decahedral nanoparticles and icosahedral twins . In both cases an arrangement of internal twin boundaries leads to lower-energy surface facets. [ 24 ] Balancing this, there are nominal angular gaps ( disclinations ) which are removed by an elastic deformation . [ 25 ] While the main energy contributions are the external surface energy and the strain energy, the surface stress couples the two and can play an important role in the overall stability. [ 26 ] During thin film growth there can be a balance between surface energy and internal strain, with surface stress acting as a coupling term between the two. Instead of growing as a continuous thin film, a morphological instability can occur and the film can start to become very uneven, in many cases due to a breakdown of the balance between elastic and surface energies. [ 27 ] [ 28 ] [ 4 ] Surface stress can likewise lead to wrinkling in nanowires [ 29 ] and to a morphological instability in thin films. [ 30 ]
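The inverse-size scaling mentioned above can be made quantitative by noting that the surface stress f of a spherical particle acts like an effective compressive pressure of order 2f/r, so the linear strain is roughly Δa/a ≈ −2f/(3Br), where B is the bulk modulus. A rough sketch with assumed, representative numbers (f = 1 J m⁻², B = 170 GPa; neither value is taken from the text):

```python
# First-order estimate of the surface-stress-induced lattice contraction of a
# spherical nanoparticle: the surface stress f produces an effective
# compressive pressure ~ 2f/r, and the resulting linear strain is
#   da/a ~ -(2*f) / (3*B*r)        (B = bulk modulus)
# The numbers below are assumed, representative values, not from the text.
f = 1.0        # surface stress (trace-averaged), J/m^2 = N/m
B = 170e9      # bulk modulus, Pa (roughly that of gold)

for r_nm in (2, 5, 10, 50):
    r = r_nm * 1e-9
    strain = -2.0 * f / (3.0 * B * r)
    print(f"r = {r_nm:3d} nm  ->  da/a ~ {strain:+.2e}  ({strain*100:+.3f} %)")
# The contraction scales as 1/r, as stated above, and becomes negligible
# for particles much larger than ~10 nm.
```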
https://en.wikipedia.org/wiki/Surface_stress
Surface tension is the tendency of liquid surfaces at rest to shrink into the minimum surface area possible. Surface tension is what allows objects with a higher density than water such as razor blades and insects (e.g. water striders ) to float on a water surface without becoming even partly submerged. At liquid–air interfaces, surface tension results from the greater attraction of liquid molecules to each other (due to cohesion ) than to the molecules in the air (due to adhesion ). [ citation needed ] There are two primary mechanisms in play. One is an inward force on the surface molecules causing the liquid to contract. [ 1 ] [ 2 ] Second is a tangential force parallel to the surface of the liquid. [ 2 ] This tangential force is generally referred to as the surface tension. The net effect is the liquid behaves as if its surface were covered with a stretched elastic membrane. But this analogy must not be taken too far as the tension in an elastic membrane is dependent on the amount of deformation of the membrane while surface tension is an inherent property of the liquid – air or liquid – vapour interface. [ 3 ] Because of the relatively high attraction of water molecules to each other through a web of hydrogen bonds , water has a higher surface tension (72.8 millinewtons (mN) per meter at 20 °C) than most other liquids. Surface tension is an important factor in the phenomenon of capillarity . Surface tension has the dimension of force per unit length , or of energy per unit area . [ 3 ] The two are equivalent, but when referring to energy per unit of area, it is common to use the term surface energy , which is a more general term in the sense that it applies also to solids . In materials science , surface tension is used for either surface stress or surface energy . Due to the cohesive forces , a molecule located away from the surface is pulled equally in every direction by neighboring liquid molecules, resulting in a net force of zero. The molecules at the surface do not have the same molecules on all sides of them and therefore are pulled inward. This creates some internal pressure and forces liquid surfaces to contract to the minimum area. [ 1 ] There is also a tension parallel to the surface at the liquid-air interface which will resist an external force, due to the cohesive nature of water molecules. [ 1 ] [ 2 ] The forces of attraction acting between molecules of the same type are called cohesive forces, while those acting between molecules of different types are called adhesive forces. The balance between the cohesion of the liquid and its adhesion to the material of the container determines the degree of wetting , the contact angle , and the shape of meniscus . When cohesion dominates (specifically, adhesion energy is less than half of cohesion energy) the wetting is low and the meniscus is convex at a vertical wall (as for mercury in a glass container). On the other hand, when adhesion dominates (when adhesion energy is more than half of cohesion energy) the wetting is high and the similar meniscus is concave (as in water in a glass). Surface tension is responsible for the shape of liquid droplets. Although easily deformed, droplets of water tend to be pulled into a spherical shape by the imbalance in cohesive forces of the surface layer. In the absence of other forces, drops of virtually all liquids would be approximately spherical. The spherical shape minimizes the necessary "wall tension" of the surface layer according to Laplace's law . 
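The "half of the cohesion energy" criterion quoted above can be seen by combining the Young–Dupré relation for the work of adhesion, W_adh = γ(1 + cos θ), with the work of cohesion of the liquid, W_coh = 2γ: the contact angle drops below 90° exactly when W_adh exceeds W_coh/2. A small sketch (the adhesion energies used below are assumed, illustrative values):

```python
import math

# Young–Dupré: work of adhesion  W_adh = gamma * (1 + cos(theta))
# Work of cohesion of the liquid: W_coh = 2 * gamma
# => theta < 90 deg (wetting, concave meniscus) exactly when W_adh > W_coh / 2.
gamma = 0.0728   # surface tension of water at 20 C, N/m (= J/m^2)

def contact_angle(w_adh: float, gamma: float) -> float:
    """Contact angle in degrees from an adhesion energy per unit area."""
    cos_theta = w_adh / gamma - 1.0
    cos_theta = max(-1.0, min(1.0, cos_theta))   # clamp to the physical range
    return math.degrees(math.acos(cos_theta))

w_coh = 2.0 * gamma
# Assumed illustrative adhesion energies (J/m^2), spanning both regimes:
for w_adh in (0.03, 0.07, 0.11, 0.14):
    regime = "wetting (theta < 90)" if w_adh > w_coh / 2 else "non-wetting (theta > 90)"
    print(f"W_adh = {w_adh:.3f} J/m^2 -> theta = {contact_angle(w_adh, gamma):5.1f} deg, {regime}")
```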
Another way to view surface tension is in terms of energy. A molecule in contact with a neighbor is in a lower state of energy than if it were alone. The interior molecules have as many neighbors as they can possibly have, but the boundary molecules are missing neighbors (compared to interior molecules) and therefore have higher energy. For the liquid to minimize its energy state, the number of higher energy boundary molecules must be minimized. The minimized number of boundary molecules results in a minimal surface area. [ 4 ] As a result of surface area minimization, a surface will assume a smooth shape. Surface tension, represented by the symbol γ (alternatively σ or T ), is measured in force per unit length . Its SI unit is newton per meter but the cgs unit of dyne per centimeter is also used. For example, [ 5 ] γ = 1 d y n c m = 1 e r g c m 2 = 1 10 − 7 m ⋅ N 10 − 4 m 2 = 0.001 N m = 0.001 J m 2 . {\displaystyle \gamma =1~\mathrm {\frac {dyn}{cm}} =1~\mathrm {\frac {erg}{cm^{2}}} =1~\mathrm {\frac {10^{-7}\,m\cdot N}{10^{-4}\,m^{2}}} =0.001~\mathrm {\frac {N}{m}} =0.001~\mathrm {\frac {J}{m^{2}}} .} Surface tension can be defined in terms of force or energy. Surface tension γ of a liquid is the force per unit length. In the illustration on the right, the rectangular frame, composed of three unmovable sides (black) that form a "U" shape, and a fourth movable side (blue) that can slide to the right. Surface tension will pull the blue bar to the left; the force F required to hold the movable side is proportional to the length L of the immobile side. Thus the ratio ⁠ F / L ⁠ depends only on the intrinsic properties of the liquid (composition, temperature, etc.), not on its geometry. For example, if the frame had a more complicated shape, the ratio ⁠ F / L ⁠ , with L the length of the movable side and F the force required to stop it from sliding, is found to be the same for all shapes. We therefore define the surface tension as γ = F 2 L . {\displaystyle \gamma ={\frac {F}{2L}}.} The reason for the ⁠ 1 / 2 ⁠ is that the film has two sides (two surfaces), each of which contributes equally to the force; so the force contributed by a single side is γL = ⁠ F / 2 ⁠ . Surface tension γ of a liquid is the ratio of the change in the energy of the liquid to the change in the surface area of the liquid (that led to the change in energy). This can be easily related to the previous definition in terms of force: [ 6 ] if F is the force required to stop the side from starting to slide, then this is also the force that would keep the side in the state of sliding at a constant speed (by Newton's second law). But if the side is moving to the right (in the direction the force is applied), then the surface area of the stretched liquid is increasing while the applied force is doing work on the liquid. This means that increasing the surface area increases the energy of the film. The work done by the force F in moving the side by distance Δ x is W = F Δ x ; at the same time the total area of the film increases by Δ A = 2 L Δ x (the factor of 2 is here because the liquid has two sides, two surfaces). Thus, multiplying both the numerator and the denominator of γ = ⁠ 1 / 2 ⁠ ⁠ F / L ⁠ by Δ x , we get γ = F 2 L = F Δ x 2 L Δ x = W Δ A . {\displaystyle \gamma ={\frac {F}{2L}}={\frac {F\Delta x}{2L\Delta x}}={\frac {W}{\Delta A}}.} This work W is, by the usual arguments , interpreted as being stored as potential energy. 
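A quick numerical check that the force and energy definitions agree, using an assumed soap-film surface tension of 0.025 N/m on a 5 cm wide frame (illustrative numbers only):

```python
gamma = 0.025      # assumed surface tension of a soap film, N/m
L = 0.05           # width of the movable side, m
dx = 0.01          # distance the movable side is pulled, m

F = 2 * gamma * L          # force definition: two surfaces, gamma = F / (2L)
W = F * dx                 # work done moving the side by dx
dA = 2 * L * dx            # total new area created (again, two surfaces)

print(f"F = {F*1e3:.2f} mN")                  # 2.50 mN
print(f"W / dA = {W / dA:.3f} J/m^2")         # recovers gamma = 0.025 J/m^2
```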
Consequently, surface tension can be also measured in SI system as joules per square meter and in the cgs system as ergs per cm 2 . Since mechanical systems try to find a state of minimum potential energy , a free droplet of liquid naturally assumes a spherical shape, which has the minimum surface area for a given volume. The equivalence of measurement of energy per unit area to force per unit length can be proven by dimensional analysis . [ 7 ] Several effects of surface tension can be seen with ordinary water: Surface tension is visible in other common phenomena, especially when surfactants are used to decrease it: If no force acts normal to a tensioned surface, the surface must remain flat. But if the pressure on one side of the surface differs from pressure on the other side, the pressure difference times surface area results in a normal force. In order for the surface tension forces to cancel the force due to pressure, the surface must be curved. The diagram shows how surface curvature of a tiny patch of surface leads to a net component of surface tension forces acting normal to the center of the patch. When all the forces are balanced, the resulting equation is known as the Young–Laplace equation : [ 10 ] Δ p = γ ( 1 R x + 1 R y ) {\displaystyle \Delta p=\gamma \left({\frac {1}{R_{x}}}+{\frac {1}{R_{y}}}\right)} where: The quantity in parentheses on the right hand side is in fact (twice) the mean curvature of the surface (depending on normalisation). Solutions to this equation determine the shape of water drops, puddles, menisci, soap bubbles, and all other shapes determined by surface tension (such as the shape of the impressions that a water strider 's feet make on the surface of a pond). The table below shows how the internal pressure of a water droplet increases with decreasing radius. For not very small drops the effect is subtle, but the pressure difference becomes enormous when the drop sizes approach the molecular size. (In the limit of a single molecule the concept becomes meaningless.) When an object is placed on a liquid, its weight F w depresses the surface, and if surface tension and downward force become equal then it is balanced by the surface tension forces on either side F s , which are each parallel to the water's surface at the points where it contacts the object. Notice that small movement in the body may cause the object to sink. As the angle of contact decreases, surface tension decreases. The horizontal components of the two F s arrows point in opposite directions, so they cancel each other, but the vertical components point in the same direction and therefore add up [ 4 ] to balance F w . The object's surface must not be wettable for this to happen, and its weight must be low enough for the surface tension to support it. If m denotes the mass of the needle and g acceleration due to gravity, we have F w = 2 F s sin ⁡ θ ⇔ m g = 2 γ L sin ⁡ θ {\displaystyle F_{\mathrm {w} }=2F_{\mathrm {s} }\sin \theta \quad \Leftrightarrow \quad mg=2\gamma L\sin \theta } To find the shape of the minimal surface bounded by some arbitrary shaped frame using strictly mathematical means can be a daunting task. Yet by fashioning the frame out of wire and dipping it in soap-solution, a locally minimal surface will appear in the resulting soap-film within seconds. [ 7 ] [ 12 ] The reason for this is that the pressure difference across a fluid interface is proportional to the mean curvature , as seen in the Young–Laplace equation . 
For an open soap film, the pressure difference is zero, hence the mean curvature is zero, and minimal surfaces have the property of zero mean curvature. The surface of any liquid is an interface between that liquid and some other medium. [ note 1 ] The top surface of a pond, for example, is an interface between the pond water and the air. Surface tension, then, is not a property of the liquid alone, but a property of the liquid's interface with another medium. If a liquid is in a container, then besides the liquid/air interface at its top surface, there is also an interface between the liquid and the walls of the container. The surface tension between the liquid and air is usually different (greater) than its surface tension with the walls of a container. And where the two surfaces meet, their geometry must be such that all forces balance. [ 7 ] [ 10 ] Where the two surfaces meet, they form a contact angle , θ , which is the angle the tangent to the surface makes with the solid surface. Note that the angle is measured through the liquid , as shown in the diagrams above. The diagram to the right shows two examples. Tension forces are shown for the liquid–air interface, the liquid–solid interface, and the solid–air interface. The example on the left is where the difference between the liquid–solid and solid–air surface tension, γ ls − γ sa , is less than the liquid–air surface tension, γ la , but is nevertheless positive, that is γ l a > γ l s − γ s a > 0 {\displaystyle \gamma _{\mathrm {la} }>\gamma _{\mathrm {ls} }-\gamma _{\mathrm {sa} }>0} In the diagram, both the vertical and horizontal forces must cancel exactly at the contact point, known as equilibrium . The horizontal component of f la is canceled by the adhesive force, f A . [ 7 ] f A = f l a sin ⁡ θ {\displaystyle f_{\mathrm {A} }=f_{\mathrm {la} }\sin \theta } The more telling balance of forces, though, is in the vertical direction. The vertical component of f la must exactly cancel the difference of the forces along the solid surface, f ls − f sa . [ 7 ] f l s − f s a = − f l a cos ⁡ θ {\displaystyle f_{\mathrm {ls} }-f_{\mathrm {sa} }=-f_{\mathrm {la} }\cos \theta } Since the forces are in direct proportion to their respective surface tensions, we also have: [ 10 ] γ l s − γ s a = − γ l a cos ⁡ θ {\displaystyle \gamma _{\mathrm {ls} }-\gamma _{\mathrm {sa} }=-\gamma _{\mathrm {la} }\cos \theta } where This means that although the difference between the liquid–solid and solid–air surface tension, γ ls − γ sa , is difficult to measure directly, it can be inferred from the liquid–air surface tension, γ la , and the equilibrium contact angle, θ , which is a function of the easily measurable advancing and receding contact angles (see main article contact angle ). This same relationship exists in the diagram on the right. But in this case we see that because the contact angle is less than 90°, the liquid–solid/solid–air surface tension difference must be negative: γ l a > 0 > γ l s − γ s a {\displaystyle \gamma _{\mathrm {la} }>0>\gamma _{\mathrm {ls} }-\gamma _{\mathrm {sa} }} Observe that in the special case of a water–silver interface where the contact angle is equal to 90°, the liquid–solid/solid–air surface tension difference is exactly zero. Another special case is where the contact angle is exactly 180°. Water with specially prepared Teflon approaches this. [ 10 ] Contact angle of 180° occurs when the liquid–solid surface tension is exactly equal to the liquid–air surface tension. 
γ l a = γ l s − γ s a > 0 θ = 180 ∘ {\displaystyle \gamma _{\mathrm {la} }=\gamma _{\mathrm {ls} }-\gamma _{\mathrm {sa} }>0\qquad \theta =180^{\circ }} An old style mercury barometer consists of a vertical glass tube about 1 cm in diameter partially filled with mercury, and with a vacuum (called Torricelli 's vacuum) in the unfilled volume (see diagram to the right). Notice that the mercury level at the center of the tube is higher than at the edges, making the upper surface of the mercury dome-shaped. The center of mass of the entire column of mercury would be slightly lower if the top surface of the mercury were flat over the entire cross-section of the tube. But the dome-shaped top gives slightly less surface area to the entire mass of mercury. Again the two effects combine to minimize the total potential energy. Such a surface shape is known as a convex meniscus. We consider the surface area of the entire mass of mercury, including the part of the surface that is in contact with the glass, because mercury does not adhere to glass at all. So the surface tension of the mercury acts over its entire surface area, including where it is in contact with the glass. If instead of glass, the tube was made out of copper, the situation would be very different. Mercury aggressively adheres to copper. So in a copper tube, the level of mercury at the center of the tube will be lower than at the edges (that is, it would be a concave meniscus). In a situation where the liquid adheres to the walls of its container, we consider the part of the fluid's surface area that is in contact with the container to have negative surface tension. The fluid then works to maximize the contact surface area. So in this case increasing the area in contact with the container decreases rather than increases the potential energy. That decrease is enough to compensate for the increased potential energy associated with lifting the fluid near the walls of the container. If a tube is sufficiently narrow and the liquid adhesion to its walls is sufficiently strong, surface tension can draw liquid up the tube in a phenomenon known as capillary action . The height to which the column is lifted is given by Jurin's law : [ 7 ] h = 2 γ l a cos ⁡ θ ρ g r {\displaystyle h={\frac {2\gamma _{\mathrm {la} }\cos \theta }{\rho gr}}} where Pouring mercury onto a horizontal flat sheet of glass results in a puddle that has a perceptible thickness. The puddle will spread out only to the point where it is a little under half a centimetre thick, and no thinner. Again this is due to the action of mercury's strong surface tension. The liquid mass flattens out because that brings as much of the mercury to as low a level as possible, but the surface tension, at the same time, is acting to reduce the total surface area. The result of the compromise is a puddle of a nearly fixed thickness. The same surface tension demonstration can be done with water, lime water or even saline, but only on a surface made of a substance to which water does not adhere. Wax is such a substance. Water poured onto a smooth, flat, horizontal wax surface, say a waxed sheet of glass, will behave similarly to the mercury poured onto glass. 
The thickness of a puddle of liquid on a surface whose contact angle is 180° is given by: [ 10 ] h = 2 γ g ρ {\displaystyle h=2{\sqrt {\frac {\gamma }{g\rho }}}} where In reality, the thicknesses of the puddles will be slightly less than what is predicted by the above formula because very few surfaces have a contact angle of 180° with any liquid. When the contact angle is less than 180°, the thickness is given by: [ 10 ] h = 2 γ l a ( 1 − cos ⁡ θ ) g ρ . {\displaystyle h={\sqrt {\frac {2\gamma _{\mathrm {la} }\left(1-\cos \theta \right)}{g\rho }}}.} For mercury on glass, γ Hg = 487 dyn/cm, ρ Hg = 13.5 g/cm 3 and θ = 140°, which gives h Hg = 0.36 cm. For water on paraffin at 25 °C, γ = 72 dyn/cm, ρ = 1.0 g/cm 3 , and θ = 107° which gives h H 2 O = 0.44 cm. The formula also predicts that when the contact angle is 0°, the liquid will spread out into a micro-thin layer over the surface. Such a surface is said to be fully wettable by the liquid. In day-to-day life all of us observe that a stream of water emerging from a faucet will break up into droplets, no matter how smoothly the stream is emitted from the faucet. This is due to a phenomenon called the Plateau–Rayleigh instability , [ 10 ] which is entirely a consequence of the effects of surface tension. The explanation of this instability begins with the existence of tiny perturbations in the stream. These are always present, no matter how smooth the stream is. If the perturbations are resolved into sinusoidal components, we find that some components grow with time while others decay with time. Among those that grow with time, some grow at faster rates than others. Whether a component decays or grows, and how fast it grows is entirely a function of its wave number (a measure of how many peaks and troughs per centimeter) and the radii of the original cylindrical stream. J.W. Gibbs developed the thermodynamic theory of capillarity based on the idea of surfaces of discontinuity. [ 13 ] Gibbs considered the case of a sharp mathematical surface being placed somewhere within the microscopically fuzzy physical interface that exists between two homogeneous substances. Realizing that the exact choice of the surface's location was somewhat arbitrary, he left it flexible. Since the interface exists in thermal and chemical equilibrium with the substances around it (having temperature T and chemical potentials μ i ), Gibbs considered the case where the surface may have excess energy, excess entropy, and excess particles, finding the natural free energy function in this case to be U − T S − μ 1 N 1 − μ 2 N 2 ⋯ {\displaystyle U-TS-\mu _{1}N_{1}-\mu _{2}N_{2}\cdots } , a quantity later named as the grand potential and given the symbol Ω {\displaystyle \Omega } . Considering a given subvolume V {\displaystyle V} containing a surface of discontinuity, the volume is divided by the mathematical surface into two parts A and B, with volumes V A {\displaystyle V_{\text{A}}} and V B {\displaystyle V_{\text{B}}} , with V = V A + V B {\displaystyle V=V_{\text{A}}+V_{\text{B}}} exactly. Now, if the two parts A and B were homogeneous fluids (with pressures p A {\displaystyle p_{\text{A}}} , p B {\displaystyle p_{\text{B}}} ) and remained perfectly homogeneous right up to the mathematical boundary, without any surface effects, the total grand potential of this volume would be simply − p A V A − p B V B {\displaystyle -p_{\text{A}}V_{\text{A}}-p_{\text{B}}V_{\text{B}}} . 
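The puddle-thickness formula reproduces the two values quoted above, and Jurin's law from the preceding passage can be evaluated in the same way; a short sketch (the 0.5 mm tube radius in the capillary-rise line is an assumed, illustrative value):

```python
import math

g = 9.81   # m/s^2

def puddle_thickness(gamma, rho, theta_deg):
    """Puddle thickness h = sqrt(2*gamma*(1 - cos(theta)) / (g*rho))."""
    return math.sqrt(2 * gamma * (1 - math.cos(math.radians(theta_deg))) / (g * rho))

def jurin_height(gamma, rho, theta_deg, r):
    """Capillary rise h = 2*gamma*cos(theta) / (rho*g*r)."""
    return 2 * gamma * math.cos(math.radians(theta_deg)) / (rho * g * r)

# Values quoted in the text: mercury on glass and water on paraffin.
print(f"mercury on glass:   h = {puddle_thickness(0.487, 13500, 140)*100:.2f} cm")  # ~0.36 cm
print(f"water on paraffin:  h = {puddle_thickness(0.072, 1000, 107)*100:.2f} cm")   # ~0.44 cm

# Capillary rise of water in a clean glass tube (assumed radius 0.5 mm,
# near-zero contact angle):
print(f"capillary rise:     h = {jurin_height(0.0728, 1000, 0, 0.5e-3)*100:.1f} cm")  # ~3 cm
```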
The surface effects of interest are a modification to this, and they can be all collected into a surface free energy term Ω S {\displaystyle \Omega _{\text{S}}} so the total grand potential of the volume becomes: Ω = − p A V A − p B V B + Ω S . {\displaystyle \Omega =-p_{\text{A}}V_{\text{A}}-p_{\text{B}}V_{\text{B}}+\Omega _{\text{S}}.} For sufficiently macroscopic and gently curved surfaces, the surface free energy must simply be proportional to the surface area: [ 13 ] [ 14 ] Ω S = γ A , {\displaystyle \Omega _{\text{S}}=\gamma A,} for surface tension γ {\displaystyle \gamma } and surface area A {\displaystyle A} . As stated above, this implies the mechanical work needed to increase a surface area A is dW = γ dA , assuming the volumes on each side do not change. Thermodynamics requires that for systems held at constant chemical potential and temperature, all spontaneous changes of state are accompanied by a decrease in this free energy Ω {\displaystyle \Omega } , that is, an increase in total entropy taking into account the possible movement of energy and particles from the surface into the surrounding fluids. From this it is easy to understand why decreasing the surface area of a mass of liquid is always spontaneous , provided it is not coupled to any other energy changes. It follows that in order to increase surface area, a certain amount of energy must be added. Gibbs and other scientists have wrestled with the arbitrariness in the exact microscopic placement of the surface. [ 15 ] For microscopic surfaces with very tight curvatures, it is not correct to assume the surface tension is independent of size, and topics like the Tolman length come into play. For a macroscopic-sized surface (and planar surfaces), the surface placement does not have a significant effect on γ ; however, it does have a very strong effect on the values of the surface entropy, surface excess mass densities, and surface internal energy, [ 13 ] : 237 which are the partial derivatives of the surface tension function γ ( T , μ 1 , μ 2 , ⋯ ) {\displaystyle \gamma (T,\mu _{1},\mu _{2},\cdots )} . Gibbs emphasized that for solids, the surface free energy may be completely different from surface stress (what he called surface tension): [ 13 ] : 315 the surface free energy is the work required to form the surface, while surface stress is the work required to stretch the surface. In the case of a two-fluid interface, there is no distinction between forming and stretching because the fluids and the surface completely replenish their nature when the surface is stretched. For a solid, stretching the surface, even elastically, results in a fundamentally changed surface. Further, the surface stress on a solid is a directional quantity (a stress tensor ) while surface energy is scalar. Fifteen years after Gibbs, J.D. van der Waals developed the theory of capillarity effects based on the hypothesis of a continuous variation of density. [ 16 ] He added to the energy density the term c ( ∇ ρ ) 2 , {\displaystyle c(\nabla \rho )^{2},} where c is the capillarity coefficient and ρ is the density. For the multiphase equilibria , the results of the van der Waals approach practically coincide with the Gibbs formulae, but for modelling of the dynamics of phase transitions the van der Waals approach is much more convenient. [ 17 ] [ 18 ] The van der Waals capillarity energy is now widely used in the phase field models of multiphase flows. Such terms are also discovered in the dynamics of non-equilibrium gases. 
[ 19 ] The pressure inside an ideal spherical bubble can be derived from thermodynamic free energy considerations. [ 14 ] The above free energy can be written as: Ω = − Δ P V A − p B V + γ A {\displaystyle \Omega =-\Delta PV_{\text{A}}-p_{\text{B}}V+\gamma A} where Δ P = p A − p B {\displaystyle \Delta P=p_{\text{A}}-p_{\text{B}}} is the pressure difference between the inside (A) and outside (B) of the bubble, and V A {\displaystyle V_{\text{A}}} is the bubble volume. In equilibrium, d Ω = 0 , and so, Δ P d V A = γ d A . {\displaystyle \Delta P\,dV_{\text{A}}=\gamma \,dA.} For a spherical bubble, the volume and surface area are given simply by V A = 4 3 π R 3 → d V A = 4 π R 2 d R , {\displaystyle V_{\text{A}}={\tfrac {4}{3}}\pi R^{3}\quad \rightarrow \quad dV_{\text{A}}=4\pi R^{2}\,dR,} and A = 4 π R 2 → d A = 8 π R d R . {\displaystyle A=4\pi R^{2}\quad \rightarrow \quad dA=8\pi R\,dR.} Substituting these relations into the previous expression, we find Δ P = 2 R γ , {\displaystyle \Delta P={\frac {2}{R}}\gamma ,} which is equivalent to the Young–Laplace equation when R x = R y . Surface tension is dependent on temperature. For that reason, when a value is given for the surface tension of an interface, temperature must be explicitly stated. The general trend is that surface tension decreases with the increase of temperature, reaching a value of 0 at the critical temperature . For further details see Eötvös rule . There are only empirical equations to relate surface tension and temperature: Both Guggenheim–Katayama and Eötvös take into account the fact that surface tension reaches 0 at the critical temperature, whereas Ramay and Shields fails to match reality at this endpoint. Solutes can have different effects on surface tension depending on the nature of the surface and the solute: What complicates the effect is that a solute can exist in a different concentration at the surface of a solvent than in its bulk. This difference varies from one solute–solvent combination to another. Gibbs isotherm states that: Γ = − 1 R T ( ∂ γ ∂ ln ⁡ C ) T , P {\displaystyle \Gamma =-{\frac {1}{RT}}\left({\frac {\partial \gamma }{\partial \ln C}}\right)_{T,P}} Certain assumptions are taken in its deduction, therefore Gibbs isotherm can only be applied to ideal (very dilute) solutions with two components. The Clausius–Clapeyron relation leads to another equation also attributed to Kelvin, as the Kelvin equation . It explains why, because of surface tension, the vapor pressure for small droplets of liquid in suspension is greater than standard vapor pressure of that same liquid when the interface is flat. That is to say that when a liquid is forming small droplets, the equilibrium concentration of its vapor in its surroundings is greater. This arises because the pressure inside the droplet is greater than outside. [ 24 ] P v f o g = P v ∘ e V 2 γ / ( R T r k ) {\displaystyle P_{\mathrm {v} }^{\mathrm {fog} }=P_{\mathrm {v} }^{\circ }e^{V2\gamma /(RTr_{\mathrm {k} })}} The effect explains supersaturation of vapors. In the absence of nucleation sites, tiny droplets must form before they can evolve into larger droplets. This requires a vapor pressure many times the vapor pressure at the phase transition point. [ 24 ] This equation is also used in catalyst chemistry to assess mesoporosity for solids. [ 25 ] The effect can be viewed in terms of the average number of molecular neighbors of surface molecules (see diagram). 
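The Kelvin-equation enhancement of the vapour pressure over small droplets is easy to evaluate; a sketch for water at 25 °C, using the molar volume of liquid water and a few illustrative droplet radii:

```python
import math

# Kelvin equation: p / p0 = exp( 2 * gamma * Vm / (r * R * T) )
gamma = 0.072      # surface tension of water near 25 C, N/m
Vm = 1.8e-5        # molar volume of liquid water, m^3/mol
R = 8.314          # gas constant, J/(mol K)
T = 298.15         # temperature, K

for r_nm in (1000, 100, 10, 1):
    r = r_nm * 1e-9
    ratio = math.exp(2 * gamma * Vm / (r * R * T))
    print(f"r = {r_nm:5d} nm  ->  p/p0 = {ratio:.3f}")
# The enhancement is negligible for micrometre drops but becomes large
# below ~10 nm, which is why supersaturation is needed for droplet nucleation.
```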
The table shows some calculated values of this effect for water at different drop sizes: The effect becomes clear for very small drop sizes, as a drop of 1 nm radius has about 100 molecules inside, which is a quantity small enough to require a quantum mechanics analysis. Because surface tension manifests itself in various effects, it offers a number of paths to its measurement. Which method is optimal depends upon the nature of the liquid being measured, the conditions under which its tension is to be measured, and the stability of its surface when it is deformed. An instrument that measures surface tension is called tensiometer. The surface tension of pure liquid water in contact with its vapor has been given by IAPWS [ 38 ] as γ w = 235.8 ( 1 − T T C ) 1.256 [ 1 − 0.625 ( 1 − T T C ) ] mN/m , {\displaystyle \gamma _{\text{w}}=235.8\left(1-{\frac {T}{T_{\text{C}}}}\right)^{1.256}\left[1-0.625\left(1-{\frac {T}{T_{\text{C}}}}\right)\right]~{\text{mN/m}},} where both T and the critical temperature T C = 647.096 K are expressed in kelvins . The region of validity the entire vapor–liquid saturation curve, from the triple point (0.01 °C) to the critical point. It also provides reasonable results when extrapolated to metastable (supercooled) conditions, down to at least −25 °C. This formulation was originally adopted by IAPWS in 1976 and was adjusted in 1994 to conform to the International Temperature Scale of 1990. The uncertainty of this formulation is given over the full range of temperature by IAPWS. [ 38 ] For temperatures below 100 °C, the uncertainty is ±0.5%. Nayar et al. [ 39 ] published reference data for the surface tension of seawater over the salinity range of 20 ≤ S ≤ 131 g/kg and a temperature range of 1 ≤ t ≤ 92 °C at atmospheric pressure. The range of temperature and salinity encompasses both the oceanographic range and the range of conditions encountered in thermal desalination technologies. The uncertainty of the measurements varied from 0.18 to 0.37 mN/m with the average uncertainty being 0.22 mN/m. Nayar et al. correlated the data with the following equation γ s w = γ w ( 1 + 3.766 × 10 − 4 S + 2.347 × 10 − 6 S t ) {\displaystyle \gamma _{\mathrm {sw} }=\gamma _{\mathrm {w} }\left(1+3.766\times 10^{-4}S+2.347\times 10^{-6}St\right)} where γ sw is the surface tension of seawater in mN/m, γ w is the surface tension of water in mN/m, S is the reference salinity [ 40 ] in g/kg, and t is temperature in degrees Celsius. The average absolute percentage deviation between measurements and the correlation was 0.19% while the maximum deviation is 0.60%. The International Association for the Properties of Water and Steam (IAPWS) has adopted this correlation as an international standard guideline. [ 41 ]
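Both correlations quoted above are straightforward to implement; a minimal sketch that recovers values close to the 72.8 mN/m quoted for pure water at 20 °C (the salinity and temperature in the seawater example are illustrative choices within the stated validity range):

```python
def gamma_water(T_K: float) -> float:
    """IAPWS correlation for the surface tension of pure water, in mN/m."""
    Tc = 647.096                      # critical temperature, K
    tau = 1.0 - T_K / Tc
    return 235.8 * tau**1.256 * (1.0 - 0.625 * tau)

def gamma_seawater(t_C: float, S: float) -> float:
    """Nayar et al. correlation for seawater surface tension, in mN/m.
    t_C: temperature in deg C (1..92), S: reference salinity in g/kg (20..131)."""
    gw = gamma_water(t_C + 273.15)
    return gw * (1.0 + 3.766e-4 * S + 2.347e-6 * S * t_C)

print(f"pure water, 20 C : {gamma_water(293.15):.2f} mN/m")    # ~72.7
print(f"pure water, 25 C : {gamma_water(298.15):.2f} mN/m")    # ~72.0
print(f"seawater, 20 C, S = 35 g/kg : {gamma_seawater(20.0, 35.0):.2f} mN/m")
```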
https://en.wikipedia.org/wiki/Surface_tension
Surface tension is one of the areas of interest in biomimetics research. Surface tension forces will only begin to dominate gravitational forces below length scales on the order of the fluid's capillary length , which for water is about 2 millimeters. Because of this scaling, biomimetic devices that utilize surface tension will generally be very small, however there are many ways in which such devices could be used. A lotus leaf is well known for its ability to repel water and self-clean. Yuan [ 1 ] and his colleagues fabricated a negative mold of alotus leaf from polydimethylsiloxane (PDMS) to capture the tiny hierarchical structures integral for the leaf's ability to repel water, known as the lotus effect . The lotus leaf's surface was then replicated by allowing a copper sheet to flow into the negative mold with the assistance of ferric chloride and pressure. The result was a lotus leaf-like surface inherent on the copper sheet. Static water contact angle measurements of the biomimetic surface were taken to be 132° after etching the copper and 153° after a stearic acid surface treatment to mimic the lotus leaf's waxy coating. A surface that mimics the lotus leaf could have numerous applications by providing water repellent outdoor gear. Various species of floating fern are able to sustain a liquid-solid barrier of air between the fern and the surrounding water when they are submerged. Like the lotus leaf, floating fern species have tiny hierarchical structures that prevent water from wetting the plant surface. Mayser and Barthlott [ 2 ] demonstrated this ability by submerging different species of the floating fern salvinia in water inside a pressure vessel to study how the air barrier between the leaf and surrounding water react to changes in pressure that would be similar to those experienced by the hull of a ship. Much other research is ongoing using these hierarchical structures in coatings on ship hulls to reduce viscous drag effects. A lung is composed of many small sacks called alveoli that allow oxygen and carbon dioxide to diffuse in and out of the blood respectively as the blood is passed through small capillaries that surround these alveoli. Surface tension is exploited by alveoli by means of a surfactant that is produced by one of the cells and released to lower the surface tension of the fluid coating the inside of the alveoli to prevent these sacks from collapsing. Huh [ 3 ] and his fellow researchers created a lung mimic that replicated the function of native alveolar cells . An extracellular matrix of gel, human alveolar epithelial cells, and human pulmonary microvascular endothelial cells were cultured on a polydimethylsiloxane membrane that was bound in a flexible vacuum diaphragm. Pressurization cycles of the vacuum diaphragm, which simulated breathing, showed similar form and function to an actual lung. The type II cells were also shown to emit the same surfactant that lowered the surface tension of the fluid coating the lung mimic. This research will hopefully some day lead to the creation of lungs that could be grown for patients that need to have a transplant or repair performed. Microvelia exploit surface tension by creating a surface tension gradient that propels them forward by releasing a surfactant behind them through a tongue-like protrusion. 
Biomimetic engineering was used in a creative and fun way to make an edible cocktail boat that mimicked the ability of Microvelia to propel themselves on the surface of water by means of a phenomenon called the Marangoni effect . Burton [ 4 ] and her colleagues used 3D printing to make small plastic boats that released different types of alcohols behind the boat to lower the surface tension and create a surface tension gradient that propelled each boat. This type of propulsion could one day be used to make sea vessels more efficient. Fern sporangia consist of hygroscopic ribs that protrude from a spine on the part of the plant that encapsulates spores in a sac. A capillary bridge is formed when water condenses onto the surface of these spines. When this water evaporates, surface tension forces between each rib cause the spine to retract and rip open the sac, spilling the spores. Borno [ 5 ] and her fellow researchers fabricated a biomimetic device from polydimethylsiloxane using standard photolithography techniques. The devices used the same hygroscopic ribs and spine that resemble fern sporangia. The researchers varied the dimensions and spacing of the features of the device and were able to fine-tune and predict the movements of the device as a whole, in hopes of using a similar device as a microactuator that can perform functions using free energy from a humid atmosphere. A leaf beetle has a remarkable ability to adhere to dry surfaces by using numerous capillary bridges between the tiny hair-like setae on its feet. Vogel and Steen [ 6 ] noted this and designed and constructed a switchable wet adhesion mechanism that mimics this ability. They used standard photolithography techniques to fabricate a switchable adhesion gripper that used a pump driven by electro-osmosis to create many capillary bridges that would hold on to just about any surface. The leaf beetle can also reverse this effect by trapping air bubbles between its setae to walk on wet surfaces or under water. This effect was demonstrated by Hosoda and Gorb [ 7 ] when they constructed a biomimetic surface that could adhere objects to surfaces under water. This technology could help to create autonomous robots able to explore terrain that is otherwise too dangerous to reach. Various life forms found in nature exploit surface tension in different ways. Hu [ 8 ] and his colleagues looked at a few examples to create devices that mimic the abilities of their natural counterparts to walk on water, jump off the liquid interface, and climb menisci . Two such devices were renditions of the water strider . Both devices mimicked the form and function of a water strider by incorporating a rowing motion of one pair of legs to propel the device; one was powered by elastic energy and the other by electrical energy. This research compared the various biomimetic devices to their natural counterparts by comparing many physical and dimensionless parameters . It could one day lead to small, energy-efficient water-walking robots for cleaning up spills in waterways. The Stenocara beetle , a native of the Namib Desert, has a unique structure on its body that allows it to capture water from a humid atmosphere. In the Namib Desert, rain is not a very common occurrence, but on some mornings a dense fog will roll over the desert.
The Stenocara beetle uses tiny raised hydrophilic spots on its hydrophobic body to collect water droplets from the fog. Once these droplets are large enough, they detach from the spots and roll down the beetle's back and into its mouth. Garrod et al. [ 9 ] demonstrated a biomimetic surface created using standard photolithography and plasma etching to produce hydrophilic spots on a hydrophobic substrate for water collection. The optimal sizing and spacing of the spots, which allowed the most water to be collected, was similar to the spacing of the spots on the body of the Stenocara beetle. This surface technology is currently being studied for use as a coating on the inside of a water bottle that would allow the bottle to self-fill if left open in a humid environment, and could help to provide aid where water is scarce.
https://en.wikipedia.org/wiki/Surface_tension_biomimetics
Surface water is water located on top of land , forming terrestrial (surrounded by land on all sides) waterbodies , and may also be referred to as blue water , as opposed to seawater and waterbodies like the ocean . The vast majority of surface water is produced by precipitation . As temperatures rise in the spring, snowmelt runs off towards nearby streams and rivers, contributing a large portion of human drinking water . Levels of surface water lessen as a result of evaporation as well as water moving into the ground and becoming groundwater . Alongside being used for drinking water, surface water is also used for irrigation, wastewater treatment , livestock , industrial uses, hydropower , and recreation. [ 1 ] For USGS water-use reports, surface water is considered freshwater when it contains less than 1,000 milligrams per liter (mg/L) of dissolved solids. [ 2 ] There are three major types of surface water. Permanent (perennial) surface waters are present year round and include lakes , rivers and wetlands ( marshes and swamps ). Semi-permanent (ephemeral) surface water refers to bodies of water that are only present at certain times of the year, including seasonally dry channels such as creeks , lagoons and waterholes . Human-made surface water is water contained by infrastructure that humans have assembled, such as dammed artificial lakes , canals and artificial ponds (e.g. garden ponds ) or swamps. [ 3 ] The surface water held by dams can be used for renewable energy in the form of hydropower. Hydropower is the use of flowing surface water sourced from rivers and streams to produce energy. [ 4 ] Surface water can be measured as annual runoff, the amount of rain and snowmelt drainage left after natural uptake, evaporation from land, and transpiration from vegetation. In areas such as California , the California Water Science Center records the flow of surface water and annual runoff by utilizing a network of approximately 500 stream gages collecting real-time data from all across the state. This network contributes to the 8,000 stream gage stations overseen by the USGS national stream gage record, which in turn has provided up-to-date records of water data over the years. Management teams that oversee the distribution of water are then able to make decisions about adequate water supply to sectors including municipal, industrial, agricultural, renewable energy (hydropower), and storage in reservoirs. [ 5 ] Due to climate change , sea ice and glaciers are melting, contributing to the rise in sea levels. As a result, salt water from the ocean is beginning to infiltrate freshwater aquifers, contaminating water used for urban and agricultural services. It is also affecting surrounding ecosystems by placing stress on the wildlife inhabiting those areas. NOAA recorded that in the years 2012 to 2016, ice sheets in Greenland and the Antarctic shrank by 247 billion tons per year. [ 6 ] This number will continue to increase as global warming persists. Climate change has a direct connection with the water cycle . It has increased evaporation yet decreased precipitation, runoff, groundwater, and soil moisture, altering surface water levels. Climate change also exacerbates existing challenges in water quality. The quality of surface water is based on the chemical inputs from the surrounding elements such as the air and the nearby landscape.
When these elements are polluted due to human activity, the chemistry of the water is altered. [ 7 ] Surface water and groundwater are two separate entities, so they must be regarded as such. However, there is an ever-increasing need for management of the two, as they are part of an interrelated system that is paramount when the demand for water exceeds the available supply (Fetter 464). Depletion of surface and groundwater sources for public consumption (including industrial, commercial, and residential) is caused by over-pumping. Aquifers near river systems that are over-pumped have been known to deplete surface water sources as well; research supporting this has been found in water budgets for a multitude of cities. Response times for an aquifer are long (Young & Bredehoeft 1972). However, a total ban on groundwater usage during water recessions would allow surface water to retain the levels required for sustainable aquatic life . By reducing groundwater pumping, surface water supplies will be able to maintain their levels, as they recharge from direct precipitation , surface runoff , etc. The Environmental Protection Agency (EPA) records that approximately 68 percent of water provided to communities in the United States comes from surface water. [ 8 ]
https://en.wikipedia.org/wiki/Surface_water
Surfactants are chemical compounds that decrease the surface tension or interfacial tension between two liquids , a liquid and a gas , or a liquid and a solid . The word surfactant is a blend of "surface-active agent", [ 1 ] coined in 1950. [ 2 ] As they consist of a water-repellent and a water-attracting part, they enable water and oil to mix; they can form foam and facilitate the detachment of dirt. Surfactants are among the most widespread and commercially important chemicals. Private households as well as many industries use them in large quantities as detergents and cleaning agents , but also for example as emulsifiers , wetting agents, foaming agents , antistatic additives, or dispersants . Surfactants occur naturally in traditional plant-based detergents, e.g. horse chestnuts or soap nuts ; they can also be found in the secretions of some caterpillars. Today one of the most commonly used classes of anionic surfactants, the linear alkylbenzene sulfonates (LAS), is produced from petroleum products . However, surfactants are increasingly produced in whole or in part from renewable biomass , like sugar, fatty alcohol from vegetable oils, by-products of biofuel production, or other biogenic material. [ 3 ] Most surfactants are organic compounds with hydrophilic "heads" and hydrophobic "tails". The "heads" of surfactants are polar and may or may not carry an electrical charge. The "tails" of most surfactants are fairly similar, consisting of a hydrocarbon chain, which can be branched, linear, or aromatic. Fluorosurfactants have fluorocarbon chains. Siloxane surfactants have siloxane chains. Many important surfactants include a polyether chain terminating in a highly polar anionic group. The polyether groups often comprise ethoxylated ( polyethylene oxide -like) sequences inserted to increase the hydrophilic character of a surfactant. Polypropylene oxides, conversely, may be inserted to increase the lipophilic character of a surfactant. Surfactant molecules have either one tail or two; those with two tails are said to be double-chained . [ 4 ] Most commonly, surfactants are classified according to polar head group. A non-ionic surfactant has no charged groups in its head. The head of an ionic surfactant carries a net positive, or negative, charge. If the charge is negative, the surfactant is more specifically called anionic ; if the charge is positive, it is called cationic . If a surfactant contains a head with two oppositely charged groups, it is termed zwitterionic , or amphoteric . Commonly encountered surfactants of each type include the following. Anionic surfactants contain anionic functional groups at their head, such as sulfate , sulfonate , phosphate , and carboxylates . Prominent alkyl sulfates include ammonium lauryl sulfate , sodium lauryl sulfate (sodium dodecyl sulfate, SLS, or SDS), and the related alkyl-ether sulfates sodium laureth sulfate (sodium lauryl ether sulfate or SLES), and sodium myreth sulfate . Carboxylates are the most common surfactants and comprise the carboxylate salts (soaps), such as sodium stearate . More specialized species include sodium lauroyl sarcosinate and carboxylate-based fluorosurfactants such as perfluorononanoate and perfluorooctanoate (PFOA or PFO). Cationic surfactants include pH-dependent primary, secondary, or tertiary amines (primary and secondary amines become positively charged at pH < 10), [ 5 ] such as octenidine dihydrochloride .
Permanently charged quaternary ammonium salts include cetrimonium bromide (CTAB), cetylpyridinium chloride (CPC), benzalkonium chloride (BAC), benzethonium chloride (BZT), dimethyldioctadecylammonium chloride , and dioctadecyldimethylammonium bromide (DODAB). Zwitterionic ( ampholytic ) surfactants have both cationic and anionic centers attached to the same molecule. The cationic part is based on primary, secondary, or tertiary amines or quaternary ammonium cations. The anionic part can be more variable and include sulfonates, as in the sultaines CHAPS (3-[(3-cholamidopropyl)dimethylammonio]-1-propanesulfonate) and cocamidopropyl hydroxysultaine . Betaines such as cocamidopropyl betaine have a carboxylate with the ammonium. The most common biological zwitterionic surfactants have a phosphate anion with an amine or ammonium, such as the phospholipids phosphatidylserine , phosphatidylethanolamine , phosphatidylcholine , and sphingomyelins . Lauryldimethylamine oxide and myristamine oxide are two commonly used zwitterionic surfactants of the tertiary amine oxide structural type. Non-ionic surfactants have covalently bonded oxygen-containing hydrophilic groups, which are bonded to hydrophobic parent structures. The water-solubility of the oxygen groups is the result of hydrogen bonding . Hydrogen bonding decreases with increasing temperature, and the water solubility of non-ionic surfactants therefore decreases with increasing temperature. Non-ionic surfactants are less sensitive to water hardness than anionic surfactants, and they foam less strongly. The differences between the individual types of non-ionic surfactants are slight, and the choice is governed primarily by cost and by special properties (e.g., effectiveness and efficiency, toxicity, dermatological compatibility, biodegradability ) or permission for use in food. [ 6 ] Fatty acid ethoxylates are a class of very versatile surfactants, which combine in a single molecule the characteristic of a weakly anionic, pH-responsive head group with the presence of stabilizing and temperature-responsive ethylene oxide units. [ 7 ] Other common non-ionic surfactants include the sorbitan esters (Spans) and the polysorbates (Tweens). Surfactants are usually organic compounds that are amphiphilic , meaning that each molecule contains both a hydrophilic "water-seeking" group (the head ) and a hydrophobic "water-avoiding" group (the tail ). [ 9 ] As a result, a surfactant contains both a water-soluble component and a water-insoluble component. Surfactants diffuse in water and adsorb at interfaces between air and water, or at the interface between oil and water in the case where water is mixed with oil. The water-insoluble hydrophobic group may extend out of the bulk water phase into a non-water phase such as an air or oil phase, while the water-soluble head group remains bound in the water phase. The hydrophobic tail may be either lipophilic ("oil-seeking") or lipophobic ("oil-avoiding") depending on its chemistry. Hydrocarbon groups are usually lipophilic, for use in soaps and detergents, while fluorocarbon groups are lipophobic, for use in repelling stains or reducing surface tension. World production of surfactants is estimated at 15 million tons per year, of which about half are soaps . Other surfactants produced on a particularly large scale are linear alkylbenzene sulfonates (1.7 million tons/y), lignin sulfonates (600,000 tons/y), fatty alcohol ethoxylates (700,000 tons/y), and alkylphenol ethoxylates (500,000 tons/y).
[ 6 ] In the bulk aqueous phase, surfactants form aggregates, such as micelles , where the hydrophobic tails form the core of the aggregate and the hydrophilic heads are in contact with the surrounding liquid. Other types of aggregates can also be formed, such as spherical or cylindrical micelles or lipid bilayers . The shape of the aggregates depends on the chemical structure of the surfactants, namely the balance in size between the hydrophilic head and the hydrophobic tail. A measure of this is the hydrophilic-lipophilic balance (HLB). Surfactants reduce the surface tension of water by adsorbing at the liquid-air interface. The relation that links the surface tension and the surface excess is known as the Gibbs isotherm (a small worked example follows this passage). The dynamics of surfactant adsorption are of great importance for practical applications such as foaming, emulsifying or coating processes, where bubbles or drops are rapidly generated and need to be stabilized. The dynamics of adsorption depend on the diffusion coefficient of the surfactant: as the interface is created, adsorption is at first limited by the diffusion of the surfactant to the interface. In some cases there can exist an energetic barrier to adsorption or desorption of the surfactant. If such a barrier limits the adsorption rate, the dynamics are said to be "kinetically limited". Such energy barriers can be due to steric or electrostatic repulsions . The surface rheology of surfactant layers, including the elasticity and viscosity of the layer, plays an important role in the stability of foams and emulsions. Interfacial and surface tension can be characterized by classical methods such as the pendant drop or spinning drop method . Dynamic surface tensions, i.e. surface tension as a function of time, can be obtained with the maximum bubble pressure apparatus. The structure of surfactant layers can be studied by ellipsometry or X-ray reflectivity . Surface rheology can be characterized by the oscillating drop method or by shear surface rheometers such as the double-cone, double-ring or magnetic rod shear surface rheometer. Surfactants play an important role as cleaning, wetting , dispersing , emulsifying , foaming and anti-foaming agents in many practical applications and products, including detergents , fabric softeners , motor oils , emulsions , soaps , paints , adhesives , inks , anti-fogs , ski waxes , snowboard wax, deinking of recycled papers , flotation, washing and enzymatic processes, and laxatives . They are also used in agrochemical formulations such as herbicides (some), insecticides , biocides (sanitizers), and spermicides ( nonoxynol-9 ), [ 10 ] and in personal care products such as cosmetics , shampoos , shower gel , hair conditioners , and toothpastes . Surfactants are used in firefighting (to make "wet water" that more quickly soaks into flammable materials [ 11 ] [ 12 ] ) and in pipelines (liquid drag reducing agents). Alkali surfactant polymers are used to mobilize oil in oil wells . Surfactants act to displace air from the matrix of cotton pads and bandages so that medicinal solutions can be absorbed for application to various body areas. They also act to displace dirt and debris by the use of detergents in the washing of wounds [ 13 ] and via the application of medicinal lotions and sprays to the surface of skin and mucous membranes. [ 14 ] Surfactants enhance remediation via soil washing, bioremediation, and phytoremediation. [ 15 ] In solution, detergents help solubilize a variety of chemical species by dissociating aggregates and unfolding proteins.
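As an illustration of the Gibbs isotherm mentioned above, the sketch below uses its common dilute-solution form for a nonionic surfactant, Γ = −(1/RT)·dγ/d ln c, to estimate the surface excess from surface tension measured at several concentrations below the CMC. The data points and the resulting numbers are invented for illustration and are not taken from any reference in this article.

# Illustrative sketch of the Gibbs adsorption isotherm mentioned above, in its
# dilute-solution form for a nonionic surfactant:
#   Gamma = -(1/RT) * d(gamma)/d(ln c)
# The concentration / surface-tension data below are made-up example values.
import numpy as np

R = 8.314          # gas constant, J/(mol K)
T = 298.15         # temperature, K
N_A = 6.022e23     # Avogadro's number, 1/mol

# Hypothetical measurements below the CMC: concentration (mol/L), tension (mN/m)
c = np.array([1e-5, 2e-5, 5e-5, 1e-4, 2e-4])
gamma_mN = np.array([68.0, 64.5, 59.9, 56.4, 52.9])

# Slope of surface tension (in N/m) versus ln(concentration)
slope = np.polyfit(np.log(c), gamma_mN * 1e-3, 1)[0]

surface_excess = -slope / (R * T)                  # mol/m^2
area_per_molecule = 1.0 / (surface_excess * N_A)   # m^2 per molecule

print(f"surface excess: {surface_excess:.2e} mol/m^2")
print(f"area per molecule: {area_per_molecule * 1e20:.1f} square angstroms")

With these made-up numbers the fitted slope gives a surface excess of roughly 2×10⁻⁶ mol/m², or about 80 square angstroms per adsorbed molecule, which is a typical order of magnitude for small-molecule surfactants.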
Popular surfactants in the biochemistry laboratory are sodium dodecyl sulfate (SDS) and cetyl trimethylammonium bromide (CTAB). Detergents are key reagents used to extract protein by lysis of cells and tissues: they disorganize the membrane's lipid bilayer (SDS, Triton X-100 , X-114 , CHAPS , DOC , and NP-40 ) and solubilize proteins. Milder detergents such as octyl thioglucoside , octyl glucoside or dodecyl maltoside are used to solubilize membrane proteins such as enzymes and receptors without denaturing them. Non-solubilized material is harvested by centrifugation or other means. For electrophoresis , for example, proteins are classically treated with SDS to denature the native tertiary and quaternary structures , allowing the separation of proteins according to their molecular weight . Detergents have also been used to decellularise organs. This process maintains a matrix of proteins that preserves the structure of the organ and often the microvascular network. The process has been successfully used to prepare organs such as the liver and heart for transplant in rats. [ 16 ] Pulmonary surfactants are also naturally secreted by type II cells of the lung alveoli in mammals . Surfactants are used with quantum dots in order to manipulate their growth, [ 17 ] assembly, and electrical properties, in addition to mediating reactions on their surfaces. Research is ongoing into how surfactants arrange themselves on the surface of the quantum dots. [ 18 ] Surfactants play an important role in droplet-based microfluidics in the stabilization of droplets and the prevention of droplet fusion during incubation. [ 19 ] Janus-type material is used as a surfactant-like heterogeneous catalyst for the synthesis of adipic acid. [ 20 ] The human body produces diverse surfactants. Pulmonary surfactant is produced in the lungs in order to facilitate breathing by increasing total lung capacity and lung compliance . In respiratory distress syndrome (RDS), surfactant replacement therapy helps patients breathe normally by using pharmaceutical forms of the surfactants. One example of a pharmaceutical pulmonary surfactant is Survanta ( beractant ) or its generic form Beraksurf, produced by Abbvie and Tekzima respectively. Bile salts , surfactants produced in the liver, play an important role in digestion. [ 21 ] Most anionic and non-ionic surfactants are non-toxic, having an LD50 comparable to that of table salt . The toxicity of quaternary ammonium compounds , which are antibacterial and antifungal , varies. Dialkyldimethylammonium chlorides ( DDAC , DSDMAC ) used as fabric softeners have high LD50 values (5 g/kg) and are essentially non-toxic, while the disinfectant alkylbenzyldimethylammonium chloride has an LD50 of 0.35 g/kg. Prolonged exposure to surfactants can irritate and damage the skin because surfactants disrupt the lipid membrane that protects skin and other cells. Skin irritancy generally increases in the series non-ionic, amphoteric, anionic, cationic surfactants. [ 6 ] Surfactants are routinely deposited in numerous ways on land and into water systems, whether as part of an intended process or as industrial and household waste. [ 22 ] [ 23 ] [ 24 ] Anionic surfactants can be found in soils as the result of sewage sludge application, wastewater irrigation, and remediation processes. Relatively high concentrations of surfactants together with multimetals can represent an environmental risk.
At low concentrations, surfactant application is unlikely to have a significant effect on trace metal mobility. [ 25 ] [ 26 ] In the case of the Deepwater Horizon oil spill , unprecedented amounts of Corexit were sprayed directly into the ocean at the leak and on the sea-water's surface. The apparent theory was that the surfactants isolate droplets of oil, making it easier for petroleum-consuming microbes to digest the oil. The active ingredients in Corexit are dioctyl sodium sulfosuccinate (DOSS), sorbitan monooleate (Span 80), and polyoxyethylenated sorbitan monooleate ( Tween-80 ). [ 27 ] [ 28 ] Because of the volume of surfactants released into the environment, for example laundry detergents in waters, their biodegradation is of great interest. Attracting much attention is the non-biodegradability and extreme persistence of fluorosurfactants , e.g. perfluorooctanoic acid (PFOA). [ 29 ] Strategies to enhance degradation include ozone treatment and biodegradation. [ 30 ] [ 31 ] Two major surfactant classes, the linear alkylbenzene sulfonates (LAS) and the alkylphenol ethoxylates (APE), break down under the aerobic conditions found in sewage treatment plants and in soil; the latter degrade to nonylphenol , which is thought to be an endocrine disruptor . [ 32 ] [ 33 ] Interest in biodegradable surfactants has led to much interest in "biosurfactants" such as those derived from amino acids. [ 34 ] Biobased surfactants can offer improved biodegradation. However, whether surfactants damage the cells of fish or cause foam mountains on bodies of water depends primarily on their chemical structure and not on whether the carbon originally used came from fossil sources, carbon dioxide or biomass. [ 3 ]
https://en.wikipedia.org/wiki/Surfactant
Surfactant leaching is a method of water and soil decontamination , [ 1 ] [ 2 ] used, for example, for oil recovery in the petroleum industry . [ 3 ] [ 2 ] It involves mixing contaminated water or soil with surfactants, followed by leaching of the emulsified contaminants. [ 1 ] [ 3 ] In oil recovery, the most common surfactant types are ethoxylated alcohols , ethoxylated nonylphenols , sulphates , sulphonates , and biosurfactants. [ 3 ]
https://en.wikipedia.org/wiki/Surfactant_leaching_(decontamination)
Surge control is the use of different techniques and equipment in a hydraulic system to prevent any excessive gain in pressure (also known as a pressure surge) that would cause the hydraulic process pressure to exceed the maximum working pressure of the mechanical equipment used in the system. Hydraulic surges are created when the velocity of a fluid suddenly changes and becomes unsteady or transient. Fluctuations in the fluid's velocity are generated by events like a pump starting or stopping, a valve opening or closing, or a reduction in line size. Hydraulic surges can be generated within a matter of seconds anywhere the fluid velocity changes and can travel through a pipeline at very high speed, damaging equipment or causing piping failures from over-pressurization. Surge relief systems absorb and limit high-pressure surges, preventing the pressure surge from traveling through the hydraulic system. Methods for controlling hydraulic surges include gas-loaded surge relief valves, spring-loaded pressure safety valves, pilot-operated valves, surge suppressors, and rupture disks. Surge control products have been used for decades in many industries to protect the maximum working pressure of hydraulic systems. Typical applications for surge relief equipment are pipelines at pump stations , receiving manifolds at storage facilities, back pressure control, marine loading and offloading, site-specific applications where pressure surges are generated by the automation system, or any location deemed critical by an engineering firm performing a surge analysis. Surge suppressors perform surge relief by acting as a pulsation dampener. Most suppressors consist of a metal tank with an internal elastic bladder. The top of the bladder is pressurized with a compressed gas while the product enters the bottom of the pressure vessel; the gas pressure in the bladder establishes the suppressor's set point. During normal operation, as the process begins to build pressure, the internal bladder contracts, allowing liquid to move into the surge suppressor pressure vessel and adding volume at that location. This increase in physical volume prevents the pressure from rising to dangerous levels. A rupture disc , also known as a burst disc , bursting disc, or burst diaphragm, is a one-time-use, non-resealing pressure relief device that, in most uses, protects a pressure vessel, equipment or system from over-pressurization or potentially damaging vacuum conditions. A rupture disc is a sacrificial part because it has a one-time-use membrane that fails at a predetermined differential pressure, either positive or vacuum. The membrane is usually made out of metal, but nearly any material can be used to suit a particular application. Rupture discs provide instant response (within milliseconds) to an increase or decrease in system pressure, but once the disc has ruptured it will not reseal and must be replaced. Single-use devices are initially cost-effective, but can become time-consuming and labor-intensive to repeatedly change out. Spring-loaded pressure safety valves use a compressed spring to hold the valve closed. The valve will remain closed until the process pressure exceeds the set point of the spring pressure. The valve will open fully when the set point is reached and will remain open until a certain blowdown factor is reached.
Oftentimes the blowdown is a percentage of the set point, such as 20%; that means the valve will remain open until the process pressure decreases to 20% below the set point of the spring-loaded relief valve. Surge relief valves are known for their quick speed of response, excellent flow characteristics, and durability in high pressure applications. Surge relief valves are designed to have an adjustable set point that is directly related to the maximum pressure of the pipeline or system. When the product on the inlet of the valve exceeds the set point, it forces the valve to open and allows the excess surge to be bled out into a breakout tank or recirculated into a different pipeline. In the event of a surge, the majority of the pressure is absorbed in the liquid and pipe, and just that quantity of liquid which is necessary to relieve pressures of unsafe proportions is discharged to the surge relief tank. Some valve manufacturers use the piston style with a nitrogen control system and external plenums , while others use elastomeric tubes , external pilots , or internal chambers . Pilot-operated surge relief valves are typically used to protect pipelines that move low viscosity products like gasoline or diesel . This style of valve is installed downstream of the pump or valve that creates the surge. The valve is controlled by an external, normally closed pilot valve. The pilot is set to the desired set point of the system, with a sense line that runs upstream of the valve. When the upstream process conditions start to exceed the pilot set point, the valve begins to open and relieves the excess pressure until the correct pressure is reached, causing the valve to close. Piston-style gas-loaded surge relief valves operate on the balanced piston design and can be used in a variety of applications because they can handle high and low viscosity products while maintaining a fast speed of response. An inert gas, most commonly nitrogen, is loaded on the back side of the piston, forcing the valve closed. The nitrogen pressure on the back side of the piston is what determines the valve's set point. These valves remain closed until the inlet pressure exceeds the set point (the nitrogen pressure), at which time the valve opens and remains open as long as the process pressure is above the nitrogen pressure. Once the process pressure starts to decay, the valve starts to close, and once the process pressure is below the nitrogen pressure, the valve closes again. Rubber boot-style gas-loaded relief valves operate by using nitrogen pressure loaded on the outside diameter of a rubber boot that covers the flow path through the relief valve. As long as the process pressure is below the nitrogen pressure, the valve is closed. As soon as the process pressure rises above the nitrogen pressure, the product in the line forces the rubber boot away from the barrier and allows product to pass through the valve. When the process pressure decreases below the nitrogen pressure, the valve closes again. There are many different approaches to controlling surge relief equipment, starting with the technology used in the specific application. Spring-loaded pressure safety valves and pilot-operated valves are controlled mechanically using the pressure from a compressed spring.
Typically there is an adjustment stem that allows for minor adjustments of the set point by compressing or decompressing the spring. This design is limited by the pressure that can be generated by the spring in the valve. Gas-loaded relief valves are controlled by the nitrogen pressure loaded into the relief valve. If there is no control on the nitrogen pressure, the nitrogen gas will expand and contract with the changing ambient temperature, and as the nitrogen pressure drifts with the temperature so does the set point of the relief valve (a simple illustration of this drift follows this passage). The nitrogen pressure has traditionally been controlled using mechanical regulators. Regulators are designed to operate under flowing conditions. When used in the closed-end plenum system of a surge relief valve, a regulator must also perform an on/off function to correct for thermal expansion and contraction. Being a pressure control device designed for use under flowing conditions, it is not well suited to perform the on/off function needed in a closed-end system such as a surge relief valve plenum. Another common issue is that regulators are required to operate outside of their design limits when making the corrections needed for thermal expansion and contraction. The volume of gas that must be added to or vented from the system is so small that the regulator is required to operate below the minimum threshold of its performance curve. As a result, inconsistent corrections are made to the system pressure, which affect the gas-loaded relief valve's set point. A highly accurate and reliable approach to controlling the nitrogen pressure on a gas-loaded surge relief valve is to use an electronic control system to add nitrogen to, and vent it from, the valve's plenum. This technique assures the set-point accuracy and repeatability required in this critical application.
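To make the thermal drift described above concrete, the sketch below treats the closed nitrogen plenum as a fixed volume of ideal gas, so its absolute pressure, and therefore the valve's set point, scales with absolute temperature. The charge pressure and the temperature swing are assumed example values, not figures from any particular valve.

# Illustration of the set-point drift described above for a gas-loaded valve
# with no nitrogen pressure control. The closed plenum is treated as a fixed
# volume of ideal gas, so absolute pressure scales with absolute temperature.
# The set-point and temperature values are assumptions for illustration.

def drifted_set_point(p_charge_kpa_abs, t_charge_c, t_ambient_c):
    """Absolute nitrogen pressure (kPa) after an ambient temperature change."""
    return p_charge_kpa_abs * (t_ambient_c + 273.15) / (t_charge_c + 273.15)

p_charge = 5_000.0  # kPa absolute, nitrogen charged at 20 C to match the set point
for t in (-10.0, 20.0, 45.0):
    p = drifted_set_point(p_charge, 20.0, t)
    print(f"ambient {t:5.1f} C -> plenum pressure {p:7.1f} kPa ({p - p_charge:+7.1f})")

For a plenum charged at 20 °C, a swing from −10 °C to 45 °C shifts the effective set point by roughly ±10%, which is why the article notes that uncontrolled nitrogen pressure undermines set-point accuracy.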
https://en.wikipedia.org/wiki/Surge_control
A surge tank (or surge drum or surge pool ) is a standpipe or storage reservoir at the downstream end of a closed aqueduct , feeder pipe, or dam that absorbs sudden rises of pressure and quickly provides extra water during a brief drop in pressure. In mining technology, ore pulp pumps use a relatively small surge tank to maintain a steady loading on the pump. For hydroelectric power uses, a surge tank is an additional storage space or reservoir fitted between the main storage reservoir and the powerhouse (as close to the powerhouse as possible). Surge tanks are usually provided in high- or medium-head plants when there is a considerable distance between the water source and the power unit, necessitating a long penstock . To see how a surge tank works, consider a pipe containing a flowing fluid. When a valve is either fully or partially closed at some point downstream, the fluid will continue to flow at the original velocity. In order to counteract the momentum of the fluid, the pressure will rise significantly (pressure surge) just upstream of the control valve and may result in damage to the pipe system. If a surge chamber is connected to the pipeline just upstream of the valve, then on valve closure the fluid, instead of being stopped suddenly by the valve, will flow upwards into the chamber, reducing the surge pressures experienced in the pipeline. Upon closure of the valve, the fluid continues to flow, passing into the surge tank and causing the water level in the tank to rise. The level in the tank will continue to rise until the additional head due to the height of fluid in the tank balances the surge pressure in the pipeline. [ 1 ] At this point the flow in the tank and pipeline will reverse, causing the level in the tank to drop. This oscillation in tank level and flow will continue for some time, but its magnitude will dissipate due to the effects of friction (a rough estimate of the oscillation period and amplitude for an idealized case follows this passage). A surge tank is also used in automotive applications to ensure that the inlet to the fuel pump is never starved of fuel. [ 2 ] It is typically used in vehicles with electronic fuel injection or that will sustain high lateral acceleration loads for extended periods. [ 3 ] Aircraft surge tanks are used on a select few aircraft to ensure that fuel does not spill onto the ground when the fuel expands. These tanks must be emptied periodically, which happens fairly often, to prevent fuel from spilling. [ citation needed ]
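The oscillation described above can be estimated with the classical frictionless "mass oscillation" result for a simple surge tank connected to a tunnel of length L: the period is T = 2π√(L·A_s/(g·A_t)) and the maximum rise above the static level is z_max = v0·√(L·A_t/(g·A_s)), where A_t and A_s are the tunnel and tank cross-sections and v0 is the initial flow velocity. The sketch below evaluates these expressions for invented example dimensions; real installations include friction, which damps the oscillation as the article notes.

# Rough sketch of the frictionless mass oscillation between a tunnel and a
# simple surge tank after sudden valve closure, using the classical results
#   T = 2*pi*sqrt(L*A_s/(g*A_t))   and   z_max = v0*sqrt(L*A_t/(g*A_s)).
# All dimensions below are invented example values, not from the article.
import math

g = 9.81       # m/s^2
L = 1_000.0    # tunnel length between reservoir and surge tank, m
A_t = 3.0      # tunnel cross-sectional area, m^2
A_s = 20.0     # surge tank cross-sectional area, m^2
v0 = 2.5       # initial flow velocity in the tunnel, m/s

period = 2 * math.pi * math.sqrt(L * A_s / (g * A_t))   # s
z_max = v0 * math.sqrt(L * A_t / (g * A_s))             # m above the static level

print(f"oscillation period: {period:.0f} s")
print(f"maximum surge height: {z_max:.1f} m")

With these example numbers the water level would rise roughly 10 m above the static level and oscillate with a period of a few minutes, which illustrates why surge tank sizing matters for long penstocks.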
https://en.wikipedia.org/wiki/Surge_tank
Preoperative care refers to health care provided before a surgical operation . The aim of preoperative care is to optimize the patient's condition in order to increase the likelihood that the surgery succeeds. At some point before the operation, the healthcare provider will assess the fitness of the person to have surgery. This assessment should include whatever tests are indicated , but not screening for conditions without an indication. Immediately before surgery the person's body is prepared, perhaps by washing with an antiseptic , and, if needed, their anxiety is addressed to make them comfortable. At some point before surgery a health care provider conducts a preoperative assessment to verify that a person is fit and ready for the surgery. [ 1 ] [ 2 ] For surgeries in which a person receives either general or local anesthesia, this assessment may be done either by a doctor or by a nurse trained to do the assessment. [ 2 ] The available research does not give insight about any differences in outcomes depending on whether a doctor or nurse conducts this assessment. [ 2 ] Playing calming music to patients immediately before surgery has a beneficial effect in addressing anxiety about the surgery. [ 3 ] Hair removal at the location where the surgical incision is made is often done before the surgery. [ 4 ] Sufficient evidence does not exist to say that removing hair is a useful way to prevent infections. [ 4 ] When it is done immediately before surgery, the use of hair clippers might be preferable to shaving . [ 4 ] Bathing with an antiseptic like chlorhexidine does not seem to affect the incidence of complications after surgery. [ 5 ] However, washing the surgical site with chlorhexidine after surgery does seem helpful for preventing surgical site infection . [ 6 ] Screening is a test to see whether a person has a disease , and screenings are often done before surgery. Screenings should happen when they are indicated and not otherwise as a matter of routine; screenings done without an indication carry the risks of unnecessary health care , and several preoperative screenings are commonly overused. Surgical clearance , or preoperative medical clearance , is an evaluation to determine if a patient is healthy enough to undergo a planned surgery. [ 13 ] The primary objective of this evaluation is to identify any existing medical conditions or risk factors that may lead to complications during or after the surgery. [ 14 ] This allows healthcare providers to take necessary precautions and optimize the patient's health before the operation. The purpose is to lower modifiable risk during and after the surgery along with estimating the total risk of undertaking the procedure. [ 15 ] Among children who are at normal risk of pulmonary aspiration or vomiting during anaesthesia, there is no evidence showing that denying them oral liquids before surgery improves outcomes, but there is evidence showing that giving liquids prevents anxiety. [ 16 ] Sometimes before a surgery a health care provider will recommend a health intervention to modify a risky behavior associated with complications from surgery. Smoking cessation before surgery is likely to reduce the risk of complications from surgery. [ 17 ] In circumstances in which a person's doctor advises them to avoid drinking alcohol before and after the surgery, but in which the person seems likely to drink anyway, intense interventions which direct a person to quit using alcohol have been shown to be helpful in reducing complications from surgery. [ 18 ]
https://en.wikipedia.org/wiki/Surgical_clearance
Surgical stress is the systemic response to surgical injury and is characterized by activation of the sympathetic nervous system, endocrine responses, and immunological and haematological changes. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] Measurement of surgical stress is used in anaesthesia , physiology and surgery . Analysis of the surgical stress response can be used for evaluation of surgical techniques and comparisons of different anaesthetic protocols, and can be performed in either the intraoperative or the postoperative period. If there is a choice between different techniques for a surgical procedure, one method to evaluate and compare the surgical techniques is to subject one group of patients to one technique and the other group of patients to another technique, after which the surgical stress responses triggered by the procedures are compared. Absent any other difference, the technique with the least surgical stress response is considered the best for the patient. [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] [ 16 ] [ excessive citations ] Similarly, a group of patients can be subjected to a surgical procedure where one anaesthetic protocol is used, and another group of patients are subjected to the same surgical procedure but with a different anaesthetic protocol. The anaesthetic protocol that yields the least stress response is considered the most suitable for that surgical procedure. [ 17 ] [ 18 ] [ 19 ] [ 20 ] [ 21 ] [ 22 ] [ 23 ] [ 24 ] [ 25 ] [ 26 ] [ 27 ] [ 28 ] [ 29 ] [ excessive citations ] It is generally considered or hypothesized that a more invasive surgery, with extensive tissue trauma and noxious stimuli, triggers a more significant stress response. [ 30 ] [ 31 ] [ 32 ] [ 33 ] [ 34 ] [ 35 ] However, the duration of surgery may also affect the stress response, which may make comparisons of procedures that differ in duration difficult. [ 36 ] Loss of nitrogen (urea) was observed as early as the 1930s in fracture patients by the Scottish physician David Cuthbertson . The reason for the patients' catabolic response was not understood at the time, but attention later turned to the stress reaction caused by the surgery. [ 37 ] [ 38 ] The evolutionary background is believed to be that a wounded animal increases its chance of survival by using stored energy reserves. The stress reaction thus initiates a catabolic state by an increased release of catabolic hormones; immunosuppressive hormones are also released. In a surgery patient, the stress reaction is considered detrimental to wound healing. However, surgical stress has been reported to reduce mortality from endotoxin shock. [ 39 ] Today, development of new surgical techniques and anaesthetic protocols aims to minimise the surgical stress reaction. [ 40 ] [ 41 ] Surgical stress begins with tissue damage that leads to either a neurohormonal or an immunologic response. [ 42 ] [ 43 ] Part of the neurohormonal response involves the release of catecholamines and the activation of the RAAS system, while the other involves the cortisol released and a feedback mechanism on the HPA axis. Both can eventually lead to a specific immune response involving T helper cells. The immunologic response is categorized as innate or specific, with the innate response releasing acute phase reactants and inflammatory markers such as TNF-alpha, IL-1, IL-6, IL-8, CRP and fibrinogen. The main cells in the specific immune response include TH1, TH2, cytotoxic T cells, and B cells.
[ 42 ] Examples of parameters used are blood pressure , heart rate , heart rate variability , photoplethysmography and skin conductance . Essentially, physiologic parameters are measured in order to assess sympathetic tone as a surrogate measure of stress. Intraoperative neurophysiological monitoring can also be used. Examples of commonly used biomarkers are adrenaline , cortisol , interleukins , noradrenaline and vasopressin . [ 44 ] [ 45 ] Elements that affect the body's post-surgical stress response can be divided into physiological, pharmacological, and surgical. Enhanced recovery after surgery (ERAS) protocols have been found to improve post-surgical outcomes by alleviating the surgical stress response. ERAS protocols include preoperative, perioperative and postoperative considerations. [ 46 ] Preoperative considerations include expanding patient knowledge through education, improving nutrition, and managing the effects of comorbid conditions. Eating before surgery can trigger or amplify the body's stress response due to increased metabolic and digestive activity. [ 47 ] [ 48 ] Intraoperatively, the use of specific analgesia [ 49 ] and the maintenance of temperature are priority considerations. Postoperatively, the patient's return to normal feeding, out-of-bed protocols, and pain management aid in decreasing the post-surgical stress response and improve outcomes. [ 46 ] It is recommended that patients start early oral feeding postoperatively to reduce surgical stress by reestablishing metabolic control, decreasing stress-related catabolism and increasing gastrointestinal function. [ 48 ] [ 50 ] The preoperative use of pharmacological agents such as beta blockers and alpha adrenergic receptor blockers can help improve survival for patients. [ 51 ]
https://en.wikipedia.org/wiki/Surgical_stress
Surprisal analysis is an information-theoretical analysis technique that integrates and applies principles of thermodynamics and maximal entropy . Surprisal analysis is capable of relating the underlying microscopic properties to the macroscopic bulk properties of a system. It has already been applied to a spectrum of disciplines including engineering, physics , chemistry and biomedical engineering . Recently, it has been extended to characterize the state of living cells, specifically monitoring and characterizing biological processes in real time using transcriptional data. Surprisal analysis was formulated at the Hebrew University of Jerusalem as a joint effort between Raphael David Levine , Richard Barry Bernstein and Avinoam Ben-Shaul in 1972. Levine and colleagues had recognized a need to better understand the dynamics of non-equilibrium systems , particularly of small systems, that are not seemingly amenable to thermodynamic reasoning. [ 1 ] Alhassid and Levine first applied surprisal analysis in nuclear physics, to characterize the distribution of products in heavy ion reactions. Since its formulation, surprisal analysis has become a critical tool for the analysis of reaction dynamics and is an official IUPAC term. [ 2 ] Maximum entropy methods are at the core of a new view of scientific inference, allowing analysis and interpretation of large and sometimes noisy data. Surprisal analysis extends principles of maximal entropy and of thermodynamics , where both equilibrium thermodynamics and statistical mechanics are assumed to be inferential processes. This enables surprisal analysis to be an effective method of information quantification and compaction and of providing an unbiased characterization of systems. Surprisal analysis is particularly useful for characterizing and understanding dynamics in small systems, where energy fluxes that are otherwise negligible in large systems heavily influence system behavior. Foremost, surprisal analysis identifies the state of a system when it reaches its maximal entropy, or thermodynamic equilibrium . This is known as the balance state of the system, because once a system reaches its maximal entropy it can no longer initiate or participate in spontaneous processes. Following the determination of the balance state, surprisal analysis then characterizes all the states in which the system deviates from the balance state. These deviations are caused by constraints, which prevent the system from reaching its maximal entropy. Surprisal analysis is applied to both identify and characterize these constraints. In terms of the constraints, the probability P ( n ) {\displaystyle P(n)} of an event n {\displaystyle n} is quantified by {\displaystyle P(n)=P^{0}(n)\exp \left(-\sum _{\alpha }\lambda _{\alpha }G_{\alpha }(n)\right).} Here P 0 ( n ) {\displaystyle P^{0}(n)} is the probability of the event n {\displaystyle n} in the balance state. It is usually called the "prior probability" because it is the probability of an event n {\displaystyle n} prior to any constraints. The surprisal itself is defined as {\displaystyle I(n)=-\ln {\frac {P(n)}{P^{0}(n)}}=\sum _{\alpha }\lambda _{\alpha }G_{\alpha }(n).} The surprisal thus equals the sum over the constraints and is a measure of the deviation from the balance state. These deviations are ranked by the degree of deviation from the balance state and ordered from most to least influential to the system. This ranking is provided through the use of Lagrange multipliers . The most important constraint, usually the constraint sufficient to characterize a system, exhibits the largest Lagrange multiplier.
The multiplier for constraint α {\displaystyle \alpha } is denoted above as λ α {\displaystyle \lambda _{\alpha }} ; larger multipliers identify more influential constraints. The event variable G α ( n ) {\displaystyle G_{\alpha }(n)} is the value of the constraint α {\displaystyle \alpha } for the event n {\displaystyle n} . Using the method of Lagrange multipliers [ 3 ] requires that the prior probability P 0 ( n ) {\displaystyle P^{0}(n)} and the nature of the constraints be experimentally identified. A numerical algorithm for determining Lagrange multipliers has been introduced by Agmon et al. [ 4 ] Recently, singular value decomposition and principal component analysis of the surprisal were utilized to identify constraints on biological systems, extending surprisal analysis to a better understanding of biological dynamics. Surprisal (a term coined [ 5 ] in this context by Myron Tribus [ 6 ] ) was first introduced to better understand the specificity of energy release and the selectivity of energy requirements of elementary chemical reactions . [ 1 ] This gave rise to a series of new experiments which demonstrated that in elementary reactions the nascent products could be probed and that the energy is preferentially released and not statistically distributed. [ 1 ] Surprisal analysis was initially applied to characterize a small three-molecule system that did not seemingly conform to the principles of thermodynamics, and a single dominant constraint was identified that was sufficient to describe its dynamic behavior. Similar results were then observed in nuclear reactions , where differential states with varying energy partitioning are possible. Often chemical reactions require energy to overcome an activation barrier ; surprisal analysis is applicable to such applications as well. [ 7 ] Later, surprisal analysis was extended to mesoscopic systems, bulk systems [ 3 ] and to dynamical processes. [ 8 ] Surprisal analysis was extended to better characterize and understand cellular processes, [ 9 ] biological phenomena and human disease with reference to personalized diagnostics . Surprisal analysis was first utilized to identify genes implicated in the balance state of cells in vitro; the genes mostly present in the balance state were genes directly responsible for the maintenance of cellular homeostasis . [ 10 ] Similarly, it has been used to discern two distinct phenotypes during the EMT of cancer cells. [ 11 ]
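As a toy illustration of the expansion above, the sketch below takes a flat prior over five discrete events, generates an "observed" distribution from a single constraint, and then recovers the multiplier by fitting the surprisal −ln[P(n)/P⁰(n)] against the constraint variable G(n). All numbers are invented; the fit recovers λ₁ (the slope), while the intercept λ₀ simply absorbs the normalization.

# Toy illustration of the surprisal expansion described above: given a prior
# (maximal-entropy) distribution P0 and an observed distribution P over a few
# discrete events, compute the surprisal -ln(P/P0) and fit it with a single
# constraint G(n), i.e.  -ln(P/P0) ~ lambda_0 + lambda_1 * G(n).
# All numbers are invented for illustration.
import numpy as np

G = np.array([0.0, 1.0, 2.0, 3.0, 4.0])     # value of the constraint for each event n
P0 = np.full(5, 0.2)                        # flat prior (balance state)

# An "observed" distribution generated from one dominant constraint
lam0_true, lam1_true = -0.81, 0.40
P = P0 * np.exp(-lam0_true - lam1_true * G)
P /= P.sum()                                # renormalize

surprisal = -np.log(P / P0)                 # deviation from the balance state
lam1, lam0 = np.polyfit(G, surprisal, 1)    # slope ~ lambda_1, intercept absorbs normalization

print(f"fitted lambda_1 = {lam1:.2f} (true value {lam1_true})")
print(f"fitted lambda_0 = {lam0:.2f}")

In a real application the constraint variable G(n) is not known in advance; as noted above, techniques such as singular value decomposition of the surprisal matrix are used to extract the dominant constraints from data.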
https://en.wikipedia.org/wiki/Surprisal_analysis
In clinical trials , a surrogate endpoint (or surrogate marker ) is a measure of the effect of a specific treatment that may correlate with a real clinical endpoint but does not necessarily have a guaranteed relationship. The National Institutes of Health (USA) defines surrogate endpoint as "a biomarker intended to substitute for a clinical endpoint". [ 1 ] [ 2 ] Surrogate markers are used when the primary endpoint is undesired (e.g., death), or when the number of events is very small, thus making it impractical to conduct a clinical trial to gather a statistically significant number of endpoints. The FDA and other regulatory agencies will often accept evidence from clinical trials that demonstrate a benefit on surrogate markers. [ 3 ] Surrogate endpoints can be obtained from different modalities, such as behavioural or cognitive scores, or biomarkers from electroencephalography ( qEEG ), MRI , PET , or biochemical biomarkers. A correlate does not make a surrogate. It is a common misconception that if an outcome is a correlate (that is, correlated with the true clinical outcome) it can be used as a valid surrogate endpoint (that is, a replacement for the true clinical outcome). However, proper justification for such replacement requires that the effect of the intervention on the surrogate endpoint predicts the effect on the clinical outcome: a much stronger condition than correlation. [ 4 ] [ 5 ] In this context, the term Prentice criteria is used. [ 6 ] The term "surrogate" should not be used in describing endpoints. Instead, descriptions of results and interpretations should be formulated in terms that designate the specific nature and category of variable assessed. [ 7 ] A surrogate endpoint of a clinical trial is a laboratory measurement or a physical sign used as a substitute for a clinically meaningful endpoint that measures directly how a patient feels, functions or survives. Changes induced by a therapy on a surrogate endpoint are expected to reflect changes in a clinically meaningful endpoint. [ 8 ] A commonly used example is cholesterol . While elevated cholesterol levels increase the likelihood of heart disease , the relationship is not linear - many people with normal cholesterol develop heart disease, and many with high cholesterol do not. "Death from heart disease" is the endpoint of interest, but "cholesterol" is the surrogate marker. A clinical trial may show that a particular drug (for example, simvastatin (Zocor)) is effective in reducing cholesterol, without showing directly that simvastatin prevents death. Proof of Zocor's efficacy in reducing cardiovascular disease was only presented five years after its original introduction, and then only for secondary prevention . [ 9 ] In another case, AstraZeneca was accused of marketing rosuvastatin (Crestor) without providing hard endpoint data, relying instead on surrogate endpoints. The company countered that rosuvastatin had been tested on larger groups of patients than any other drug in the class, and that its effects should be comparable to the other statins. [ 10 ] Progression-free survival is a prominent example in oncology: there are cancer drugs that were approved on the basis of progression-free survival but failed to show improvements in overall survival in subsequent studies. In breast cancer, bevacizumab (Avastin) initially gained approval from the Food and Drug Administration , but subsequently had its license revoked.
[ 11 ] [ 12 ] More patient-focused surrogate endpoints, such as Overall Treatment Utility , may offer a more meaningful alternative. [ 13 ] [ 14 ] In HIV/AIDS medicine, CD4 counts and viral loads are used as surrogate markers for drug approval in clinical trials. [ 15 ] In hepatitis C medicine, the surrogate endpoint "sustained virological response" has been used for the approval of expensive drugs known as direct-acting antivirals. The validity of this surrogate endpoint for predicting clinical outcomes has been challenged. [ 16 ] [ 17 ] For several vaccines (anthrax, hepatitis A, etc.), the induction of detectable antibodies in blood is used as a surrogate marker for vaccine effectiveness, as exposure of individuals to an actual pathogen is considered unethical. [ 18 ] A recent study [ 19 ] showed that plasma biomarkers have the potential to be used as surrogate biomarkers in Alzheimer's disease (AD) clinical trials. More specifically, this study demonstrated that plasma p-tau181 could potentially be used to monitor large-scale population interventions targeting preclinical AD individuals. There have been a number of instances in which studies using surrogate markers were used to show benefit from a particular treatment, but later a repeat study looking at clinical endpoints showed no benefit, or even showed harm. [ 20 ] In 2021, the FDA came under heavy criticism for the approval of an Alzheimer's drug called Aduhelm based on a surrogate endpoint that was later shown to be based on fraudulent data. [ 21 ] [ 22 ] Reporting of surrogate endpoints in randomized controlled trials is an emerging source of concern for clinicians and epidemiologists . This issue has been addressed in two reporting guidelines called CONSORT and SPIRIT , [ 23 ] [ 24 ] which will help researchers report surrogate endpoints in randomized controlled trials .
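The "a correlate does not make a surrogate" point made earlier in this article can be illustrated with a small, entirely artificial simulation: a biomarker that correlates strongly with the clinical outcome because both track an underlying disease severity, while the treatment moves only the biomarker and leaves the outcome unchanged. All parameters below are invented for illustration.

# Toy simulation of why a strong correlate can still be a useless surrogate.
# The biomarker and the clinical outcome both track latent disease severity,
# but the treatment lowers only the biomarker. All values are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

severity = rng.normal(size=n)                    # latent disease severity
treated = rng.integers(0, 2, size=n)             # 1 = treatment, 0 = control

# Biomarker tracks severity and is lowered by treatment;
# the clinical outcome depends on severity only (treatment has no real benefit).
biomarker = severity - 1.0 * treated + rng.normal(scale=0.5, size=n)
outcome = severity + rng.normal(scale=0.5, size=n)

corr = np.corrcoef(biomarker[treated == 0], outcome[treated == 0])[0, 1]
effect_on_biomarker = biomarker[treated == 1].mean() - biomarker[treated == 0].mean()
effect_on_outcome = outcome[treated == 1].mean() - outcome[treated == 0].mean()

print(f"biomarker-outcome correlation (controls): {corr:.2f}")       # strong correlate
print(f"treatment effect on biomarker: {effect_on_biomarker:+.2f}")  # looks beneficial
print(f"treatment effect on outcome:   {effect_on_outcome:+.2f}")    # essentially zero

In such a scenario, a trial reporting only the biomarker would look strongly positive even though the treatment confers no clinical benefit, which is exactly the failure mode that conditions such as the Prentice criteria are intended to guard against.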
https://en.wikipedia.org/wiki/Surrogate_endpoint
Surrogation is a psychological phenomenon found in business practices whereby a measure of a construct of interest evolves to replace that construct. Research on performance measurement in management accounting identifies surrogation with "the tendency for managers to lose sight of the strategic construct(s) the measures are intended to represent, and subsequently act as though the measures are the constructs". [ 1 ] An everyday example of surrogation is a manager tasked with increasing customer satisfaction who begins to believe that the customer satisfaction survey score actually is customer satisfaction . Inspired by work by Yuji Ijiri , the term surrogation was coined by Willie Choi, Gary Hecht, and Bill Tayler in their paper, "Lost in Translation: The Effects of Incentive Compensation on Strategy Surrogation". [ 2 ] They show managers tend to use measures as surrogates for strategy , acting as if measures were in fact the strategy when making optimization decisions. This appears to occur even if a measure-maximizing choice ultimately works against the strategy. They also show surrogation is exacerbated by incentive compensation . But, the phenomenon is distinct from wealth -maximizing behavior, since it persists both when incentives are removed and when they are changed to create an opportunity cost for maximizing the surrogate. The additional tendency to surrogate in the presence of incentives is reduced when managers are compensated based on multiple measures of a strategy rather than on a single measure. [ 2 ] Choi, Hecht, and Tayler proposed attribute substitution as a mechanism for surrogation. Attribute substitution in decision-making involves a complex target attribute being replaced by a more easily accessible heuristic attribute . For this to occur, the target attribute must be relatively inaccessible, the heuristic attribute must be readily accessible, and the mental substitution must not be consciously rejected by the person. In the case of surrogation, the two attributes are related in that some party intends the heuristic attribute to serve as proxy for the target attribute. [ 2 ] In a follow-up study, Choi, Hecht, and Tayler demonstrate involving managers in the selection of a strategy reduces their tendency to surrogate. Merely involving managers in the strategy deliberation process does not appear to have the same surrogation-reducing effect as involving them in the actual selection of the strategy. [ 1 ] Jeremiah Bentley shows the effects of incentive compensation on surrogation are partially explained by a mechanism in which measure-based incentive compensation (in this case using a single measure) and wealth-maximizing behavior lead agents to distort their operational decisions (see Campbell's law ). That operational distortion, in turn, leads them to change their beliefs about the compensated measure's causal relationship with the outcome—in other words, to surrogate—possibly as a means of reducing cognitive dissonance arising from inconsistency between beliefs and actions. He demonstrates that allowing people to provide narrative explanations for their decisions reduces the amount of operational distortion observed under an incentive compensation scheme, and also reduces surrogation. He also finds that the effect is larger for people who have a high preference for consistency, which supports the argument that surrogation is due to an attempt to reduce cognitive dissonance. 
[ 3 ] Robert Bloomfield had proposed a link between cognitive dissonance and surrogation in an earlier paper. [ 4 ] In a subsequent study, Paul Black, Tom Meservy, Bill Tayler, and Jeff Williams show that surrogation can occur simply when a measure is provided to managers, even if they do not receive incentive compensation based on the measure. [ 5 ] That is, if managers know that something is being measured, they will begin to surrogate on that measure, even if they are told that the measure is no more nor less important than other measures when determining their compensation. This implies that firms must be careful in determining what measures are communicated to managers, as managers may surrogate on a measure just because they hear that it is being measured. In a related study, Black, Meservy, Tayler, Williams, and Brock Kirwan (neuroscientist) use fMRI technology to investigate how surrogation happens at a neurocognitive level. Their study provides evidence that the neurocognitive process involved when considering strategies is similar to the processes when considering abstract words, and that the processes involved when considering measures is similar to the processes when considering concrete words. They provide further evidence that surrogation is an involuntary mistake that can be overcome with effort. Other studies have evaluated the intentionality of surrogation among executive management. Jeff Reinking, Vicky Arnol, and Steve G. Sutton demonstrate through an exploratory cross-sectional field study with 27 executive to mid-level managers that executive management intentionally designs dashboards to achieve strategy surrogation. The evidence supports that the impact of this intentional surrogation appears to arise through operational managers' beliefs that dashboard measures align with organizational strategy and lead to improved managerial and organizational performance. However, Reinking, Arnol, and Sutton point out that this relationship between the perceived alignment of performance measures and managerial and organizational performance is mediated by the quality of the dashboard and information. These field tests were followed by another study evaluating the use of KPI (key performance indicator) dashboards by management. The results showed that two primary constructs, strategy alignment and interactive management control, are important factors impacting the extent of dashboard use, perceived managerial performance, and perceived organizational performance. Operational managers perceive that dashboards focused on specifically tailored KPIs lead to both improved managerial and organizational performance. As a result, the study suggests that intentional strategy surrogation may have beneficial effects at the lower operational levels in an organization. Surrogation is conceptually related to Plato's Allegory of the Cave in that people are failing to distinguish the shadow (i.e. the measure) from the form (i.e. the construct ). [ 6 ] Surrogation is also related to Baudrillard's concept of simulacra , in his order-of-simulacra theory. The connection to this concept is discussed in Macintosh, Shearer, Thornton and Welker (2000). [ 7 ] In a fall 2019 article, Tayler and doctoral student Michael Harris discussed how surrogation at Wells Fargo led management to inadvertently replace their "build long-term relationships" strategy with their "cross-selling" metric, resulting in a massive account fraud scandal . 
They also discuss methods for overcoming surrogation, providing examples from Intermountain Healthcare. [ 8 ] Bill Tayler has discussed everyday examples of surrogation and incentive compensation on BYU News Radio. [ 9 ] In his book entitled When More is Not Better: Overcoming America's Obsession with Economic Efficiency , Roger L. Martin explains the pervasiveness of surrogation through examples in business, public policy, and other areas of every-day life. He demonstrates the prevalent nature of surrogation in our thinking through examples like the modern stock market, where "today’s stock price is considered the true and complete manifestation of the value of a company". Martin suggests that "business executives need to turn their backs on the dominant vector of reductionism, recognize that slack is not the enemy, guard against surrogation by using multiple measures, and appreciate that monopolization is not a sustainable goal". Martin warns that while surrogating in the business domain is a natural tendency, it is a danger that facilitates "gaming" and "makes executives unreflective about how their business really works". To guard against surrogation, Martin suggests using multiple measurements and, in particular, contradictory proxies , helping managers to "think integratively" and mitigate the risk of gaming on proxy measurements.
https://en.wikipedia.org/wiki/Surrogation
Surround view , also called around view or bird's-eye view , is a type of parking assistance system that uses multiple cameras to help drivers monitor their surroundings. It was first introduced in 2007 as the "Around View Monitor" parking assistance option for the Nissan Elgrand and Infiniti EX . [ 1 ] Early vehicle parking assistant products used ultrasonic parking sensors and/or a single rear-view camera to view and obtain distances to objects surrounding the vehicle, providing drivers with an audible alarm or rear-view video through a fisheye lens . There are some drawbacks to these early products: the alarm only provides a proximity warning but not the position of the object(s) relative to the vehicle, and the rear-view camera has a limited field of view. Multiple-camera systems overcome these problems and have seen increasing availability. In most omniview systems, there are four wide-angle cameras: one in the front of the vehicle, one in the back of the vehicle, and one each in the side-mounted rear view mirrors. The four cameras have overlapping fields of view that collectively cover the whole area around the vehicle and serve as an omnidirectional (360-degree) camera . Video from the cameras is sent to the processor, which synthesizes a bird's-eye view from above the vehicle by stitching the video feeds together , correcting distortion, and transforming the perspective. In some cases, ultrasonic sensors are used in combination with the omniview system to provide distance information and highlight the relevant view that may be affected by potential obstacles. [ 2 ] Because the bird's-eye view is a simulated perspective using camera inputs much closer to the ground, objects at ground level will appear relatively undistorted while those above the ground will appear to "lean away" from the vehicle. In addition, if the same object is captured by the overlapping fields of two cameras, it can appear to lean away in two different directions. [ 3 ] : 66 The first vehicle equipped with Nissan 's "Around View Monitor" was the Japanese-market Elgrand, introduced in November 2007. In America, the system was introduced one month later, as an option for the EX35 from Nissan's luxury marque Infiniti . [ 4 ] At about the same time, Mitsubishi Motors and Honda implemented similar functionality as the "Multi-around monitor system" for the Delica [ 5 ] and "Multi-View Camera System" for the Odyssey , respectively. [ 6 ] Third-party automotive component suppliers such as Freescale Semiconductor and Continental AG have developed and marketed modular omniview systems, the latter through the acquisition of Application Solutions Ltd. (ASL Vision). [ 7 ] Nissan have since added moving object detection using the cameras, billing the system as "Intelligent Around View Monitor" (I-AVM). [ 8 ] In 2016, stuntman Paul Swift used the I-AVM system to match the world record for the tightest J-turn in a specially-prepared Nissan Juke , using a space just 18 cm (7.1 in) wider than the vehicle's length to turn it around with the windows completely blacked out. [ 9 ] An omniview system that uses four cameras and displays a three-dimensional rendering of the vehicle and its surroundings has been proposed as a logical next step to increase the driver's awareness. [ 10 ]
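The stitching pipeline described above (undistort each wide-angle camera, warp onto the ground plane, then blend the overlapping views) can be sketched with standard image-processing tools. The following Python example uses OpenCV; the camera intrinsics, distortion coefficients, and ground-plane point correspondences are invented placeholders rather than values from any production system, so it should be read as a minimal illustration of the geometry, not a reference implementation.

```python
# Minimal sketch of the bird's-eye-view pipeline described above, using OpenCV.
# All calibration values below are invented placeholders; a real system obtains
# them from per-camera calibration and from known markings on the ground plane.
import cv2
import numpy as np

def to_birds_eye(frame, K, dist, src_pts, dst_pts, out_size):
    """Undistort one wide-angle camera frame and warp it onto the ground plane."""
    undistorted = cv2.undistort(frame, K, dist)            # remove lens distortion
    H = cv2.getPerspectiveTransform(src_pts, dst_pts)      # image -> top-down map
    return cv2.warpPerspective(undistorted, H, out_size)   # simulated overhead view

# Hypothetical intrinsics for one camera (focal lengths and principal point, in pixels).
K = np.array([[400.0, 0.0, 640.0],
              [0.0, 400.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.30, 0.08, 0.0, 0.0])                    # simple radial model

# Four ground-plane points seen in the camera image and their positions in the overhead map.
src = np.float32([[300, 700], [980, 700], [1180, 400], [100, 400]])
dst = np.float32([[200, 600], [440, 600], [440, 200], [200, 200]])

frame = np.zeros((720, 1280, 3), dtype=np.uint8)            # stand-in for a camera frame
top_down = to_birds_eye(frame, K, dist, src, dst, (640, 640))

# A full system repeats this for all four cameras and blends (stitches) the four
# warped images, typically with per-pixel masks in the overlapping regions.
```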
https://en.wikipedia.org/wiki/Surround-view_system
Surround optical-fiber immunoassay ( SOFIA ) is an ultrasensitive, in vitro diagnostic platform incorporating a surround optical-fiber assembly that captures fluorescence emissions from an entire sample. The technology's defining characteristics are its extremely low limit of detection , high sensitivity , and wide dynamic range . SOFIA's sensitivity is measured at the attogram level (10 −18 g), making it about one billion times more sensitive than conventional diagnostic techniques. Based on its enhanced dynamic range, SOFIA is able to discriminate levels of analyte in a sample over 10 orders of magnitude , facilitating accurate titering . [ citation needed ] As a diagnostic platform, SOFIA has a broad range of applications. Several studies have already demonstrated SOFIA's unprecedented ability to detect naturally occurring prions in the blood and urine of disease carriers. [ 1 ] [ 2 ] [ 3 ] This is expected to lead to the first reliable ante mortem screening test for vCJD , BSE , scrapie , CWD , and other transmissible spongiform encephalopathies . [ 4 ] Given the technology's extreme sensitivity, additional unique applications are anticipated, including in vitro tests for other neurodegenerative diseases, such as Alzheimer's and Parkinson's disease . [ 3 ] SOFIA was developed as a result of a collaborative research project between Los Alamos National Laboratory and State University of New York , and was supported by the Department of Defense's National Prion Research Program . The conventional method of performing laser-induced fluorescence, as well as other types of spectroscopic measurements, such as infrared , ultraviolet-visible spectroscopy , phosphorescence , etc., is to use a small transparent laboratory vessel, a cuvette , to contain the sample to be analyzed. [ citation needed ] To perform a measurement, the cuvette is filled with the liquid to be investigated and then illuminated with a laser focused through one of the cuvette's faces. A lens is placed in line with one of the faces of the cuvette located at 90° from the input window to collect the laser-induced fluorescent light. Only a small volume of the cuvette is actually illuminated by the laser and produces a detectable spectroscopic emission. The output signal is significantly reduced because the lens picks up only about 10% of the spectroscopic emission due to solid angle considerations. This technique has been used for at least 75 years, since before the laser existed, when conventional light sources were used to excite the fluorescence. [ 5 ] SOFIA solves the problem of low collection efficiency, as it collects nearly all of the fluorescent light produced from the sample being analyzed, increasing the amount of fluorescence signal by around a factor of 10 over conventional apparatus. SOFIA is an apparatus and method for improved optical geometry for enhancement of spectroscopic detection of analytes in a sample. The invention has already demonstrated its proof-of-concept functionality as an apparatus and method for ultrasensitive detection of prions and other low-level analytes. SOFIA combines the specificity inherent in monoclonal antibodies for antigen capture with the sensitivity of surround optical detection technology. To detect extremely low signal levels, a low-noise photovoltaic diode is used as the detector for the system. SOFIA uses a laser to illuminate a microcapillary tube holding the sample. Then, the light collected from the sample by the optical fibers is directed to transfer optics. 
Next, the light is optically filtered for detection, which is performed as a current measurement amplified against noise by a digital signal processing lock-in amplifier. The results are displayed on a computer running software designed for data acquisition. [ citation needed ] The advantages of such a detection array are numerous. Primarily, it permits very small samples at low concentration to be optimally interrogated using the laser-induced fluorescence technique. This fiber-based detection system is adaptable to existing short-pulsed detection hardware that was originally developed for sequencing single DNA molecules. The geometry is also amenable to deployment for short-pulse laser, single-molecule detection schemes. The multiport geometry of the system allows efficient electronic processing of the signals from each arm of the device. Finally, and perhaps most importantly, fiberoptic cables are highly efficient in optical transmission, with an attenuation of less than 10 dB /km. Thus, once deployed for use in a facility, the fluorescence information can be fiberoptically transmitted to a remote location, where data processing and analysis can be performed. SOFIA comprises a multiwell plate sample container, an automated means for successively transporting samples from the multiwell plate sample container to a transparent capillary contained within a sample holder, an excitation source in optical communication with the sample, wherein radiation from the excitation source is directed along the length of the capillary, and wherein the radiation induces a signal which is emitted from the sample, and at least one linear array. [ citation needed ] After amplifying and then concentrating the target analyte, the samples are labeled with a fluorescent dye using an antibody for specificity and finally loaded into a microcapillary tube. This tube is placed in a specially constructed apparatus so it is totally surrounded by optical fibers to capture all light emitted once the dye is excited using a laser. [ 6 ] This equipment is a spectroscopic (light gathering) apparatus and corresponding method for rapidly detecting and analyzing analytes in a sample. The sample is irradiated by an excitation source in optical communication with the sample. The excitation source may include, but is not limited to, a laser, a flash lamp, an arc lamp, a light-emitting diode, or the like. In the current version of the SOFIA system, four linear arrays extend from a sample holder, which houses an elongated, transparent sample container open at both ends, to an end port. The distal end of the end port is inserted into an end port assembly. The linear arrays comprise a plurality of optical fibers having a first end and a second end, the plurality of optical fibers optionally surrounded by a protective and/or insulating sheath. The optical fibers are linearly arranged, meaning that they are substantially coplanar with respect to one another so as to form an elongated row of fibers. The analyte of interest may be biological or chemical in nature, and by way of example only, may include chemical moieties (toxins, metabolites, drugs and drug residues), peptides , proteins, cellular components, viruses, and combinations thereof. The analyte of interest may be in either a fluid or a supporting medium, such as a gel. SOFIA has demonstrated its potential as a device with a wide range of applications. 
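Before turning to applications, the collection-efficiency argument above can be made concrete with a short back-of-the-envelope calculation comparing a single collection lens with fibers surrounding the capillary. The lens half-angle and the fraction of the sphere covered by the fibers are illustrative assumptions chosen only to match the rough figures quoted in the text; they are not values taken from the SOFIA literature.

```python
# Illustrative solid-angle comparison for fluorescence collection efficiency.
# The half-angle and coverage figures below are assumed for illustration only.
import math

def lens_collection_fraction(half_angle_deg: float) -> float:
    """Fraction of an isotropic emission captured by a lens subtending the
    given half-angle: Omega / (4*pi) = (1 - cos(theta)) / 2."""
    theta = math.radians(half_angle_deg)
    return (1.0 - math.cos(theta)) / 2.0

# A typical collection lens placed at 90 degrees to the excitation beam.
lens_fraction = lens_collection_fraction(37.0)       # roughly 10% of the emitted light

# Fibers packed around the capillary intercept most of the remaining sphere;
# assume they cover about 90% of the full solid angle in this sketch.
surround_fraction = 0.90

print(f"single lens:      {lens_fraction:.2f} of emitted photons")
print(f"surround fibers:  {surround_fraction:.2f} of emitted photons")
print(f"improvement:      about {surround_fraction / lens_fraction:.0f}x")
# Consistent with the roughly tenfold signal increase quoted above.
```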
SOFIA's anticipated applications include clinical uses, such as detecting diseases, discovering predispositions to pathologies, establishing a diagnosis and tracking the effectiveness of prescribed treatments, and nonclinical uses, such as preventing the entry of toxins and other pathogenic agents into products intended for human consumption. SOFIA has been used to rapidly detect the abnormal form of the prion protein ( PrP Sc ) in samples of bodily fluids, such as blood or urine. PrP Sc is the marker protein used in diagnostics for transmissible spongiform encephalopathies (TSEs), examples of which include bovine spongiform encephalopathy in cattle (i.e. “mad cow” disease), scrapie in sheep, and Creutzfeldt–Jakob disease in humans. Currently, no rapid means exists for the ante mortem detection of PrP Sc in the dilute quantities in which it usually appears in bodily fluids. SOFIA has the advantages of requiring little sample preparation, and allowing for electronic diagnostic equipment to be placed outside the containment area. TSEs, or prion diseases, are infectious neurodegenerative diseases of mammals that include bovine spongiform encephalopathy, chronic wasting disease of deer and elk, scrapie in sheep, and Creutzfeldt–Jakob disease (CJD) in humans. TSEs may be passed from host to host by ingestion of infected tissues or blood transfusions. Clinical symptoms of TSEs include loss of movement and coordination and, in humans, dementia. They have incubation periods of months to years, but after the appearance of clinical signs, they progress rapidly, are untreatable, and are invariably fatal. Attempts at TSE risk-reduction have led to significant changes in the production and trade of agricultural goods, medicines, cosmetics, blood and tissue donations, and biotechnology products. Post mortem neuropathological examination of brain tissue from an animal or human has remained the ‘gold standard’ of TSE diagnosis and is very specific, but not as sensitive as other techniques. [ 7 ] To improve food safety, it would be beneficial to screen all animals for prion diseases using ante mortem , preclinical testing, i.e., testing prior to presentation of symptoms. However, PrP Sc levels are very low in presymptomatic hosts. In addition, PrP Sc is generally unevenly distributed in body tissues, with the highest concentrations consistently found in nervous system tissues and very low concentrations in easily accessible body fluids such as blood or urine. Therefore, any such test would be required to detect extremely small amounts of PrP and would have to differentiate between PrP C and PrP Sc . Current PrP Sc detection methods are time-consuming and employ post mortem analysis after suspicious animals manifest one or more symptoms of the disease. Current diagnostic methods are based mainly on detection of physiochemical differences between PrP C and PrP Sc which, to date, are the only reliable markers for TSEs. For example, the most widely used diagnostic tests exploit the relative protease resistance of PrP Sc in brain samples to discriminate between PrP C and PrP Sc , in combination with antibody-based detection of the PK -resistant portion of PrP Sc . It has not yet been possible to detect prion diseases using conventional methods, such as polymerase chain reaction, serology, or cell culture assays. An agent-specific nucleic acid has not yet been identified, and the infected host does not elicit an antibody response. The conformationally altered form of PrP C is PrP Sc . 
Some groups believe PrP Sc is the infectious agent (prion agent) in TSEs, while other groups do not. PrP Sc could be a neuropathological product of the disease process, a component of the infectious agent, the infectious agent itself, or something else altogether. Regardless of its actual function in the disease state, PrP Sc is clearly and specifically associated with the disease process, and detection of it indicates infection with the agent causing prion diseases. SOFIA provides, among other things, methods to diagnose prion diseases by detection of PrP Sc in biological samples. Samples can be brain tissue, nerve tissue, blood, urine, lymphatic fluid , cerebrospinal fluid , or a combination thereof. Absence of PrP Sc indicates no infection with the infectious agent up to the detection limits of the methods. Detection of PrP Sc indicates infection with the infectious agent associated with prion disease. Infection with the prion agent may be detected in both presymptomatic and symptomatic stages of disease progression. These and other improvements have been achieved with SOFIA. [ 3 ] SOFIA's sensitivity and specificity eliminate the need for PK digestion to distinguish between the normal and abnormal PrP isoforms. Detection of PrP Sc in blood plasma has been further addressed by limited protein misfolding cyclic amplification (PMCA) followed by SOFIA. Because of the sensitivity of SOFIA, PMCA cycles can be reduced, thus decreasing the chances of spontaneous PrP Sc formation and the detection of false-positive samples. SOFIA meets the need for increased sensitivity in the detection of prion diseases in both presymptomatic and symptomatic TSE-infected animals, including humans, by providing methods of analysis using highly sensitive instrumentation, which requires less sample preparation than previously described methods, in combination with recently developed Mabs against PrP. The current version of SOFIA provides sensitivity levels sufficient to detect PrP Sc in brain tissue. When coupled with limited sPMCA, the method provides sensitivity levels sufficient to detect PrP Sc in blood plasma, tissue and other fluids collected ante mortem [ citation needed ] . The methods combine the specificity of the Mabs for antigen capture and concentration with the sensitivity of a surround optical fiber detection technology. In contrast to previously described methods for detection of PrP Sc in brain homogenates , these techniques, when used to study brain homogenates, do not use seeded polymerization, amplification, or enzymatic digestion (for example, by proteinase K, or “PK”). This is important in that previous reports have indicated the existence of PrP Sc isoforms with varied PK sensitivity, which decreases the reliability of PK-based assays. The sensitivity of this assay makes it suitable as a platform for rapid prion detection in biological fluids. In addition to prion diseases, the method may provide a means for rapid, high-throughput testing for a wide spectrum of infections and disorders. While about 40 cycles of sPMCA combined with immunoprecipitation were found to be inadequate for PrP Sc detection in plasma by ELISA or western blotting, PrP Sc was nevertheless found to be readily measured by SOFIA. The limited number of cycles necessary for the present assay platform virtually eliminates the possibility of obtaining PMCA-related false-positive results such as those previously reported (Thorne and Terry, 2008). 
[ 8 ] With rapid developments in the field of biomarker research, in vitro diagnosis of many infections and disorders that could not previously be diagnosed this way is becoming increasingly possible. SOFIA is predicted to be of broader use in diagnostic assay development for infections and disorders beyond the scope of prion diseases. [ 3 ] A major potential application is for other protein misfolding diseases, in particular Alzheimer's. [ 7 ] A 2011 study reported the detection of prions in urine from naturally and orally infected sheep with clinical scrapie, and from orally infected white-tailed deer at preclinical and clinical stages of chronic wasting disease (CWD). This is the first report of PrP Sc detection in the urine of naturally infected or preclinical prion-diseased ovines or cervids. [ 1 ] A 2010 study demonstrated that a moderate amount of protein misfolding cyclic amplification (PMCA), coupled to a novel SOFIA detection scheme, can be used to detect PrP Sc in protease-untreated plasma from preclinical and clinical scrapie sheep, and from white-tailed deer with chronic wasting disease, following natural and experimental infection. The disease-associated form of the prion protein (PrP Sc ), resulting from a conformational change of the normal (cellular) form of prion protein (PrP C ), is considered central to neuropathogenesis and serves as the only reliable molecular marker for prion disease diagnosis. While the highest levels of PrP Sc are present in the CNS, the development of a reasonable diagnostic assay requires the use of body fluids, which characteristically contain extremely low levels of PrP Sc . PrP Sc has been detected in the blood of sick animals by means of PMCA technology. However, repeated cycling over several days, which is necessary for PMCA of blood material, has been reported to result in decreased specificity (false positives). To generate an assay for PrP Sc in blood that is both highly sensitive and specific, the researchers used limited serial PMCA (sPMCA) with SOFIA. They did not find any enhancement of sPMCA with the addition of polyadenylic acid, nor was it necessary to match the genotypes of the PrP C and PrP Sc sources for efficient amplification. [ 2 ] A 2009 study found that SOFIA, in its current format, is capable of detecting less than 10 attograms (ag) of hamster, sheep and deer recombinant PrP. About 10 ag of PrP Sc from 263K-infected hamster brains can be detected, with similar lower limits of PrP Sc detection from the brains of scrapie-infected sheep and deer infected with chronic wasting disease. These detection limits allow protease-treated and untreated material to be diluted beyond the point where PrP C , nonspecific proteins or other extraneous material may interfere with PrP Sc signal detection and/or specificity. This not only eliminates the issue of specificity of PrP Sc detection, but also increases sensitivity, since the possibility of partial PrP Sc proteolysis is no longer a concern. SOFIA will likely lead to early ante mortem detection of transmissible encephalopathies and is also amenable for use with additional target amplification protocols. SOFIA represents a sensitive means for detecting specific proteins involved in disease pathogenesis and/or diagnosis that extends beyond the scope of the transmissible spongiform encephalopathies. [ 3 ]
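To put the attogram-scale detection limit quoted above into perspective, the short calculation below converts 10 ag of PrP into an approximate number of molecules. The molar mass of roughly 23 kDa for unglycosylated recombinant PrP is an assumed, order-of-magnitude figure, not a value taken from the cited studies.

```python
# Order-of-magnitude check: how many PrP molecules are in 10 attograms?
# The ~23 kDa molar mass is an assumption for unglycosylated recombinant PrP.
AVOGADRO = 6.022e23          # molecules per mole
MOLAR_MASS_PRP = 23_000.0    # g/mol, assumed

mass_g = 10e-18              # 10 attograms
molecules = mass_g / MOLAR_MASS_PRP * AVOGADRO
print(f"{molecules:.0f} molecules")   # roughly a few hundred molecules
```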
https://en.wikipedia.org/wiki/Surround_optical-fiber_immunoassay
Surround suppression is a phenomenon in which the relative firing rate of a neuron may, under certain conditions, decrease when a particular stimulus is enlarged. It has been observed in electrophysiology studies of the brain and has been noted in many sensory neurons , most notably in the early visual system . Surround suppression is defined as a reduction in the activity of a neuron in response to a stimulus outside its classical receptive field . The necessary functional connections with other neurons influenced by stimulation outside a particular area and by dynamic processes in general, and the absence of a theoretical description of a system state to be treated as a baseline, deprive the term "classical receptive field" of functional meaning. [ 1 ] The descriptor "surround suppression" suffers from a similar problem, as the activities of neurons in the "surround" of the "classical receptive field" are similarly determined by connectivities and processes involving neurons beyond it. This nonlinear effect is one of many that reveal the complexity of biological sensory systems, and the neural connections and properties that may cause this effect (or its opposite) are still being studied. The characteristics, mechanisms, and perceptual consequences of this phenomenon are of interest to many communities, including neurobiology , computational neuroscience , psychology , and computer vision . The classical model of early vision presumes that each neuron responds independently to a specific stimulus in a localized area of the visual field . (According to Carandini et al. (2005) , this computational model, which may be fit to various datasets, "degrade[s] quickly if we change almost any aspect of the test stimulus.") The stimulus and corresponding location in the visual field are collectively called the classical receptive field . However, not all effects can be explained via ad hoc independent filters. Surround suppression is one of many effects in which neurons do not behave according to the classical model. These effects are collectively called non-classical receptive field effects, and have recently become a substantial research area in vision and other sensory systems . During surround suppression, neurons are inhibited by a stimulus outside their classical receptive field, in an area loosely termed the 'surround.' Electrophysiology studies are used to characterize the surround suppression effect. Vision researchers who record neural activity in the primary visual cortex (V1) have seen that spike rates, or neural responses, can be suppressed in as many as 90% of neurons [ 2 ] [ 3 ] by stimuli in their surround. In these cells, the spike rates may be reduced by as much as 70%. [ 4 ] The suppressive effect is often dependent on the contrast , orientation , and direction of motion of the stimulus in the surround. These properties are highly dependent on the brain area and the individual neuron being studied. In MT, for instance, cells can be sensitive to the direction and velocity of stimuli up to 50 to 100 times the area of their classical receptive fields. [ 5 ] The statistical properties of the stimuli used to probe these neurons affect the properties of the surround as well. Because these areas are so highly interconnected, stimulation of one cell can affect the response properties of other cells, and therefore researchers have become increasingly aware of the choice of stimuli they use in these experiments. 
In addition to studies with simple stimuli (dots, bars, sinusoidal gratings), [ 4 ] [ 6 ] [ 7 ] more recent studies have used more realistic stimuli ( natural scenes ) to study these effects. [ 8 ] Stimuli that better represent natural scenes tend to induce higher levels of suppression, indicating this effect is tied closely to the statistical properties of natural scenes and to local context. Surround suppression is also modulated by attention. By training monkeys to attend to certain areas of their visual field, researchers have studied how directed attention can enhance the suppressive effects of stimuli surrounding the area of attention. [ 9 ] Similar perceptual studies have been performed on human subjects as well. Surround suppression was first noticed in the visual pathway by Hubel and Wiesel [ 6 ] while mapping receptive fields. The earliest parts of the visual pathway, namely the retina , Lateral Geniculate Nucleus (LGN) , and primary visual cortex (V1) , are among the most well-studied. Surround suppression has been studied in later areas as well, including V2 , V3 , V4 , [ 3 ] and MT . [ 10 ] Surround suppression has also been seen in sensory systems other than vision. One example in somatosensation is surround suppression in the barrel cortex of mice, in which bending one whisker can suppress the response of a neuron responding to a whisker nearby. [ 11 ] It has even been seen in the frequency response properties of electroreception in electric fish . [ 12 ] The biological mechanisms behind surround suppression are not known. [ 11 ] Several theories have been proposed for the biological basis of this effect. Based on the diversity of the stimulus characteristics that cause this effect and the variety of responses that are generated, it seems that many mechanisms may be at play. Lateral connections are connections between neurons in the same layer. There are many of these connections in all areas of the visual system, which means that a neuron representing one piece of the visual field can influence a neuron representing another piece. Even within lateral connections, there are potentially different mechanisms at play. Monocular mechanisms, requiring stimulation in only one eye, may drive this effect with stimuli of high spatial frequency. When the stimulus frequency is lowered, however, binocular mechanisms come into play, where neurons from different eyes may suppress each other. [ 13 ] Models based on this idea have been shown to reproduce surround-suppressive effects. It has been posited that lateral connections are too slow and cover too little of the visual field to fully explain surround suppression. [ 14 ] Feedback from higher areas may explain the discrepancies seen in mechanisms for surround suppression based purely on lateral connections. There is evidence that inactivation of higher-order areas results in reduced strength of surround suppression. [ 14 ] At least one model of excitatory connections from higher levels has been formed in the effort to more fully explain surround suppression. [ 15 ] However, recurrent feedback is difficult to determine using electrophysiology, and the potential mechanisms at play are not as well studied as feedforward or lateral connections. 
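Several of the proposed mechanisms above are commonly summarized descriptively as divisive normalization, in which the drive from the receptive-field centre is divided by activity pooled over a larger suppressive surround. The Python toy model below uses invented parameters (pool sizes, gain, semi-saturation constant) purely to illustrate the characteristic size-tuning curve; it is a sketch of the general idea rather than any specific published model.

```python
# Toy divisive-normalization model of surround suppression.
# All parameters (receptive-field sizes, gain, semi-saturation constant) are
# invented for illustration; this is a descriptive sketch, not a fitted model.
import math

def response(diameter, center_sigma=1.0, surround_sigma=3.0,
             gain=50.0, suppression=2.0, c50=1.0):
    """Firing rate (arbitrary units) for a grating patch of the given diameter."""
    # Drive grows as the stimulus fills each Gaussian pool.
    center_drive = 1.0 - math.exp(-(diameter / (2 * center_sigma)) ** 2)
    surround_drive = 1.0 - math.exp(-(diameter / (2 * surround_sigma)) ** 2)
    return gain * center_drive / (c50 + suppression * surround_drive)

for d in [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]:
    print(f"diameter {d:5.1f}  ->  rate {response(d):6.2f}")
# The rate rises while the stimulus fills the excitatory centre, then falls as
# larger stimuli recruit the suppressive surround - the size-tuning signature
# of surround suppression.
```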
Surround suppression behavior (and its opposite) gives the sensory system several advantages from both a perceptual and information theory standpoint. Surround suppression likely participates in context-dependent perceptual tasks, which require the use of inputs over wide regions of visual space, meaning that independent responses to small parts of the visual field (a classical linear model of V1) would not be able to produce these effects. There is evidence that surround suppression participates in these tasks by either adjusting the representation of the classical receptive field or representing entirely different features that include both the classical receptive field and the surround. Direct comparisons between physiology and psychophysical experiments have been made for several perceptual effects. These include: (1) the reduced apparent contrast of a grating texture embedded in a surrounding grating, (2) target identification when flanked by other features, (3) saliency of broken contours surrounded by edge segments of different orientations, and (4) orientation discrimination when surrounded by features of different orientations and spatial frequencies. [ 20 ] It has recently been shown that stimulation of the surround may support the efficient coding hypothesis proposed by Horace Barlow in 1961. [ 21 ] This hypothesis suggests that the goal of the sensory system is to create an efficient representation of the stimulus. Recently, this has intersected with the idea of a 'sparse' code, one that represents information using the fewest active units possible. It has been shown that surround suppression increases the efficiency of transmitting visual information, and may form a sparse code . [ 22 ] If many cells respond to parts of the same stimulus, for instance, a lot of redundant information is encoded. [ 23 ] The cell needs metabolic energy for each action potential it produces. Therefore, surround suppression likely helps to produce a neural code that is more metabolically efficient. There are additional theoretical advantages, including the removal of statistical redundancy inherent in natural scene statistics , as well as decorrelation of neural responses, [ 8 ] which means less information to process later in the pathway. The goal of computer vision is to perform automated tasks similar to those of the human visual system, quickly and accurately interpreting the world and making decisions based on visual information. Because surround suppression seems to play a role in efficient and accurate perception, there have been a few computer vision algorithms inspired by this phenomenon in human vision: So far, the scientific community has focused on the response properties of the neurons, but exploration of the relation to inference and learning has begun as well. [ 26 ]
https://en.wikipedia.org/wiki/Surround_suppression
Surroundings , or environs , is the area around a given physical or geographical point or place . The exact definition depends on the field. The term is also used in geography (where it is more precisely known as vicinity, or vicinage), in mathematics, and in philosophy, with either a literal or a metaphorically extended meaning. In thermodynamics , the term (and its synonym, environment ) is used in a more restricted sense, meaning everything outside the thermodynamic system . Often, the simplifying assumptions are that energy and matter may move freely within the surroundings, and that the surroundings have a uniform composition.
https://en.wikipedia.org/wiki/Surroundings
Survey camp is a traditional component of civil engineering training, where students do fieldwork to learn about surveying and related practices, such as developing maps. A version of survey camp remains part of the curriculum at schools including Texas A&M University , [ 1 ] University of Toronto , [ 2 ] Aryans College of Engineering ( Rajpura ), [ 3 ] and General Sir John Kotelawala Defence University . [ 4 ] Cornell University started requiring summer survey camp for civil engineering students in the 1870s and conducted it until the early 1960s. [ 5 ] The University of Wisconsin-Madison had a survey camp requirement from 1909 to 1972, because of the importance of surveying to civil engineering in the US military. [ 6 ] [ 7 ] Vanderbilt University had a summer school for surveying from 1927 to 1960. [ 8 ] Tulane University had a summer survey camp from 1918 through the 1970s. [ 9 ] The University of Toronto camp was established in 1919-1920 and has been updated to include water quality sampling, among other topics. [ 10 ] [ 11 ] In survey camp, students obtain hands-on experience in the use of surveying instruments. It can include project-based learning to support development of communication and problem-solving skills. [ 12 ] Students may also learn applications such as AutoCAD and Carlson Survey. [ 1 ] The students use these programs to turn data collected in the field into topographic maps of the surveyed area. The basic aim of the survey camp is to familiarize students with the various kinds of work carried out in professional surveying practice, including determining the topography of a particular area through survey work, map study and reconnaissance. The instruments used may include: The survey practicals generally performed in survey camp are:
https://en.wikipedia.org/wiki/Survey_camp
The Survey of Health, Ageing and Retirement in Europe ( SHARE ) is a multidisciplinary and cross-national panel database of micro data on health, socio-economic status and social and family networks. In seven survey waves to date, SHARE has conducted approximately 380,000 interviews with about 140,000 individuals aged 50 and over. The survey covers 28 European countries and Israel. [ 1 ] SHARE was founded as a response to the European Commission 's call to "examine the possibility of establishing, in co-operation with Member States, a European Longitudinal Ageing Survey". [ 2 ] It has since become a major pillar of the European Research Area , selected as one of the projects to be implemented in the European Strategy Forum on Research Infrastructures (ESFRI) in 2006 and was given a new legal status as the first ever European Research Infrastructure Consortium (SHARE-ERIC) in March 2011. [ 3 ] [ 4 ] Founded in 2002, SHARE is coordinated centrally at the SHARE Berlin Institute [ 5 ] and led by Managing Director Axel Börsch-Supan . It is a collaborative effort of more than 150 researchers worldwide who are organized in multidisciplinary national teams and cross-national working groups. A Scientific Monitoring Board composed of eminent international researchers and a network of advisors help to maintain and improve the project's high scientific standards. SHARE is harmonized with its role models and sister studies the U.S. Health and Retirement Study (HRS) and the English Longitudinal Study of Ageing (ELSA), and has the advantage of encompassing cross-national variation in public policy, culture and history across a variety of European countries. Its scientific power is based on its panel design that grasps the dynamic character of the ageing process. SHARE's multi-disciplinary approach delivers a full picture of the ageing process. Procedural guidelines and programs ensure an ex-ante harmonized cross-national design. The collected data include health variables (e.g. self-reported health, health conditions, physical and cognitive functioning, health behaviour, use of health care facilities), biomarkers (e.g. grip strength, body-mass index, peak flow), psychological variables (e.g. psychological health, well-being, life satisfaction ), economic variables (current work activity, job characteristics, opportunities to work past retirement age, sources and composition of current income, wealth and consumption, housing, education), and social support variables (e.g. assistance within families, transfers of incomes and assets, social networks, volunteer activities). SHARE data collection is based on computer-assisted personal interviewing (CAPI) complemented by measurements as well as paper-and-pencil questionnaires. The data are available to the entire research community free of charge. SHARE receives funding from the European Commission , the American National Institute on Aging and national sources, especially the German Federal Ministry of Education and Research and the Deutsche Forschungsgemeinschaft . The data collected by SHARE provide a detailed insight into the financial situation of households of elderly Europeans. Among other things, the study shows that not in all European countries incomes are sufficient – therefore, poverty in old age is a serious problem in some countries. Income is considered the least sufficient in the Eastern European countries Poland and Czech Republic, in the Southern European countries Greece, Italy, and Spain as well as Israel. 
In these countries, more than 50 percent of households report difficulties making ends meet with their income. In contrast, income is considered satisfactory especially in Sweden, Denmark, the Netherlands, and Switzerland; there, less than 20 percent of households have problems getting by with their income. Likewise, patterns of employment and retirement differ significantly between the European countries. The proportion of people with high workloads in the low wage sector is particularly high in Poland and Greece. Accordingly, the proportion of early retirees is above average. In contrast, work quality in regard to the balance between performance and wage is high in the Nordic countries, the Netherlands, and Switzerland. These countries also show the lowest percentage of older employees opting for early retirement. Availability of kin support largely depends in general on geographic accessibility and social contact. The SHARE data confirm, on the one hand, the existence of longstanding regional patterns of ‘weak’ and ‘strong’ family ties, while, on the other hand, they reveal many similarities across Europe. In all countries – and across all age groups – 85 percent of all parents have at least one child living at a distance of at most 25 km. Moreover, the share of parents with less than weekly contacts to a child is equally low (7%) in Sweden and in Spain. [ 6 ] These results provide no evidence to support the notion of a ‘decline’ of parent-child relations in ageing Europe at the beginning of the 21st century. SHARE data document a strong relationship between education and health among the older population. This holds not only on the individual level (better educated individuals are healthier than less educated) but also across European nations. Comparing average education and average health levels in SHARE countries reveals that in particular the East European and Mediterranean countries are characterized by low levels of education and health simultaneously. In contrast, populations in Northern European countries and Switzerland are both healthier and better educated than the average. [ 6 ] As of July 2019, about 10,000 researchers worldwide use SHARE data for their research. [ 7 ] Publications based on SHARE data are documented and published online. [ 8 ] By now, nine waves of data collection have been conducted. Further waves are being planned to take place on a biennial basis. Eleven European countries have contributed data to the 2004 SHARE baseline study. They constitute a balanced representation of the various regions in Europe, ranging from Scandinavia (Denmark and Sweden) through Central Europe (Austria, France, Germany, Switzerland, Belgium, and the Netherlands) to the Mediterranean (Spain, Italy and Greece). Israel joined the SHARE framework in late 2004, being the first country in the Middle East to initiate a systematic study of its aging population. The SHARE main questionnaire consisted of 20 modules on health, socio-economics and social networks. [ 9 ] All data were collected by face-to-face, computer-aided personal interviews (CAPI), supplemented by a self-completion paper and pencil questionnaire. [ 10 ] Two 'new' EU member states - the Czech Republic and Poland - as well as Ireland joined SHARE in 2006 and participated in the second wave of data collection in 2006–07. In addition to the main questionnaire an ‘End of Life’ interview was conducted for family members of deceased respondents. [ 11 ] Israel carried out its second wave in 2009–10. 
[ 12 ] SHARELIFE is the third wave of data collection for SHARE, which focuses on people's life histories. 30,000 men and women across 13 European countries took part in this round of the survey. SHARELIFE links individual micro data over the respondents’ entire life with institutional macro data on the welfare state. It thereby allows assessing the full effect of welfare state interventions on the life of the individual. Changes in institutional settings that influence individual decisions are of specific interest to evaluate policies throughout Europe. The SHARELIFE questionnaire contains all important areas of the respondents’ lives, ranging from partners and children over housing and work history to detailed questions on health and health care. [ 13 ] With this variety SHARELIFE constitutes a large interdisciplinary dataset for research in the fields of sociology, economics, gerontology, and demography. The SHARELIFE life history data can be linked to the first two waves of SHARE assessing the present living conditions of older Europeans. [ 14 ] SHARELIFE was repeated in Wave 7, collecting life-history information from those respondents who had not done a life-history interview in Wave 3. In the fourth wave, which started in autumn 2010, Estonia, Hungary, Luxemburg, Portugal and Slovenia joined the SHARE survey. In the other European countries the national samples were enlarged, and a new social network module was added to the main questionnaire. [ 15 ] In the German study, three additional projects including innovative biomarkers (e.g. dried bloodspots), the linkage with the German pension data as well as nonresponse experiments were implemented. [ 16 ] Data collection for Wave 5 took place in 2013. A total of 15 countries participated in this wave, including, for the first time, Luxemburg. Since March 2015 the data is available for research purposes. Wave 5 included additional questions regarding childhood, material deprivation, social exclusion, and migration, as well as information on computer skills and the use of computers at the workplace. [ 17 ] Wave 6 was conducted in 2015 in 17 countries. One of the most important innovations was the collection of objective health measures by means of “Dried Blood Spot Sampling” (DBSS): In 12 countries, a blood samples were collected in order to determine blood levels which are associated with diseases that primarily occur among older people. These include cardiovascular diseases and diseases that can be triggered by external living conditions and environmental factors, such as diabetes mellitus (type 2). These additional biomarkers are expected to be a useful instrument for comparing the objective health status with the subjective perception of the respondents. Moreover, they should help to explain correlations between health and social status and to demonstrate the course of a disease. Wave 6 furthermore captures longitudinal changes in the social networks. [ 18 ] In 2017, the main data collection of Wave 7 took place in 28 countries - full coverage of the EU was achieved by including 8 new countries in SHARE: Finland, Lithuania, Latvia, Slovakia, Romania, Bulgaria, Malta and Cyprus. All respondents who had already taken part in the third wave of SHARE (SHARELIFE) were interviewed about their current situation in terms of family, friends, health as well as social and financial circumstances. For those who had not taken part in SHARELIFE, the Wave 7 questionnaire contained a SHARELIFE module to collect information on their life histories. 
Wave 7 data was released in April 2019. [ 19 ] Fieldwork for the eighth wave of SHARE began in October 2019. In addition to regular interviews, physical activity measurements using sensors were also conducted in a subsample in 10 countries. [ 20 ] From the beginning of 2020, the COVID-19 pandemic spread across Europe and affected almost all areas of life, including survey research. SHARE, like other surveys, had to suspend regular face-to-face interviews in March 2020 due to strict epidemiological control measures in the 28 participating countries. This was particularly urgent as SHARE targets the population aged 50 and over, including very old respondents and residents of retirement and nursing homes who are at highest risk of possible infection. As a result, SHARE switched to telephone interviewing and developed a special "SHARE Corona" survey, which was successfully conducted in all 28 countries between April and August 2020. [ 21 ] In summer 2021, one year after the first SHARE Corona survey, another telephone survey on the effects of the pandemic was conducted. [ 22 ] The face-to-face surveys for the ninth wave were conducted between October 2021 and September 2022 in all 28 SHARE countries. In five countries (Czech Republic, Denmark, France, Germany, Italy), an additional project was carried out to measure the cognitive abilities of respondents (SHARE HCAP). [ 23 ] The SHARE-Study is not the only study engaging in these fields of research - it has a number of sister studies all over the world dealing with these subjects like ageing , pensions , retirement and population aging in general. Analogue studies following the SHARE model are for instance The Irish Longitudinal Study on Ageing (TILDA), The Longitudinal Aging Study in India (LASI), The Japanese Study of Aging and Retirement (JSTAR), SHARE Israel , The Korean Longitudinal Study of Aging (KLoSA), Chinese Health and Retirement Survey (CHARLS) and Mexican Health and Aging Study (MHAS). SHARE project website: SHARE coordination: SHARE country websites:
https://en.wikipedia.org/wiki/Survey_of_Health,_Ageing_and_Retirement_in_Europe
Survivability is the ability to remain alive or continue to exist. The term has more specific meaning in certain contexts. Following disruptive forces such as flood , fire , disease , war , or climate change , some species of flora , fauna , and local life forms are likely to survive more successfully than others because of consequent changes to their surrounding biophysical conditions. In engineering , survivability is the quantified ability of a system , subsystem, equipment, process, or procedure to continue to function during and after a natural or man-made disturbance; for example a nuclear electromagnetic pulse from the detonation of a nuclear weapon . For a given application, survivability must be qualified by specifying the range of conditions over which the entity will survive, the minimum acceptable level of post-disturbance functionality, and the maximum acceptable downtime . [ 1 ] In the military environment, survivability can be defined as the ability to remain mission capable after a single engagement. Engineers working in survivability are often responsible for improving four main system elements: [ 2 ] The European Survivability Workshop introduced the concept of "Mission Survivability" whilst retaining the three core areas above, either pertaining to the "survivability" of a platform through a complete mission, or the "survivability" of the mission itself (i.e. probability of mission success). Recent studies have also introduced the concept of "Force Survivability" which relates to the ability of a force rather than an individual platform to remain "mission capable". There is no clear prioritisation of the three elements; this will depend on the characteristics and role of the platform. Some platform types, such as submarines and airplanes, minimise their susceptibility and may, to some extent, compromise in the other areas. Main Battle Tanks minimise vulnerability through the use of heavy armour. Present day surface warship designs tend to aim for a balanced combination of all three areas. A popular term is the "survivability onion" , described as 5-8 layers: [ 3 ] [ 4 ] Don't be there. If you are there, don't be seen. If you are seen, don't be targeted/acquired. If you are targeted/acquired, don't be hit. If you are hit, don't be penetrated. If you are penetrated, don't be killed. Survivability denotes the ability of a ship and its on-board systems to remain functional and continue its designated mission in a man-made hostile environment. [ 5 ] Naval vessels are designed to operate in a man-made hostile environment, and survivability is therefore a vital feature required of them. A naval vessel's survivability is a complicated subject affecting the whole life cycle of the vessel, and should be considered from the initial design phase of every warship. [ 6 ] The classical definition of naval survivability includes three main aspects, which are susceptibility, vulnerability, and recoverability, although recoverability is often subsumed within vulnerability. [ 7 ] [ 3 ] Susceptibility consists of all the factors that expose the ship to weapon effects in a combat environment. These factors in general are the operating conditions, the threat, and the features of the ship itself. The operating conditions, such as sea state, weather and atmospheric conditions, vary considerably, and their influence is difficult to address (hence they are often not accounted for in survivability assessment). 
The threat is dependent on the weapons directed against the ship and weapon's performance, such as the range. The features of the ship in this sense include platform signatures (radar, infrared, acoustic, magnetic), the defensive systems on board, such as surface-to-air missiles, EW and decoys, and also the tactics employed by the platform in countering the attack (aspects such as speed, maneuverability, chosen aspect presented to the threat). [ 6 ] Vulnerability refers to the ability of the vessel to withstand the short-term effects of the threat weapon. Vulnerability is an attribute typical to the vessel and therefore heavily affected by the vessel's basic characteristics such as size, subdivision, armouring, and other hardening features, and also the design of the ship's systems, in particular the location of equipment, degrees of redundancy and separation, and the presence within a system of single point failures. Recoverability refers to vessel's ability to restore and maintain its functionality after sustaining damage. Thus, recoverability is dependent on the actions aimed to neutralize the effects of the damage. These actions include firefighting, limiting the extent of flooding, and dewatering. Besides the equipment, the crew also has a vital role in recoverability. [ 8 ] The crews of military combat vehicles face numerous lethal hazards which are both diverse and constantly evolving. Improvised Explosive Devices (IEDs), mines , and enemy fire are examples of such persistent and variable threats. Historically, measures taken to mitigate these hazards were concerned with protecting the vehicle itself, but due to this achieving only limited protection, the focus has now shifted to safeguarding the crew within from an ever-broadening range of threats, including Radio Controlled IEDs (RCIEDs) , blast, fragmentation , heat stress , and dehydration . The expressed goal of "crew survivability" is to ensure vehicle occupants are best protected. It goes beyond simply ensuring crew have the appropriate protective equipment and has expanded to include measuring the overpressure and blunt impact forces experienced by a vehicle from real blast incidents in order to develop medical treatment and improve overall crew survivability. Sustainable crew survivability is dependent on the effective integration of knowledge, training, and equipment. Threat intelligence identifying trends, emerging technologies, and attack tactics used by enemy forces enables crews to implement procedures that will reduce their exposure to unnecessary risks. Such intelligence also allows for more effective pre-deployment training programs where personnel can be taught the most up-to-date developments in IED concealment, for example, or undertake tailored training that will enable them to identify the likely attack strategy of enemy forces. In addition, with expert, current threat intelligence, the most effective equipment can be procured or rapidly developed in support of operations. "The capability of a system to fulfill its mission, in a timely manner, in the presence of threats such as attacks or large-scale natural disasters. Survivability is a subset of resilience ." [ 9 ] [ 10 ] “The capability of a system to fulfill its mission, in a timely manner, in the presence of attacks, failures , or accidents .” [ 11 ]
https://en.wikipedia.org/wiki/Survivability
The Survivable Communications Integration System (SCIS) was a replacement missile early warning communication system. The SCIS program was awarded in the 1980s; the system was designed and delivered late, with final introduction in the late 1990s. According to a GAO report published in 1992, E-Systems (the prime contractor) was significantly over budget and significantly behind schedule: management and development problems with the SCIS program had contributed to a 65 percent increase in program costs (from $142 million to $234 million) and a 3-year delay in completion (from 1992 to 1995). After working on SCIS for 4 years, the prime contractor was unable to deliver a system that could process sensor data fast enough to meet Air Force specifications. To help solve the problem, the Air Force allowed the contractor to replace the computer platform, for the second time at government expense, with a faster, more powerful model. [ 1 ]
https://en.wikipedia.org/wiki/Survivable_Communications_Integration_System
Survival analysis is a branch of statistics for analyzing the expected duration of time until one event occurs, such as death in biological organisms and failure in mechanical systems. [ 1 ] This topic is called reliability theory, reliability analysis or reliability engineering in engineering, duration analysis or duration modelling in economics, and event history analysis in sociology. Survival analysis attempts to answer certain questions, such as: what is the proportion of a population which will survive past a certain time? Of those that survive, at what rate will they die or fail? Can multiple causes of death or failure be taken into account? How do particular circumstances or characteristics increase or decrease the probability of survival? To answer such questions, it is necessary to define "lifetime". In the case of biological survival, death is unambiguous, but for mechanical reliability, failure may not be well-defined, for there may well be mechanical systems in which failure is partial, a matter of degree, or not otherwise localized in time. Even in biological problems, some events (for example, heart attack or other organ failure) may have the same ambiguity. The theory outlined below assumes well-defined events at specific times; other cases may be better treated by models which explicitly account for ambiguous events. More generally, survival analysis involves the modelling of time-to-event data; in this context, death or failure is considered an "event" in the survival analysis literature – traditionally only a single event occurs for each subject, after which the organism or mechanism is dead or broken. Recurring event or repeated event models relax that assumption. The study of recurring events is relevant in systems reliability, and in many areas of social sciences and medical research. Survival analysis is used in several ways: The following terms are commonly used in survival analyses: This example uses the Acute Myelogenous Leukemia survival data set "aml" from the "survival" package in R. The data set is from Miller (1997) [ 2 ] and the question is whether the standard course of chemotherapy should be extended ('maintained') for additional cycles. The aml data set sorted by survival time (in weeks) is shown in the box. The last observation (11), at 161 weeks, is censored. Censoring indicates that the patient did not have an event (no recurrence of aml cancer). Another subject, observation 3, was censored at 13 weeks (indicated by status=0). This subject was in the study for only 13 weeks, and the aml cancer did not recur during those 13 weeks. It is possible that this patient was enrolled near the end of the study, so that they could be observed for only 13 weeks. It is also possible that the patient was enrolled early in the study, but was lost to follow-up or withdrew from the study. The table shows that other subjects were censored at 16, 28, and 45 weeks (observations 17, 6, and 9 with status=0). The remaining subjects all experienced events (recurrence of aml cancer) while in the study. The question of interest is whether recurrence occurs later in maintained patients than in non-maintained patients. The survival function S(t) is the probability that a subject survives longer than time t. S(t) is theoretically a smooth curve, but it is usually estimated using the Kaplan–Meier (KM) curve.
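One way this estimate might be obtained is with the survfit() function of the R "survival" package mentioned above. The short sketch below assumes the aml data frame with columns time, status and x as described, and is illustrative rather than a prescribed analysis.

library(survival)                                        # provides the aml data set and survfit()
km.fit <- survfit(Surv(time, status) ~ 1, data = aml)    # overall KM estimate of S(t)
summary(km.fit)                                          # estimated survival at each event time
plot(km.fit, xlab = "Weeks", ylab = "Estimated S(t)")    # KM curve with confidence band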
The graph shows the KM plot for the aml data and can be interpreted as follows: A life table summarizes survival data in terms of the number of events and the proportion surviving at each event time point. The life table for the aml data, created using the R software, is shown. The columns in the life table have the following interpretation: The log-rank test compares the survival times of two or more groups. This example uses a log-rank test for a difference in survival in the maintained versus non-maintained treatment groups in the aml data. The graph shows KM plots for the aml data broken out by treatment group, which is indicated by the variable "x" in the data. The null hypothesis for a log-rank test is that the groups have the same survival. The expected number of subjects surviving at each time point in each group is adjusted for the number of subjects at risk in the groups at each event time. The log-rank test determines if the observed number of events in each group is significantly different from the expected number. The formal test is based on a chi-squared statistic. When the log-rank statistic is large, it is evidence for a difference in the survival times between the groups. The log-rank statistic approximately has a chi-squared distribution with one degree of freedom, and the p-value is calculated using the chi-squared test. For the example data, the log-rank test for difference in survival gives a p-value of p=0.0653, indicating that the treatment groups do not differ significantly in survival, assuming an alpha level of 0.05. The sample size of 23 subjects is modest, so there is little power to detect differences between the treatment groups. The chi-squared test is based on an asymptotic approximation, so the p-value should be regarded with caution for small sample sizes. Kaplan–Meier curves and log-rank tests are most useful when the predictor variable is categorical (e.g., drug vs. placebo), or takes a small number of values (e.g., drug doses 0, 20, 50, and 100 mg/day) that can be treated as categorical. The log-rank test and KM curves don't work easily with quantitative predictors such as gene expression, white blood count, or age. For quantitative predictor variables, an alternative method is Cox proportional hazards regression analysis. Cox PH models also work with categorical predictor variables, which are encoded as {0,1} indicator or dummy variables. The log-rank test is a special case of a Cox PH analysis, and can be performed using Cox PH software. This example uses the melanoma data set from Dalgaard Chapter 14. [ 3 ] The data are in the R package ISwR. The Cox proportional hazards regression using R gives the results shown in the box. The Cox regression results are interpreted as follows. The summary output also gives upper and lower 95% confidence intervals for the hazard ratio: lower 95% bound = 1.15; upper 95% bound = 3.26. Finally, the output gives p-values for three alternative tests for overall significance of the model: These three tests are asymptotically equivalent. For large enough N, they will give similar results. For small N, they may differ somewhat. The last row, "Score (logrank) test", is the result for the log-rank test, with p=0.011, the same result as the log-rank test, because the log-rank test is a special case of a Cox PH regression. The likelihood ratio test has better behavior for small sample sizes, so it is generally preferred.
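A sketch of how the log-rank test and the simple Cox fit described above might be run in R follows. It assumes the aml columns used earlier and the ISwR melanom data frame with columns days, status (1 taken to mean death from melanoma) and sex, as in Dalgaard's presentation; the variable names are assumptions carried over from those sources.

library(survival)
# Log-rank test for a survival difference between the maintained and
# non-maintained groups in the aml data ("x" is the treatment indicator)
survdiff(Surv(time, status) ~ x, data = aml)
# Kaplan-Meier curves by treatment group
plot(survfit(Surv(time, status) ~ x, data = aml), lty = 1:2, xlab = "Weeks")

# Simple Cox proportional hazards model for the melanoma example
library(ISwR)
cox.sex <- coxph(Surv(days, status == 1) ~ sex, data = melanom)
summary(cox.sex)   # exp(coef) is the hazard ratio; LR, Wald and score tests follow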
The Cox model extends the log-rank test by allowing the inclusion of additional covariates. [ 4 ] This example uses the melanoma data set, where the predictor variables include a continuous covariate, the thickness of the tumor (variable name = "thick"). In the histograms, the thickness values are positively skewed and do not have a Gaussian-like, symmetric probability distribution. Regression models, including the Cox model, generally give more reliable results with normally distributed variables. For this example we may use a logarithmic transform. The log of the thickness of the tumor looks to be more normally distributed, so the Cox models will use log thickness. The Cox PH analysis gives the results in the box. The p-values for all three overall tests (likelihood, Wald, and score) are significant, indicating that the model is significant. The p-value for log(thick) is 6.9e-07, with a hazard ratio HR = exp(coef) = 2.18, indicating a strong relationship between the thickness of the tumor and increased risk of death. By contrast, the p-value for sex is now p=0.088. The hazard ratio HR = exp(coef) = 1.58, with a 95% confidence interval of 0.934 to 2.68. Because the confidence interval for HR includes 1, these results indicate that sex makes a smaller contribution to the difference in the HR after controlling for the thickness of the tumor, and that its effect only trends toward significance. Examination of graphs of log(thickness) by sex and a t-test of log(thickness) by sex both indicate that there is a significant difference between men and women in the thickness of the tumor when they first see the clinician. The Cox model assumes that the hazards are proportional. The proportional hazard assumption may be tested using the R function cox.zph(). A p-value which is less than 0.05 indicates that the hazards are not proportional. For the melanoma data we obtain p=0.222; hence, we cannot reject the null hypothesis of the hazards being proportional. Additional tests and graphs for examining a Cox model are described in the textbooks cited. Cox models can be extended to deal with variations on the simple analysis. The Cox PH regression model is a linear model. It is similar to linear regression and logistic regression. Specifically, these methods assume that a single line, curve, plane, or surface is sufficient to separate groups (alive, dead) or to estimate a quantitative response (survival time). In some cases alternative partitions give more accurate classification or quantitative estimates. One set of alternative methods are tree-structured survival models, [ 5 ] [ 6 ] [ 7 ] including survival random forests. [ 8 ] Tree-structured survival models may give more accurate predictions than Cox models. Examining both types of models for a given data set is a reasonable strategy. This example of a survival tree analysis uses the R package "rpart". [ 9 ] The example is based on 146 stage C prostate cancer patients in the data set stagec in rpart. Rpart and the stagec example are described in Atkinson and Therneau (1997), [ 10 ] which is also distributed as a vignette of the rpart package. [ 9 ] The variables in stagec are: The survival tree produced by the analysis is shown in the figure. Each branch in the tree indicates a split on the value of a variable. For example, the root of the tree splits subjects with grade < 2.5 versus subjects with grade 2.5 or greater.
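Before turning to the terminal nodes, the extended Cox fit, the proportional-hazards check, and a survival tree of this kind might be reproduced roughly as follows. This is a sketch only; it assumes the melanom columns named above, and that pgtime and pgstat are the follow-up time and progression indicator in the stagec data shipped with rpart.

library(survival)
library(ISwR)
# Cox model with sex and log tumour thickness as covariates
cox.thick <- coxph(Surv(days, status == 1) ~ sex + log(thick), data = melanom)
summary(cox.thick)
# Check the proportional hazards assumption; p < 0.05 would suggest a violation
cox.zph(cox.thick)

# Survival tree for the stage C prostate cancer data (as in the rpart vignette)
library(rpart)
tree.fit <- rpart(Surv(pgtime, pgstat) ~ age + eet + g2 + grade + gleason + ploidy,
                  data = stagec)
plot(tree.fit, uniform = TRUE)
text(tree.fit, use.n = TRUE)   # shows splits such as grade < 2.5 and event counts per node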
The terminal nodes indicate the number of subjects in the node, the number of subjects who have events, and the relative event rate compared to the root. In the node on the far left, the values 1/33 indicate that one of the 33 subjects in the node had an event, and that the relative event rate is 0.122. In the node on the far bottom right, the values 11/15 indicate that 11 of 15 subjects in the node had an event, and the relative event rate is 2.7. An alternative to building a single survival tree is to build many survival trees, where each tree is constructed using a sample of the data, and average the trees to predict survival. [ 8 ] This is the method underlying the survival random forest models. Survival random forest analysis is available in the R package "randomForestSRC". [ 11 ] The randomForestSRC package includes an example survival random forest analysis using the data set pbc. This data set is from the Mayo Clinic trial of primary biliary cirrhosis (PBC) of the liver conducted between 1974 and 1984. In the example, the random forest survival model gives more accurate predictions of survival than the Cox PH model. The prediction errors are estimated by bootstrap re-sampling. Recent advances in deep representation learning have been extended to survival estimation. The DeepSurv [ 12 ] model proposes to replace the log-linear parameterization of the Cox PH model with a multi-layer perceptron. Further extensions like Deep Survival Machines [ 13 ] and Deep Cox Mixtures [ 14 ] involve the use of latent variable mixture models to model the time-to-event distribution as a mixture of parametric or semi-parametric distributions while jointly learning representations of the input covariates. Deep learning approaches have shown superior performance especially on complex input data modalities such as images and clinical time series. The object of primary interest is the survival function, conventionally denoted S, which is defined as S(t) = Pr(T > t), where t is some time, T is a random variable denoting the time of death, and "Pr" stands for probability. That is, the survival function is the probability that the time of death is later than some specified time t. The survival function is also called the survivor function or survivorship function in problems of biological survival, and the reliability function in mechanical survival problems. In the latter case, the reliability function is denoted R(t). Usually one assumes S(0) = 1, although it could be less than 1 if there is the possibility of immediate death or failure. The survival function must be non-increasing: S(u) ≤ S(t) if u ≥ t. This property follows directly because T > u implies T > t. This reflects the notion that survival to a later age is possible only if all younger ages are attained. Given this property, the lifetime distribution function and event density (F and f below) are well-defined. The survival function is usually assumed to approach zero as age increases without bound (i.e., S(t) → 0 as t → ∞), although the limit could be greater than zero if eternal life is possible. For instance, we could apply survival analysis to a mixture of stable and unstable carbon isotopes; unstable isotopes would decay sooner or later, but the stable isotopes would last indefinitely. Related quantities are defined in terms of the survival function.
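As a concrete, self-contained illustration of these properties (not tied to any of the data sets above), the survival function of a simple parametric lifetime model can be tabulated directly in base R; the exponential rate 0.1 used below is an arbitrary choice.

# Exponential lifetime with rate 0.1 per unit time, so S(t) = exp(-0.1 * t)
t <- c(0, 1, 5, 10, 20, 50)
S <- pexp(t, rate = 0.1, lower.tail = FALSE)   # Pr(T > t)
round(S, 3)
# 1.000 0.905 0.607 0.368 0.135 0.007  -- S(0) = 1, non-increasing, tends to 0
all.equal(1 - S, pexp(t, rate = 0.1))          # F(t) = 1 - S(t), as defined next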
The lifetime distribution function, conventionally denoted F, is defined as the complement of the survival function, F(t) = Pr(T ≤ t) = 1 − S(t). If F is differentiable, then the derivative, which is the density function of the lifetime distribution, is conventionally denoted f: f(t) = F′(t). The function f is sometimes called the event density; it is the rate of death or failure events per unit time. The survival function can be expressed in terms of the probability distribution and probability density functions: S(t) = Pr(T > t) = ∫_t^∞ f(u) du = 1 − F(t). Similarly, a survival event density function can be defined as s(t) = S′(t) = d/dt ∫_t^∞ f(u) du = d/dt [1 − F(t)] = −f(t). In other fields, such as statistical physics, the survival event density function is known as the first passage time density. The hazard function h is defined as the event rate at time t, conditional on survival at time t. Synonyms for hazard function in different fields include hazard rate, force of mortality (demography and actuarial science, denoted by μ), force of failure, or failure rate (engineering, denoted λ). For example, in actuarial science, μ(x) denotes the rate of death for people aged x, whereas in reliability engineering λ(t) denotes the rate of failure of components after operation for time t. Suppose that an item has survived for a time t and we desire the probability that it will not survive for an additional time dt: h(t) = lim_{dt→0} Pr(t ≤ T < t + dt) / (dt · S(t)) = f(t)/S(t) = −S′(t)/S(t). Any function h is a hazard function if and only if it satisfies the following properties: In fact, the hazard rate is usually more informative about the underlying mechanism of failure than the other representations of a lifetime distribution. The hazard function must be non-negative, λ(t) ≥ 0, and its integral over [0, ∞] must be infinite, but is not otherwise constrained; it may be increasing or decreasing, non-monotonic, or discontinuous. An example is the bathtub curve hazard function, which is large for small values of t, decreasing to some minimum, and thereafter increasing again; this can model the property of some mechanical systems to either fail soon after operation, or much later, as the system ages. The hazard function can alternatively be represented in terms of the cumulative hazard function, conventionally denoted Λ or H: Λ(t) = −log S(t), so transposing signs and exponentiating gives S(t) = exp(−Λ(t)), or differentiating (with the chain rule) gives dΛ(t)/dt = −S′(t)/S(t) = λ(t). The name "cumulative hazard function" is derived from the fact that Λ(t) = ∫_0^t λ(u) du, which is the "accumulation" of the hazard over time. From the definition of Λ(t), we see that it increases without bound as t tends to infinity (assuming that S(t) tends to zero). This implies that λ(t) must not decrease too quickly, since, by definition, the cumulative hazard has to diverge. For example, exp(−t) is not the hazard function of any survival distribution, because its integral converges to 1. The survival function S(t), the cumulative hazard function Λ(t), the density f(t), the hazard function λ(t), and the lifetime distribution function F(t) are related through S(t) = exp(−Λ(t)) = f(t)/λ(t) = 1 − F(t) for t > 0. Future lifetime at a given time t₀ is the time remaining until death, given survival to age t₀; thus, it is T − t₀ in the present notation. The expected future lifetime is the expected value of the future lifetime. The probability of death at or before age t₀ + t, given survival until age t₀, is just P(T ≤ t₀ + t | T > t₀) = P(t₀ < T ≤ t₀ + t) / P(T > t₀) = [F(t₀ + t) − F(t₀)] / S(t₀). Therefore, the probability density of future lifetime is d/dt [F(t₀ + t) − F(t₀)] / S(t₀) = f(t₀ + t) / S(t₀), and the expected future lifetime is (1/S(t₀)) ∫_0^∞ t f(t₀ + t) dt = (1/S(t₀)) ∫_{t₀}^∞ S(t) dt, where the second expression is obtained using integration by parts. For t₀ = 0, that is, at birth, this reduces to the expected lifetime. In reliability problems, the expected lifetime is called the mean time to failure, and the expected future lifetime is called the mean residual lifetime. As the probability of an individual surviving until age t or later is S(t), by definition, the expected number of survivors at age t out of an initial population of n newborns is n × S(t), assuming the same survival function for all individuals. Thus the expected proportion of survivors is S(t). If the survival of different individuals is independent, the number of survivors at age t has a binomial distribution with parameters n and S(t), and the variance of the proportion of survivors is S(t) × (1 − S(t)) / n. The age at which a specified proportion of survivors remain can be found by solving the equation S(t) = q for t, where q is the quantile in question. Typically one is interested in the median lifetime, for which q = 1/2, or other quantiles such as q = 0.90 or q = 0.99.
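These identities can be checked numerically for any parametric lifetime distribution. The sketch below does so in base R for a Weibull distribution with arbitrarily chosen shape and scale, and also evaluates the expected future lifetime at t₀ = 5 by numerical integration; all values here are illustrative assumptions, not taken from the article.

# Weibull lifetime with arbitrary shape and scale
shape <- 1.5; scale <- 10
t <- seq(0.5, 30, by = 0.5)
S <- pweibull(t, shape, scale, lower.tail = FALSE)                  # survival function S(t)
f <- dweibull(t, shape, scale)                                      # event density f(t)
H <- -pweibull(t, shape, scale, lower.tail = FALSE, log.p = TRUE)   # cumulative hazard -log S(t)
h <- f / S                                                          # hazard function
all.equal(S, exp(-H))   # S(t) = exp(-Lambda(t))
all.equal(S, f / h)     # S(t) = f(t) / lambda(t)

# Expected future lifetime at t0: (1/S(t0)) * integral of S(t) from t0 to infinity
t0 <- 5
mrl <- integrate(function(u) pweibull(u, shape, scale, lower.tail = FALSE),
                 lower = t0, upper = Inf)$value /
  pweibull(t0, shape, scale, lower.tail = FALSE)
mrl   # mean residual life at t0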
Censoring is a form of missing data problem in which time to event is not observed for reasons such as termination of study before all recruited subjects have shown the event of interest or the subject has left the study prior to experiencing an event. Censoring is common in survival analysis. If only the lower limit l for the true event time T is known such that T > l , this is called right censoring . Right censoring will occur, for example, for those subjects whose birth date is known but who are still alive when they are lost to follow-up or when the study ends. We generally encounter right-censored data. If the event of interest has already happened before the subject is included in the study but it is not known when it occurred, the data is said to be left-censored . [ 15 ] When it can only be said that the event happened between two observations or examinations, this is interval censoring . Left censoring occurs for example when a permanent tooth has already emerged prior to the start of a dental study that aims to estimate its emergence distribution. In the same study, an emergence time is interval-censored when the permanent tooth is present in the mouth at the current examination but not yet at the previous examination. Interval censoring often occurs in HIV/AIDS studies. Indeed, time to HIV seroconversion can be determined only by a laboratory assessment which is usually initiated after a visit to the physician. Then one can only conclude that HIV seroconversion has happened between two examinations. The same is true for the diagnosis of AIDS, which is based on clinical symptoms and needs to be confirmed by a medical examination. It may also happen that subjects with a lifetime less than some threshold may not be observed at all: this is called truncation . Note that truncation is different from left censoring, since for a left censored datum, we know the subject exists, but for a truncated datum, we may be completely unaware of the subject. Truncation is also common. In a so-called delayed entry study, subjects are not observed at all until they have reached a certain age. For example, people may not be observed until they have reached the age to enter school. Any deceased subjects in the pre-school age group would be unknown. Left-truncated data are common in actuarial work for life insurance and pensions . [ 16 ] Left-censored data can occur when a person's survival time becomes incomplete on the left side of the follow-up period for the person. For example, in an epidemiological example, we may monitor a patient for an infectious disorder starting from the time when he or she is tested positive for the infection. Although we may know the right-hand side of the duration of interest, we may never know the exact time of exposure to the infectious agent. [ 17 ] Survival models can be usefully viewed as ordinary regression models in which the response variable is time. However, computing the likelihood function (needed for fitting parameters or making other kinds of inferences) is complicated by the censoring. The likelihood function for a survival model, in the presence of censored data, is formulated as follows. By definition the likelihood function is the conditional probability of the data given the parameters of the model. It is customary to assume that the data are independent given the parameters. Then the likelihood function is the product of the likelihood of each datum. 
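A small simulation may make the construction of right-censored data concrete: each subject has a latent event time and a follow-up limit, only the smaller of the two is observed, and a status indicator records which one it was. The sketch below uses base R; the variable names and distributions are purely illustrative.

set.seed(1)
n <- 10
event.time <- rexp(n, rate = 0.05)              # latent (true) event times
followup   <- runif(n, 0, 30)                   # end of follow-up for each subject
time   <- pmin(event.time, followup)            # what is actually observed
status <- as.numeric(event.time <= followup)    # 1 = event observed, 0 = right-censored
data.frame(time = round(time, 1), status)
# For rows with status = 0 we only know that the true event time exceeds "time"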
It is convenient to partition the data into four categories: uncensored, left censored, right censored, and interval censored. These are denoted "unc.", "l.c.", "r.c.", and "i.c." in the equation below: L(θ) = ∏_{T_i ∈ unc.} Pr(T = T_i | θ) · ∏_{i ∈ l.c.} Pr(T < T_i | θ) · ∏_{i ∈ r.c.} Pr(T > T_i | θ) · ∏_{i ∈ i.c.} Pr(T_{i,l} < T < T_{i,r} | θ). For uncensored data, with T_i equal to the age at death, we have Pr(T = T_i | θ) = f(T_i | θ). For left-censored data, such that the age at death is known to be less than T_i, we have Pr(T < T_i | θ) = F(T_i | θ) = 1 − S(T_i | θ). For right-censored data, such that the age at death is known to be greater than T_i, we have Pr(T > T_i | θ) = 1 − F(T_i | θ) = S(T_i | θ). For an interval-censored datum, such that the age at death is known to be less than T_{i,r} and greater than T_{i,l}, we have Pr(T_{i,l} < T < T_{i,r} | θ) = S(T_{i,l} | θ) − S(T_{i,r} | θ). An important application where interval-censored data arise is current status data, where an event T_i is known not to have occurred before an observation time and to have occurred before the next observation time. The Kaplan–Meier estimator can be used to estimate the survival function. The Nelson–Aalen estimator can be used to provide a non-parametric estimate of the cumulative hazard rate function. These estimators require lifetime data. Periodic case (cohort) and death (and recovery) counts are statistically sufficient to make nonparametric maximum likelihood and least squares estimates of survival functions, without lifetime data. While many parametric models assume continuous time, discrete-time survival models can be mapped to a binary classification problem. In a discrete-time survival model the survival period is artificially resampled into intervals where, for each interval, a binary target indicator records whether the event takes place within a certain time horizon. [ 18 ] If a binary classifier (potentially enhanced with a different likelihood to take more structure of the problem into account) is calibrated, then the classifier score is the hazard function (i.e. the conditional probability of failure). [ 18 ] Discrete-time survival models are connected to empirical likelihood. [ 19 ] [ 20 ] The goodness of fit of survival models can be assessed using scoring rules. [ 21 ] The textbook by Kleinbaum has examples of survival analyses using SAS, R, and other packages. [ 22 ] The textbooks by Brostrom, [ 23 ] Dalgaard [ 3 ] and Tableman and Kim [ 24 ] give examples of survival analyses using R (or using S code that also runs in R).
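As a concrete case, under an exponential model each observed event contributes f(T_i) = λ exp(−λ T_i) to the likelihood and each right-censored observation contributes S(T_i) = exp(−λ T_i), so the maximum likelihood estimate of λ is the number of events divided by the total follow-up time. The sketch below checks this numerically on the aml data (assuming, as above, its time and status columns); it is an illustration of the censored-likelihood construction, not part of the cited analyses.

library(survival)   # for the aml data used earlier
# Log-likelihood for right-censored exponential data: events contribute
# log f(t) = log(rate) - rate * t, censored observations contribute log S(t) = -rate * t
loglik <- function(rate, time, status) {
  sum(status * log(rate)) - rate * sum(time)
}
fit <- optimize(loglik, interval = c(1e-6, 1),
                time = aml$time, status = aml$status, maximum = TRUE)
c(numerical = fit$maximum, closed.form = sum(aml$status) / sum(aml$time))
# Both values agree: the MLE is the number of events divided by total time at risk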
https://en.wikipedia.org/wiki/Survival_analysis
" Survival of the fittest " [ 1 ] is a phrase that originated from Darwinian evolutionary theory as a way of describing the mechanism of natural selection . The biological concept of fitness is defined as reproductive success . In Darwinian terms, the phrase is best understood as "survival of the form that in successive generations will leave most copies of itself." [ 2 ] Herbert Spencer first used the phrase, after reading Charles Darwin 's On the Origin of Species , in his Principles of Biology (1864), in which he drew parallels between his own economic theories and Darwin's biological ones: "This survival of the fittest, which I have here sought to express in mechanical terms, is that which Mr. Darwin has called 'natural selection', or the preservation of favoured races in the struggle for life." [ 3 ] Darwin responded positively to Alfred Russel Wallace 's suggestion of using Spencer's new phrase "survival of the fittest" as an alternative to "natural selection", and adopted the phrase in The Variation of Animals and Plants Under Domestication published in 1868. [ 3 ] [ 4 ] In On the Origin of Species , he introduced the phrase in the fifth edition published in 1869, [ 5 ] [ 6 ] intending it to mean "better designed for an immediate, local environment". [ 7 ] [ 8 ] By his own account, Herbert Spencer described a concept similar to "survival of the fittest" in his 1852 "A Theory of Population". [ 9 ] He first used the phrase – after reading Charles Darwin 's On the Origin of Species – in his Principles of Biology of 1864 [ 10 ] in which he drew parallels between his economic theories and Darwin's biological, evolutionary ones, writing, "This survival of the fittest, which I have here sought to express in mechanical terms, is that which Mr. Darwin has called 'natural selection', or the preservation of favored races in the struggle for life." [ 3 ] In July 1866 Alfred Russel Wallace wrote to Darwin about readers thinking that the phrase " natural selection " personified nature as "selecting" and said this misconception could be avoided "by adopting Spencer's term" Survival of the fittest . Darwin promptly replied that Wallace's letter was "as clear as daylight. I fully agree with all that you say on the advantages of H. Spencer's excellent expression of 'the survival of the fittest'. This however had not occurred to me till reading your letter. It is, however, a great objection to this term that it cannot be used as a substantive governing a verb". Had he received the letter two months earlier, he would have worked the phrase into the fourth edition of the Origin which was then being printed, and he would use it in his next book on "Domestic Animals etc.". [ 3 ] Darwin wrote on page 6 of The Variation of Animals and Plants Under Domestication published in 1868, "This preservation, during the battle for life, of varieties which possess any advantage in structure, constitution, or instinct, I have called Natural Selection; and Mr. Herbert Spencer has well expressed the same idea by the Survival of the Fittest. The term 'natural selection' is in some respects a bad one, as it seems to imply conscious choice; but this will be disregarded after a little familiarity". He defended his analogy as similar to language used in chemistry, and to astronomers depicting the "attraction of gravity as ruling the movements of the planets", or the way in which "agriculturists speak of man making domestic races by his power of selection". 
He had "often personified the word Nature; for I have found it difficult to avoid this ambiguity; but I mean by nature only the aggregate action and product of many natural laws,—and by laws only the ascertained sequence of events." [ 4 ] In the first four editions of On the Origin of Species , Darwin had used the phrase "natural selection". [ 11 ] In Chapter 4 of the 5th edition of The Origin published in 1869, [ 5 ] Darwin implies again the synonym: "Natural Selection, or the Survival of the Fittest". [ 6 ] By "fittest" Darwin meant "better adapted for the immediate, local environment", not the common modern meaning of "in the best physical shape" (think of a puzzle piece, not an athlete). [ 7 ] In the introduction he gave full credit to Spencer, writing "I have called this principle, by which each slight variation, if useful, is preserved, by the term Natural Selection, in order to mark its relation to man's power of selection. But the expression often used by Mr. Herbert Spencer of the Survival of the Fittest is more accurate, and is sometimes equally convenient." [ 12 ] In The Man Versus The State , Spencer used the phrase in a postscript to justify a plausible explanation of how his theories would not be adopted by "societies of militant type". He uses the term in the context of societies at war, and the form of his reference suggests that he is applying a general principle: [ 13 ] "Thus by survival of the fittest, the militant type of society becomes characterized by profound confidence in the governing power, joined with a loyalty causing submission to it in all matters whatever". [ 14 ] Though Spencer's conception of organic evolution is commonly interpreted as a form of Lamarckism , [ a ] Herbert Spencer is sometimes credited with inaugurating Social Darwinism . Evolutionary biologists criticise the manner in which the term is used by non-scientists and the connotations that have grown around the term in popular culture . The phrase also does not help in conveying the complex nature of natural selection, so modern biologists prefer and almost exclusively use the term natural selection . The biological concept of fitness refers to both reproductive success ( fecundity selection ), as well as survival ( viability selection ) and is not prescriptive in the specific ways in which organisms can be more "fit" by having phenotypic characteristics that enhance survival and reproduction (which was the meaning that Spencer had in mind). [ 16 ] While the phrase "survival of the fittest" is often used to mean " natural selection ", it is avoided by modern biologists, because the phrase can be misleading. For example, survival is only one aspect of selection, and not always the most important. Another problem is that the word "fit" is frequently confused with a state of physical fitness . In the evolutionary meaning, " fitness " is the rate of reproductive output among a class of genetic variants. [ 17 ] The phrase can also be interpreted to express a theory or hypothesis: that "fit" as opposed to "unfit" individuals or species, in some sense of "fit", will survive some test. Nevertheless, when extended to individuals it is a conceptual mistake, the phrase is a reference to the transgenerational survival of the heritable attributes; particular individuals are quite irrelevant. 
This becomes clearer when referring to viral quasispecies and survival of the flattest, which make clear that "survival" here does not refer to the question of being alive itself, but rather to the functional capacity of proteins to carry out work. Interpretations of the phrase as expressing a theory are in danger of being tautological, meaning roughly "those with a propensity to survive have a propensity to survive"; to have content the theory must use a concept of fitness that is independent of that of survival. [ 7 ] [ 18 ] Interpreted as a theory of species survival, the theory that the fittest species survive is undermined by evidence that while direct competition is observed between individuals, populations and species, there is little evidence that competition has been the driving force in the evolution of large groups such as, for example, amphibians, reptiles, and mammals. Instead, these groups have evolved by expanding into empty ecological niches. [ 19 ] In the punctuated equilibrium model of environmental and biological change, the factor determining survival is often not superiority over another in competition but the ability to survive dramatic changes in environmental conditions, such as after a meteor impact energetic enough to greatly change the environment globally. In 2010 Sahney et al. argued that there is little evidence that intrinsic, biological factors such as competition have been the driving force in the evolution of large groups. Instead, they cited extrinsic, abiotic factors such as expansion as the driving factor on a large evolutionary scale. The rise of dominant groups such as amphibians, reptiles, mammals and birds occurred by opportunistic expansion into empty ecological niches, and the extinction of groups happened due to large shifts in the abiotic environment. [ 19 ] The use of the term "social Darwinism" as a critique of capitalist ideologies was introduced in Richard Hofstadter's Social Darwinism in American Thought, published in 1944. [ 20 ] Russian zoologist and anarchist Peter Kropotkin viewed the concept of "survival of the fittest" as supporting co-operation rather than competition. In his book Mutual Aid: A Factor of Evolution he set out his analysis leading to the conclusion that the fittest was not necessarily the best at competing individually, but often the community made up of those best at working together. He concluded that: In the animal world we have seen that the vast majority of species live in societies, and that they find in association the best arms for the struggle for life: understood in its wide Darwinian sense – not as a struggle for the sheer means of existence, but as a struggle against all natural conditions unfavourable to the species. The animal species, in which individual struggle has been reduced to its narrowest limits, and the practice of mutual aid has attained the greatest development, are invariably the most numerous, the most prosperous, and the most open to further progress. [ 21 ] Applying this concept to human society, Kropotkin presented mutual aid as one of the dominant factors of evolution, the other being self-assertion, and concluded that: In the practice of mutual aid, which we can retrace to the earliest beginnings of evolution, we thus find the positive and undoubted origin of our ethical conceptions; and we can affirm that in the ethical progress of man, mutual support – not mutual struggle – has had the leading part.
In its wide extension, even at the present time, we also see the best guarantee of a still loftier evolution of our race. [ 22 ] "Survival of the fittest" is sometimes claimed to be a tautology . [ 23 ] The reasoning is that if one takes the term "fit" to mean "endowed with phenotypic characteristics which improve chances of survival and reproduction" (which is roughly how Spencer understood it), then "survival of the fittest" can simply be rewritten as "survival of those who are better equipped for surviving". Furthermore, the expression does become a tautology if one uses the most widely accepted definition of "fitness" in modern biology, namely reproductive success itself (rather than any set of characters conducive to this reproductive success). This reasoning is sometimes used to claim that Darwin's entire theory of evolution by natural selection is fundamentally tautological, and therefore devoid of any explanatory power. [ 23 ] However, the expression "survival of the fittest" (taken on its own and out of context) gives a very incomplete account of the mechanism of natural selection. The reason is that it does not mention a key requirement for natural selection, namely the requirement of heritability . It is true that the phrase "survival of the fittest", in and by itself, is a tautology if fitness is defined by survival and reproduction. Natural selection is the portion of variation in reproductive success that is caused by heritable characters (see the article on natural selection ). [ 23 ] If certain heritable characters increase or decrease the chances of survival and reproduction of their bearers, then it follows mechanically (by definition of "heritable") that those characters that improve survival and reproduction will increase in frequency over generations. This is precisely what is called " evolution by natural selection ". On the other hand, if the characters which lead to differential reproductive success are not heritable, then no meaningful evolution will occur, "survival of the fittest" or not: if improvement in reproductive success is caused by traits that are not heritable, then there is no reason why these traits should increase in frequency over generations. In other words, natural selection does not simply state that "survivors survive" or "reproducers reproduce"; rather, it states that "survivors survive, reproduce and therefore propagate any heritable characters which have affected their survival and reproductive success". This statement is not tautological: it hinges on the testable hypothesis that such fitness-impacting heritable variations actually exist (a hypothesis that has been amply confirmed.) [ 23 ] Momme von Sydow suggested further definitions of 'survival of the fittest' that may yield a testable meaning in biology and also in other areas where Darwinian processes have been influential. However, much care would be needed to disentangle tautological from testable aspects. Moreover, an "implicit shifting between a testable and an untestable interpretation can be an illicit tactic to immunize natural selection ... while conveying the impression that one is concerned with testable hypotheses". 
[ 18 ] [ 24 ] Skeptic Society founder and Skeptic magazine publisher Michael Shermer addresses the tautology problem in his 1997 book, Why People Believe Weird Things , in which he points out that although tautologies are sometimes the beginning of science, they are never the end, and that scientific principles like natural selection are testable and falsifiable by virtue of their predictive power. Shermer points out, as an example, that population genetics accurately demonstrate when natural selection will and will not effect change on a population. Shermer hypothesizes that if hominid fossils were found in the same geological strata as trilobites , it would be evidence against natural selection. [ 25 ]
https://en.wikipedia.org/wiki/Survival_of_the_fittest
Survive To Fight is the title of a British Army publication which details the use of NBC protective equipment and other procedures to be carried out in the event of an attack with nuclear, biological or chemical weapons. So far five editions have been published (plus two reprint runs); the first three are in the form of a ring-bound manual with a plastic cover, and the last, Edition 5, is an 82-page loose-leaf TAM insert (6-hole ring binder). Edition I (AC71388) covers the use of the S6 Respirator and Mk.III NBC suit and overboots; Edition II (1990) (reprinted in 1992 with colour photographs on the cover) features the S10 Respirator and Mk.IV suit and overboots introduced in the interim. Edition III (1995), with different colour photographs on the cover (reprinted once, glue-bound at the top, circa 1998), is a revised version of Edition II and features the addition of the Mk.V overboots. Edition IV (Jan 2002) is a TAM-sized bound booklet. The last edition carrying the title 'Survive to Fight' was issued in September 2005 (Edition 5), though this carried Army Code 64358 (not a JSP). The title of the publication was changed to JSP926 'Counter CBRN Aide Memoire' in July 2012 to reflect different operating conditions; this booklet is in the same loose-leaf TAM insert format. The publication continues in this TAM-sized form and can also be downloaded for use on a phone or laptop. [ 1 ] This United Kingdom military article is a stub. You can help Wikipedia by expanding it.
https://en.wikipedia.org/wiki/Survive_To_Fight
Survivin, also called baculoviral inhibitor of apoptosis repeat-containing 5 or BIRC5, is a protein that, in humans, is encoded by the BIRC5 gene. [ 5 ] Survivin is a member of the inhibitor of apoptosis (IAP) family. The survivin protein functions to inhibit caspase activation, thereby leading to negative regulation of apoptosis or programmed cell death. [ 6 ] This has been shown by disruption of survivin induction pathways leading to an increase in apoptosis and a decrease in tumour growth. The survivin protein is expressed highly in most human tumours and fetal tissue, but is completely absent in terminally differentiated cells. [ 7 ] Survivin is distinguished from other IAP family members in that it has only one baculoviral IAP repeat (BIR) domain. At 16.5 kDa, it is the smallest member of the IAP family. [ 8 ] Survivin is expressed in a cell cycle-dependent manner, with highest levels in the G2/M phase. It localizes to the mitotic spindle during cell division and interacts with tubulin. [ 9 ] [ 10 ] Survivin plays important roles in regulating mitosis, inhibiting apoptosis, and promoting angiogenesis. [ 10 ] [ 11 ] Survivin is highly expressed in most human cancers but is rarely detectable in normal adult tissues. [ 7 ] Its overexpression in tumors correlates with increased drug resistance, reduced apoptosis, and poor patient prognosis. The aberrant regulation of survivin in cancer cells makes it an attractive target for cancer therapy. [ 12 ] Several approaches targeting survivin are being investigated as potential cancer treatments, including: Survivin has been shown to interact with:
https://en.wikipedia.org/wiki/Survivin
Susan Chomba is a Kenyan scientist and environmentalist. She is a director at the World Resources Institute. Chomba grew up in poverty in Kirinyaga County. [ 1 ] Chomba was largely raised by her grandmother, as her mother, a single parent, was always working. Chomba's mother grew capsicum and French beans on a small plot of land owned by a step-uncle and created a farming cooperative. [ 2 ] When Chomba was nine, a local boarding school rejected her due to her poverty, so she attended one further away, in Western Kenya. When her mother was no longer able to afford to send her there, Chomba returned to Kirinyaga to attend the provincial high school. Each student in the school was given a patch of land to farm. Chomba experimented with organic farming, growing cabbage to withstand the cold climate. [ 2 ] Although Chomba had hoped to study law or agricultural economics, she was placed in a forestry course at Moi University. [ 2 ] [ 3 ] In her third year, when taking an agroforestry class, she found her calling. [ 2 ] Chomba joined the International Centre for Research in Agroforestry, where she led Regreening Africa, an eight-country land restoration program that restored one million hectares of degraded land in Africa. [ 2 ] Chomba was a member of the first cohort to graduate with a dual European master's degree in Sustainable Tropical Forestry from Bangor University and the University of Copenhagen. She completed fieldwork in Tanzania. [ 4 ] She went on to obtain a PhD in forest governance at the University of Copenhagen. [ 2 ] [ 3 ] In 2021, Chomba joined the World Resources Institute as their Director of Vital Landscapes for Africa, where she leads their work on "Forests, Food systems and People." [ 5 ] [ 3 ] She is also a global ambassador for the Race to Zero and Race to Resilience under the UN High Level Champions for Climate Action. [ 3 ] [ 6 ]
https://en.wikipedia.org/wiki/Susan_Chomba
Susan Lynn Solomon (August 23, 1951 – September 8, 2022) was an American executive and lawyer. She was the chief executive officer and co-founder of the New York Stem Cell Foundation (NYSCF). [ 1 ] Solomon was born in Brooklyn on August 23, 1951. [ 2 ] Her father, Seymour Solomon , was the co-founder of Vanguard Records alongside his brother, Maynard ; [ 2 ] [ 3 ] her mother, Ruth (Katz), was a pianist and worked as a manager of concert musicians. [ 2 ] Solomon attended the Fieldston School . She then studied history at New York University , graduating with a bachelor's degree in 1975. Three years later, she obtained a Juris Doctor from Rutgers University School of Law , [ 4 ] where she was an editor of the Rutgers Law Review . [ 5 ] Solomon started her career as an attorney at Debevoise & Plimpton , [ 6 ] and worked in the legal profession until 1981. [ 2 ] She subsequently held executive positions at MacAndrews & Forbes and APAX (formerly MMG Patricof and Co.). She was the founder and President of Sony Worldwide Networks , [ 7 ] the chairman and CEO of Lancit Media Productions, [ 2 ] an Emmy award-winning television production company, and then served as the founding CEO of Sotheby's website [ 8 ] prior to founding her own strategic management consulting firm Solomon Partners LLC in 2000. [ 2 ] Solomon was a founding Board member of the Global Alliance for iPSC Therapies (GAiT) and New Yorkers for the Advancement of Medical Research (NYAMR). She served on the Board of the College Diabetes Network [ 9 ] and was a board member for the Centre for Commercialization of Regenerative Medicine. [ 10 ] She also served on the board of directors of the Regional Plan Association of New York, [ 11 ] where she was a member of the nominating and governance committee. She previously sat on the strategic planning committee for the Empire State Stem Cell Board. [ 12 ] Solomon co-founded NYSCF in 2005. She had earlier started work as a health-care advocate in 1992, when her son was diagnosed with type 1 diabetes . [ 13 ] As a result of her son's diagnosis and then her mother's death from cancer in 2004, she sought to find a way in which the most advanced medical research could translate more quickly into cures. In conversations with clinicians and scientists, Solomon identified stem cells as the most promising way to address unmet patient needs. [ 14 ] At the time of her death, NYSCF was one of the biggest nonprofits dedicated to stem cell research, employing 45 scientists at their Research Institute in Manhattan and funding an additional 75 scientists around the world. [ 15 ] Solomon married her first husband, Gary Hirsh , in 1968. Together, they had one son. They divorced and she later married Paul Goldberger in 1980. They remained married until her death, and had two children. [ 2 ] Solomon died on September 8, 2022, at her home in Amagansett, New York . She was 71, and suffered from ovarian cancer prior to her death. [ 2 ]
https://en.wikipedia.org/wiki/Susan_L._Solomon
Susan M. Kauzlarich is an American chemist and is presently a distinguished professor of chemistry at the University of California, Davis (UC Davis). [ 1 ] At UC Davis, Kauzlarich leads a research group focused on the synthesis and characterization of Zintl phases and nanoclusters with applications in the fields of thermoelectric materials, [ 2 ] [ 3 ] [ 4 ] magnetic resonance imaging, energy storage, [ 5 ] opto-electronics, and drug delivery. Kauzlarich has published over 250 peer-reviewed publications and has been awarded several patents. [ 6 ] In 2009, Kauzlarich received the annual Presidential Award for Excellence in Science, Mathematics and Engineering Mentoring, which is administered by the National Science Foundation to acknowledge faculty members who raise the membership of minorities, women and disabled students in the science and engineering fields. [ 7 ] In January 2022 she became Deputy Editor for the scientific journal Science Advances. She gave the Edward Herbert Boomer Memorial Lecture at the University of Alberta in 2023. [ 8 ] Kauzlarich received a Bachelor of Science in chemistry from the College of William & Mary in 1980. [ 1 ] [ 9 ] Although she originally planned to become a high school chemistry teacher, her collegiate mentors encouraged her to pursue graduate studies in chemistry. [ 10 ] She did her graduate studies with Bruce A. Averill at Michigan State University, receiving a chemistry PhD in 1985. During her graduate studies, Kauzlarich primarily worked on the synthesis, development and study of low-dimensional conducting materials derived from the layered material FeOCl. [ 11 ] [ 12 ] Her characterization methods for these new materials included X-ray absorption spectroscopy and neutron diffraction. From 1985 to 1987, Kauzlarich was a postdoctoral fellow with John Corbett at Iowa State University, [ 13 ] where she explored the synthesis and bonding characteristics of novel extended condensed metal chain compounds built on [R6I12Z] (R = Ln, Y, Sc; Z = B, C, N, C2) clusters. [ 14 ] Kauzlarich joined the department of Chemistry at the University of California, Davis in 1987. She was promoted to associate professor in 1992, to full professor in 1996, and to distinguished professor in 2014. She was the Maria Goeppert Mayer Distinguished Scholar at Argonne National Laboratory from 1997 to 1998, Faculty Assistant to the Dean of Mathematical and Physical Sciences from 2010 to 2013, and chair of the chemistry department from 2013 to 2016. [ 1 ] Kauzlarich served as an associate editor for the journal Chemistry of Materials from 2006 to 2021 and is a deputy editor for Science Advances (2022–). She has been a member of the editorial advisory board for the handbook Physics and Chemistry of the Rare Earths since 2002. She was an associate editor for the Journal of Solid State Chemistry from 2000 to 2005, [ 1 ] and a member of the advisory review board of the Research Corporation for Science Advancement from 2004 to 2010. [ 1 ] [ 10 ] Kauzlarich is the editor of the book Chemistry, Structure, and Bonding of Zintl Phases and Ions. [ 15 ] [ 16 ] Kauzlarich is an advocate for diversity in the chemistry community and is well known for her personal commitment to mentorship. Throughout her career she has built and continues to support a pipeline of women and underrepresented students in the field of chemistry from high school through graduate study.
During her career, Kauzlarich's mentorship strategies have expanded to help support a culture shift in her community through discussions, workshops, and development of new initiatives. [ 7 ] [ 17 ] One of her initiatives has been the development of the American Chemical Society Summer Educational Experience for the Economically Disadvantaged Program (SEED) program which she established at UC Davis in 1988. [ 18 ] For her mentorship of students, Kauzlarich was recognized by Barack Obama with the 2009 Presidential Award for Excellence in Science, Mathematics, and Engineering Mentoring . [ 7 ] [ 19 ] At UC Davis, she serves as committee member for the Center for the Advancement of Multicultural Perspectives on Science, part of the UC Davis "ADVANCE" initiative. [ 20 ] She is also an active member of the steering committees at UC Davis including the Women's Research and Resource Center and Women in Science and Engineering. [ 21 ] Kauzlarich's research focuses on synthesis and characterization of novel solid state materials. Some of Kauzlarich's publications from her independent research career are listed below: Kauzlarich has also been a longstanding global expert on the preparation of colloidal nanoclusters and most particularly the preparation of challenging to access Group IV derivatives. These materials hold promise in the areas of biomedicine alongside, importantly, next-generation devices with novel optical and transport properties. Listed below are some of her research team's publications in this research area to-date: Kauzlarich has received numerous awards including: [ 1 ]
https://en.wikipedia.org/wiki/Susan_M._Kauzlarich
Susan Reutzel-Edens FRSC is an American chemist who is the Head of Science at the Cambridge Crystallographic Data Centre . Her work considers solid state chemistry and pharmaceuticals. She is interested in crystal structure prediction. She serves on the editorial boards of CrystEngComm and Crystal Growth & Design . Reutzel-Edens was a doctoral researcher at the University of Minnesota , where she studied the design and characterization of hydrogen-bonded imide aggregates. She worked in the laboratory of crystallographer Margaret C. Etter , and made use of solid-state NMR. [ 1 ] During her doctorate, she investigated how hydrogen bonds could be used as design elements that guided the solid-state self-assembly of organic molecules. She made use of the Cambridge Structural Database to unravel the complicated relationships between hydrate formation and crystal polymorphism. [ citation needed ] Reutzel-Edens joined Eli Lilly and Company , where she recognized that it would be challenging to identify and design increasingly complicated drug targets, and instead proposed the use of computational approaches. Through collaborations with the Cambridge Crystallographic Data Centre , Reutzel-Edens founded the Lilly solid form design program. [ 2 ] Her research has considered crystal polymorphism and crystal nucleation. She used computational approaches to identify commercially viable small molecule drug products. [ 3 ] [ 4 ] [ 5 ] To this end, Reutzel-Edens proposed the use of crystal structure prediction to identify pharmaceutical molecules to complement experimental investigations. [ 6 ] She has described olanzapine as "an incredible molecule, a gift to crystal chemistry that keeps on giving". [ 7 ] In 2018, Reutzel-Edens was appointed Fellow of the Royal Society of Chemistry . She serves on the editorial boards of CrystEngComm and Crystal Growth & Design . [ 8 ] In 2021 Reutzel-Edens joined the Cambridge Crystallographic Data Centre as Head of Science. [ 9 ]
https://en.wikipedia.org/wiki/Susan_Reutzel-Edens
Sushila Maharjan is a Nepalese biochemist and biotechnologist who is the research director at Nepal's Research Institute for Bioscience and Biotechnology, which she co-founded in 2011. Maharjan was born to Jit Govinda Maharjan and Radha Devi Maharjan in the village of Badegaun, in the Lalitpur District of Nepal. She has three sisters and a brother: her sisters are Hema Maharjan (founder of Namaste Kids Montessori), Rajani Maharjan (a business development and marketing manager, software engineer, systems analyst and quality auditor) and Ranjeeta Maharjan (an architect), and her brother, Dipendra Maharjan, is an army orthopaedic surgeon. She completed her early schooling at Arniko Secondary Boarding School and earned her I.Sc. at Amrit Science College, which is affiliated with Tribhuvan University . She received her master's degree in organic chemistry from Tribhuvan University and later completed her PhD at Sun Moon University , South Korea. She has conducted research into the use of soil microbes in medicine, including applications in the development of new antibiotics . In recognition of her work, she was one of five young scientists from developing countries who received the Elsevier Foundation Award in 2016. [ 1 ] [ 2 ] After receiving a bachelor's degree in chemistry and biological sciences, Maharjan went on to study organic chemistry at Nepal's Tribhuvan University , earning a master's degree in 2003. For her PhD from Sun Moon University , Korea, in 2011, she pursued her interest in metabolic and genetic engineering, focusing on streptomycetes . [ 2 ] [ 3 ] Back in Nepal, she investigated the potential of natural resources for medical applications. A founding member of Nepal's Research Institute for Bioscience and Biotechnology, where she is now research director, she researched streptomyces in soil at high altitudes as a basis for developing new antibiotics for treating diseases which are resistant to existing drugs. She has also sought to improve the teaching of science in Nepal by encouraging her students to work in research laboratories where they can put theory into practice. [ 4 ] As of April 2018, Maharjan has been applying an organ-on-a-chip approach to studying differences between the sexes in disease, in her role as a post-doctoral research fellow at Brigham and Women's Hospital in Boston . [ 5 ]
https://en.wikipedia.org/wiki/Sushila_Maharjan
In mathematics , Suslin's problem is a question about totally ordered sets posed by Mikhail Yakovlevich Suslin ( 1920 ) and published posthumously. It has been shown to be independent of the standard axiomatic system of set theory known as ZFC ; Solovay & Tennenbaum (1971) showed that the statement can neither be proven nor disproven from those axioms, assuming ZF is consistent. (Suslin is also sometimes written with the French transliteration as Souslin , from the Cyrillic Суслин .) Un ensemble ordonné (linéairement) sans sauts ni lacunes et tel que tout ensemble de ses intervalles (contenant plus qu'un élément) n'empiétant pas les uns sur les autres est au plus dénumerable, est-il nécessairement un continue linéaire (ordinaire)? Is a (linearly) ordered set without jumps or gaps and such that every set of its intervals (containing more than one element) not overlapping each other is at most denumerable, necessarily an (ordinary) linear continuum? Suslin's problem asks: Given a non-empty totally ordered set R with the four properties (1) R has neither a least nor a greatest element; (2) the order on R is dense (between any two distinct elements there is another); (3) the order on R is complete, in the sense that every non-empty bounded subset has a supremum and an infimum; and (4) every collection of mutually disjoint non-empty open intervals in R is countable (this is the countable chain condition for the order topology of R), is R necessarily order-isomorphic to the real line R ? If the requirement for the countable chain condition is replaced with the requirement that R contains a countable dense subset (i.e., R is a separable space ), then the answer is indeed yes: any such set R is necessarily order-isomorphic to R (proved by Cantor ). The condition for a topological space that every collection of non-empty disjoint open sets is at most countable is called the Suslin property . Any totally ordered set that is not isomorphic to R but satisfies properties 1–4 is known as a Suslin line . The Suslin hypothesis says that there are no Suslin lines: that every countable-chain-condition dense complete linear order without endpoints is isomorphic to the real line. An equivalent statement is that every tree of height ω₁ either has a branch of length ω₁ or an antichain of cardinality ℵ₁. The generalized Suslin hypothesis says that for every infinite regular cardinal κ every tree of height κ either has a branch of length κ or an antichain of cardinality κ. The existence of Suslin lines is equivalent to the existence of Suslin trees and to Suslin algebras . The Suslin hypothesis is independent of ZFC. Jech (1967) and Tennenbaum (1968) independently used forcing methods to construct models of ZFC in which Suslin lines exist. Jensen later proved that Suslin lines exist if the diamond principle , a consequence of the axiom of constructibility V = L, is assumed. (Jensen's result was a surprise, as it had previously been conjectured that V = L implies that no Suslin lines exist, on the grounds that V = L implies that there are "few" sets.) On the other hand, Solovay & Tennenbaum (1971) used forcing to construct a model of ZFC without Suslin lines; more precisely, they showed that Martin's axiom plus the negation of the continuum hypothesis implies the Suslin hypothesis. The Suslin hypothesis is also independent of both the generalized continuum hypothesis (proved by Ronald Jensen ) and of the negation of the continuum hypothesis . It is not known whether the generalized Suslin hypothesis is consistent with the generalized continuum hypothesis; however, since the combination implies the negation of the square principle at a singular strong limit cardinal —in fact, at all singular cardinals and all regular successor cardinals —it implies that the axiom of determinacy holds in L(R) and is believed to imply the existence of an inner model with a superstrong cardinal .
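For readers who want the statement in symbols, the display below gives a standard formalisation of a Suslin line and of the tree reformulation mentioned above; the notation is chosen here for convenience and is not quoted from the article's references.

```latex
% A Suslin line is a totally ordered set (R,<) such that:
%   (1) R has no least and no greatest element;
%   (2) the order is dense:  \forall x<y\ \exists z\,(x<z<y);
%   (3) the order is complete: every non-empty bounded subset has a supremum and an infimum;
%   (4) ccc: every family of pairwise disjoint non-empty open intervals is countable;
% and yet (R,<) is NOT order-isomorphic to (\mathbb{R},<).
%
% Tree reformulation: the Suslin hypothesis SH is equivalent to
\forall T\ \Bigl(\operatorname{height}(T)=\omega_{1}\ \longrightarrow\
   \bigl(\exists\,\text{branch } b\subseteq T:\ |b|=\aleph_{1}\bigr)\ \vee\
   \bigl(\exists\,\text{antichain } A\subseteq T:\ |A|=\aleph_{1}\bigr)\Bigr)
% i.e. there is no Suslin tree (an \omega_{1}-tree all of whose branches and
% antichains are countable).
```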
https://en.wikipedia.org/wiki/Suslin's_problem
In mathematics, a Suslin algebra is a Boolean algebra that is complete , atomless , countably distributive , and satisfies the countable chain condition . They are named after Mikhail Yakovlevich Suslin . [ 1 ] The existence of Suslin algebras is independent of the axioms of ZFC , and is equivalent to the existence of Suslin trees or Suslin lines . [ 2 ] This algebra -related article is a stub . You can help Wikipedia by expanding it .
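Spelled out, the definition combines four requirements on a Boolean algebra B. The display below states them in conventional notation as a convenience; the distributive law is shown in its (ω,∞) form, which is the form usually intended by "countably distributive" in this context (an interpretive choice, not a quotation from the cited sources).

```latex
% A Suslin algebra is a Boolean algebra B that is:
%   complete:  every subset of B has a least upper bound in B;
%   atomless:  \forall b\in B\,\bigl(b>0 \rightarrow \exists c\,(0<c<b)\bigr);
%   countably distributive, i.e.
\bigwedge_{n<\omega}\ \bigvee_{i\in I_{n}} b_{n,i}
   \;=\;\bigvee_{f\in\prod_{n}I_{n}}\ \bigwedge_{n<\omega} b_{n,f(n)}
   \qquad\text{for all families }(b_{n,i})\text{ in }B;
%   ccc: every antichain (set of pairwise disjoint non-zero elements) of B is countable.
```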
https://en.wikipedia.org/wiki/Suslin_algebra
Suspended animation is the slowing or stopping of biological function so that physiological capabilities are preserved. States of suspended animation are common in micro-organisms and some plant tissue, such as seeds. Many animals, including large ones, may undergo hibernation , and most plants have periods of dormancy . This article focuses primarily on the potential of large animals, especially humans, to undergo suspended animation. In animals, suspended animation may be either hypometabolic or ametabolic in nature. It may be induced by endogenous or natural means, or artificially by biological, chemical or physical means. In its natural form, it may be spontaneously reversible, as in the case of species demonstrating hypometabolic states of hibernation . When applied with therapeutic intent, as in deep hypothermic circulatory arrest (DHCA), technologically mediated revival is usually required. [ 1 ] [ 2 ] Suspended animation is understood as the pausing of life processes by external or internal means without terminating life itself. [ 3 ] Breathing, heartbeat and other involuntary functions may still occur, but they can only be detected by artificial means. [ 4 ] For this reason, this procedure has been associated with a lethargic state in nature when animals or plants appear, over a period, to be dead but then can wake up or prevail without suffering any harm. This has been termed in different contexts hibernation , dormancy or anabiosis (the latter in some aquatic invertebrates and plants in scarcity conditions). In July 2020, marine biologists reported that mainly aerobic microorganisms , in "quasi-suspended animation", were found in organically poor sediments , up to 101.5 million years old, 68.9 metres (226 feet) below the sea floor in the South Pacific Gyre (SPG) ("the deadest spot in the ocean"), and could be the longest-living life forms ever found. [ 5 ] [ 6 ] This condition of apparent death or interruption of vital signs in humans may be similar to a medical interpretation of suspended animation. It is only possible to recover signs of life if the brain and other vital organs suffer no cell deterioration, necrosis or molecular death, principally caused by oxygen deprivation or excess temperature (especially high temperature). [ 7 ] Cases of individuals returning from this apparent interruption of life after more than half an hour, two hours, eight hours, or longer (where these specific conditions for oxygen and temperature were met) have been reported and analysed in depth, but such cases are considered rare and unusual phenomena. The brain begins to die after five minutes without oxygen; nervous tissues die at an intermediate stage when "somatic death" occurs, while muscles die over the following one to two hours. [ 8 ] Successful resuscitation after apparent suspended animation has been obtained in instances such as anaesthesia, heat stroke, electrocution, narcotic poisoning, heart attack or cardiac arrest, shock, cerebral concussion and cholera, as well as in newborn infants. Supposedly, in suspended animation, a person technically would not die, as long as the minimum conditions were preserved in an environment extremely close to death and they could return to a normal living state. An example of such a case is Anna Bågenholm , a Swedish radiologist who reportedly survived 80 minutes under ice in a frozen lake in a state of cardiac arrest, with no brain damage, in 1999. 
[ 9 ] [ 10 ] Other cases of hypothermia where people survived without damage are: It has been suggested that bone lesions provide evidence of hibernation among the early human population whose remains have been retrieved at the Archaeological site of Atapuerca . In a paper published in the journal L'Anthropologie , researchers Juan-Luis Arsuaga and Antonis Bartsiokas point out that "primitive mammals and primates" like bush babies and lorises hibernate, which suggests that "the genetic basis and physiology for such a hypometabolism could be preserved in many mammalian species, including humans". [ 15 ] Since the 1970s, induced hypothermia has been performed for some open-heart surgeries as an alternative to heart-lung machines . Hypothermia, however, provides only a limited amount of time in which to operate and there is a risk of tissue and brain damage for prolonged periods. There are many research projects currently investigating how to achieve "induced hibernation " in humans. [ 16 ] [ 17 ] This ability to hibernate humans would be useful for a number of reasons, such as saving the lives of seriously ill or injured people by temporarily putting them in a state of hibernation until treatment can be given. The primary focus of research for human hibernation is to reach a state of torpor , defined as a gradual physiological inhibition to reduce oxygen demand and obtain energy conservation by hypometabolic behaviors altering biochemical processes. In previous studies, it was demonstrated that physiological and biochemical events could inhibit endogenous thermoregulation before the onset of hypothermia in a challenging process known as "estivation". This is indispensable to survive harsh environmental conditions, as seen in some amphibians and reptiles. [ 18 ] Lowering the temperature of a substance reduces its chemical activity by the Arrhenius equation . This includes life processes such as metabolism. Cryonics could eventually provide long-term suspended animation. [ 19 ] Emergency Preservation and Resuscitation (EPR) is a way to slow the bodily processes that would lead to death in cases of severe injury. [ 20 ] This involves lowering the body's temperature below 34 °C (93 °F), which is the current standard for therapeutic hypothermia . [ 20 ] In June 2005, scientists at the University of Pittsburgh 's Safar Center for Resuscitation Research announced they had managed to place dogs in suspended animation and bring them back to life, most of them without brain damage , by draining the blood out of the dogs' bodies and injecting a low temperature solution into their circulatory systems , which in turn keeps the bodies alive in stasis. After three hours of being clinically dead , the dogs' blood was returned to their circulatory systems, and the animals were revived by delivering an electric shock to their hearts. The heart started pumping the blood around the body, and the dogs were brought back to life. [ 21 ] On 20 January 2006, doctors from the Massachusetts General Hospital in Boston announced they had placed pigs in suspended animation with a similar technique. The pigs were anaesthetized and major blood loss was induced, along with simulated - via scalpel - severe injuries (e.g. a punctured aorta as might happen in a car accident or shooting). After the pigs lost about half their blood the remaining blood was replaced with a chilled saline solution. As the body temperature reached 10 °C (50 °F) the damaged blood vessels were repaired and the blood was returned. 
[ 22 ] The method was tested 200 times with a 90% success rate. [ 23 ] The laboratory of Mark Roth at the Fred Hutchinson Cancer Research Center and institutes such as Suspended Animation, Inc are trying to implement suspended animation as a medical procedure which involves the therapeutic induction to a complete and temporary systemic ischemia , directed to obtain a state of tolerance for the protection-preservation of the entire organism, this during a circulatory collapse "only by a limited period of one hour". The purpose is to avoid a serious injury, risk of brain damage or death, until the patient reaches specialized attention. [ 24 ]
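The article above notes that lowering temperature reduces chemical activity according to the Arrhenius equation. The short sketch below illustrates the size of that effect by computing the ratio of rate constants at two temperatures; the activation energy used is an arbitrary, typical value chosen for illustration, not a figure from the article or its sources.

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def arrhenius_rate_ratio(ea, t_from, t_to):
    """Ratio k(t_to)/k(t_from) for a process obeying k = A*exp(-Ea/(R*T)).

    The pre-exponential factor A cancels in the ratio, so only the activation
    energy `ea` (J/mol) and the two absolute temperatures (K) are needed.
    """
    return math.exp(-ea / R * (1.0 / t_to - 1.0 / t_from))

# Illustrative numbers only: 50 kJ/mol is a typical order of magnitude for a
# biochemical activation energy; it is not a figure taken from the article.
ea = 50_000.0
normal = 273.15 + 37.0    # normal body temperature
cooled = 273.15 + 10.0    # the deep-hypothermia temperature cited for the pig experiments
ratio = arrhenius_rate_ratio(ea, normal, cooled)
print(f"Relative reaction rate at 10 C vs 37 C: {ratio:.2f}")   # about 0.16
```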
https://en.wikipedia.org/wiki/Suspended_animation
A suspended structure is a structure which is supported by cables coming from beams or trusses which sit atop a concrete center column or core. The design allows the walls, roof and cantilevered floors to be supported entirely by cables and a center column. Another type of suspended structure, suspended catenary, uses outer-wall concrete columns angled away from the center with a cable system strung between them suspending a roof and outer wall structure. In this example there are no supports or visual obstructions inside the structure. Some of the first suspension structures were bridges . The first iron chain suspension bridge in the Western world was the Jacob's Creek Bridge (1801) in Westmoreland County, Pennsylvania , designed by inventor James Finley . [ 1 ] The Golden Gate Bridge in San Francisco, California, is another example of a suspension structure. Much like the suspended building structure, towers hold the weight and cables support the bridge deck. In the case of suspension bridges, there is "tensional force" transferred to the columns. [ 2 ] Minimal interior visual obstruction is a feature of all suspended structure buildings. The architectural method creates a visually striking open space in the interior of the structure. The load for the suspended structure is either a suspended catenary or is supported by truss-work carrying the weight of the building through a building core. [ 3 ] Suspended structures of the center column type utilize high-strength cable to suspend or support the floors. In some cases beams are cantilevered out from the concrete column at the center of the building. From the top of the center column, cables are used to support the roof system and the walls. Cables run down from the top of the tower to support floors. The external skeleton of the structure is a type of curtain wall which also is supported by cables. [ 4 ] Suspended structures often allow much light to enter, because of the unobstructed interior. [ 5 ] An example of a catenary -shaped suspended structure is the Eero Saarinen designed Dulles International Airport . The roof of the structure is made up of suspension cable which stretches across angled concrete columns. In the design of Dulles airport, the floor, the columns and the roof all work together to allow the walls and ceiling to float. [ 6 ] This leaves a large open space for the building. [ 7 ] The Yoyogi National Gymnasium in Tokyo is an example of a cable suspended structure. The roof system is a large span and the structure has been called "one of the most beautiful buildings in the 20th century", largely due to the suspended roof system. [ 8 ]
https://en.wikipedia.org/wiki/Suspended_structure
In chemistry , a suspension is a heterogeneous mixture of a fluid that contains solid particles sufficiently large for sedimentation . The particles may be visible to the naked eye , usually must be larger than one micrometer , and will eventually settle , although the mixture is only classified as a suspension when and while the particles have not settled out. A suspension is a heterogeneous mixture in which the solid particles do not dissolve , but get suspended throughout the bulk of the solvent , left floating around freely in the medium. [ 1 ] The internal phase (solid) is dispersed throughout the external phase (fluid) through mechanical agitation , with the use of certain excipients or suspending agents. An example of a suspension would be sand in water. The suspended particles are visible under a microscope and will settle over time if left undisturbed. This distinguishes a suspension from a colloid , in which the colloid particles are smaller and do not settle. [ 2 ] Colloids and suspensions are different from solution , in which the dissolved substance (solute) does not exist as a solid, and solvent and solute are homogeneously mixed. A suspension of liquid droplets or fine solid particles in a gas is called an aerosol . In the atmosphere , the suspended particles are called particulates and consist of fine dust and soot particles, sea salt , biogenic and volcanogenic sulfates , nitrates , and cloud droplets. Suspensions are classified on the basis of the dispersed phase and the dispersion medium , where the former is essentially solid while the latter may either be a solid, a liquid, or a gas. In modern chemical process industries, high-shear mixing technology has been used to create many novel suspensions. Suspensions are unstable from a thermodynamic point of view but can be kinetically stable over a longer period of time, which in turn can determine a suspension's shelf life. This time span needs to be measured in order to provide accurate information to the consumer and ensure the best product quality. "Dispersion stability refers to the ability of a dispersion to resist change in its properties over time." [ 3 ] Dispersion of solid particles in a liquid. Note : Definition based on that in ref. [ 4 ] [ 5 ] Multiple light scattering coupled with vertical scanning is the most widely used technique to monitor the dispersion state of a product, hence identifying and quantifying destabilization phenomena. [ 6 ] [ 7 ] [ 8 ] [ 9 ] It works on concentrated dispersions without dilution. When light is sent through the sample, it is back scattered by the particles. The backscattering intensity is directly proportional to the size and volume fraction of the dispersed phase. Therefore, local changes in concentration ( sedimentation ) and global changes in size ( flocculation , aggregation ) are detected and monitored. Of primary importance in the analysis of stability in particle suspensions is the value of the zeta potential exhibited by suspended solids. This parameter indicates the magnitude of interparticle electrostatic repulsion and is commonly analyzed to determine how the use of adsorbates and pH modification affect particle repulsion and suspension stabilization or destabilization. The kinetic process of destabilisation can be rather long (up to several months or even years for some products) and it is often required for the formulator to use further accelerating methods in order to reach reasonable development time for new product design. 
Thermal methods are the most commonly used and consist in increasing the temperature to accelerate destabilisation (while remaining below the critical temperatures of phase transition and degradation). Temperature affects not only the viscosity, but also the interfacial tension in the case of non-ionic surfactants and, more generally, the interaction forces inside the system. Storing a dispersion at high temperatures enables simulation of real-life conditions for a product (e.g. a tube of sunscreen cream in a car in the summer), but also accelerates destabilisation processes by up to 200 times. Mechanical methods of acceleration, including vibration, centrifugation and agitation, are sometimes used. They subject the product to different forces that push the particles together and promote film drainage. However, some emulsions would never coalesce in normal gravity, while they do under artificial gravity. [ 10 ] Moreover, segregation of different populations of particles has been highlighted when using centrifugation and vibration. [ 11 ] Common examples of suspensions include muddy water, flour stirred into water, chalk powder shaken in water, and paint.
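The article notes that the particles of a suspension will eventually settle if left undisturbed. One standard way to estimate how quickly small spherical particles settle is Stokes' law, which is not cited in the article itself; the sketch below applies it with illustrative values for a sand grain in water.

```python
def stokes_settling_velocity(diameter, particle_density,
                             fluid_density=1000.0, viscosity=1.0e-3, g=9.81):
    """Terminal settling velocity (m/s) of a small sphere via Stokes' law:

        v = g * d^2 * (rho_p - rho_f) / (18 * mu)

    Valid only at low Reynolds number, i.e. for particles small enough that
    the flow around them stays laminar. Defaults are for water at ~20 C.
    """
    return g * diameter ** 2 * (particle_density - fluid_density) / (18.0 * viscosity)

# Illustrative values: a 50-micrometre quartz grain (density ~2650 kg/m^3) in water.
d = 50e-6
v = stokes_settling_velocity(d, particle_density=2650.0)
print(f"Settling velocity: {v * 1000:.1f} mm/s")     # ~2.2 mm/s
print(f"Time to fall 10 cm: {0.10 / v:.0f} s")       # ~45 s

# A 0.5-micrometre colloidal particle of the same material settles ~10,000x
# more slowly, which is why colloids, unlike suspensions, do not visibly settle.
```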
https://en.wikipedia.org/wiki/Suspension_(chemistry)
Suspension is a construction passing from a map to a flow . Namely, let X {\displaystyle X} be a metric space , f : X → X {\displaystyle f:X\to X} be a continuous map and r : X → R + {\displaystyle r:X\to \mathbb {R} ^{+}} be a function (roof function or ceiling function) bounded away from 0. Consider the quotient space X r = { ( x , s ) ∈ X × R 0 + : 0 ≤ s ≤ r ( x ) } / ∼ {\displaystyle X_{r}=\{(x,s)\in X\times \mathbb {R} _{0}^{+}:0\leq s\leq r(x)\}/\sim } , where ( x , r ( x ) ) ∼ ( f ( x ) , 0 ) {\displaystyle (x,r(x))\sim (f(x),0)} for every x ∈ X {\displaystyle x\in X} . The suspension of ( X , f ) {\displaystyle (X,f)} with roof function r {\displaystyle r} is the semiflow [ 1 ] f t : X r → X r {\displaystyle f_{t}:X_{r}\to X_{r}} induced by the time translation T t : X × R → X × R , ( x , s ) ↦ ( x , s + t ) {\displaystyle T_{t}:X\times \mathbb {R} \to X\times \mathbb {R} ,(x,s)\mapsto (x,s+t)} . If r ( x ) ≡ 1 {\displaystyle r(x)\equiv 1} , then the quotient space is also called the mapping torus of ( X , f ) {\displaystyle (X,f)} .
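As a concrete illustration of the construction, the sketch below implements the time-t map of the suspension semiflow, representing a point of X_r as a pair (x, s) with 0 ≤ s < r(x); the particular base map and roof function are arbitrary choices made for this example, not taken from the article.

```python
import math

def suspension_flow(f, r, point, t):
    """Time-t map of the suspension semiflow of (X, f) under a roof function r.

    A point of the suspension space X_r is represented as a pair (x, s) with
    0 <= s < r(x). Flowing for time t >= 0 increases s, and f is applied each
    time the orbit crosses the roof, realising the identification
    (x, r(x)) ~ (f(x), 0).
    """
    x, s = point
    s += t
    while s >= r(x):      # cross the roof as many times as needed
        s -= r(x)
        x = f(x)
    return (x, s)

# Illustrative choices only (not from the article): an irrational circle
# rotation as the base map, with an angle-dependent roof bounded away from 0.
alpha = (math.sqrt(5) - 1) / 2
f = lambda x: (x + alpha) % 1.0
r = lambda x: 1.0 + 0.5 * math.cos(2 * math.pi * x)   # values in [0.5, 1.5]

p = (0.0, 0.0)
for t in (0.3, 1.0, 2.5):
    print(t, suspension_flow(f, r, p, t))
```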
https://en.wikipedia.org/wiki/Suspension_(dynamical_systems)
In mechanics , suspension is a system of components allowing a machine (normally a vehicle) to move smoothly with reduced shock . Types may include: Related concepts include:
https://en.wikipedia.org/wiki/Suspension_(mechanics)
In topology , a branch of mathematics , the suspension of a topological space X is intuitively obtained by stretching X into a cylinder and then collapsing both end faces to points. One views X as "suspended" between these end points. The suspension of X is denoted by SX [ 1 ] or susp( X ) . [ 2 ] : 76 There is a variant of the suspension for a pointed space , which is called the reduced suspension and denoted by Σ X . The "usual" suspension SX is sometimes called the unreduced suspension , unbased suspension , or free suspension of X , to distinguish it from Σ X. The (free) suspension S X {\displaystyle SX} of a topological space X {\displaystyle X} can be defined in several ways. 1. S X {\displaystyle SX} is the quotient space ( X × [ 0 , 1 ] ) / ( X × { 0 } ) / ( X × { 1 } ) . {\displaystyle (X\times [0,1])/(X\times \{0\}){\big /}(X\times \{1\}).} In other words, it can be constructed as follows: 2 . Another way to write this is: S X := v 0 ∪ p 0 ( X × [ 0 , 1 ] ) ∪ p 1 v 1 = lim → i ∈ { 0 , 1 } ⁡ ( ( X × [ 0 , 1 ] ) ↩ ( X × { i } ) → p i v i ) , {\displaystyle SX:=v_{0}\cup _{p_{0}}(X\times [0,1])\cup _{p_{1}}v_{1}\ =\ \varinjlim _{i\in \{0,1\}}{\bigl (}(X\times [0,1])\hookleftarrow (X\times \{i\})\xrightarrow {p_{i}} v_{i}{\bigr )},} Where v 0 , v 1 {\displaystyle v_{0},v_{1}} are two points , and for each i in {0,1}, p i {\displaystyle p_{i}} is the projection to the point v i {\displaystyle v_{i}} (a function that maps everything to v i {\displaystyle v_{i}} ). That means, the suspension S X {\displaystyle SX} is the result of constructing the cylinder X × [ 0 , 1 ] {\displaystyle X\times [0,1]} , and then attaching it by its faces, X × { 0 } {\displaystyle X\times \{0\}} and X × { 1 } {\displaystyle X\times \{1\}} , to the points v 0 , v 1 {\displaystyle v_{0},v_{1}} along the projections p i : ( X × { i } ) → v i {\displaystyle p_{i}:{\bigl (}X\times \{i\}{\bigr )}\to v_{i}} . 3. One can view S X {\displaystyle SX} as two cones on X, glued together at their base. 4. S X {\displaystyle SX} can also be defined as the join X ⋆ S 0 , {\displaystyle X\star S^{0},} where S 0 {\displaystyle S^{0}} is a discrete space with two points. [ 2 ] : 76 5 . In Homotopy type theory , S X {\displaystyle SX} be defined as a higher inductive type generated by S: S X {\displaystyle SX} N: S X {\displaystyle SX} M e r i d : ( X ) → ( N = S ) {\displaystyle Merid:{\bigl (}X{\bigr )}\to (N=S)} [ 3 ] In rough terms, S increases the dimension of a space by one: for example, it takes an n - sphere to an ( n + 1)-sphere for n ≥ 0. Given a continuous map f : X → Y , {\displaystyle f:X\rightarrow Y,} there is a continuous map S f : S X → S Y {\displaystyle Sf:SX\rightarrow SY} defined by S f ( [ x , t ] ) := [ f ( x ) , t ] , {\displaystyle Sf([x,t]):=[f(x),t],} where square brackets denote equivalence classes . This makes S {\displaystyle S} into a functor from the category of topological spaces to itself. If X is a pointed space with basepoint x 0 , there is a variation of the suspension which is sometimes more useful. The reduced suspension or based suspension Σ X of X is the quotient space: This is the equivalent to taking SX and collapsing the line ( x 0 × I ) joining the two ends to a single point. The basepoint of the pointed space Σ X is taken to be the equivalence class of ( x 0 , 0). One can show that the reduced suspension of X is homeomorphic to the smash product of X with the unit circle S 1 . 
For well-behaved spaces, such as CW complexes , the reduced suspension of X is homotopy equivalent to the unbased suspension. Σ gives rise to a functor from the category of pointed spaces to itself. An important property of this functor is that it is left adjoint to the functor Ω {\displaystyle \Omega } taking a pointed space X {\displaystyle X} to its loop space Ω X {\displaystyle \Omega X} . In other words, we have a natural isomorphism where X {\displaystyle X} and Y {\displaystyle Y} are pointed spaces and Maps ∗ {\displaystyle \operatorname {Maps} _{*}} stands for continuous maps that preserve basepoints. This adjunction can be understood geometrically, as follows: Σ X {\displaystyle \Sigma X} arises out of X {\displaystyle X} if a pointed circle is attached to every non-basepoint of X {\displaystyle X} , and the basepoints of all these circles are identified and glued to the basepoint of X {\displaystyle X} . Now, to specify a pointed map from Σ X {\displaystyle \Sigma X} to Y {\displaystyle Y} , we need to give pointed maps from each of these pointed circles to Y {\displaystyle Y} . This is to say we need to associate to each element of X {\displaystyle X} a loop in Y {\displaystyle Y} (an element of the loop space Ω Y {\displaystyle \Omega Y} ), and the trivial loop should be associated to the basepoint of X {\displaystyle X} : this is a pointed map from X {\displaystyle X} to Ω Y {\displaystyle \Omega Y} . (The continuity of all involved maps needs to be checked.) The adjunction is thus akin to currying , taking maps on cartesian products to their curried form, and is an example of Eckmann–Hilton duality . This adjunction is a special case of the adjunction explained in the article on smash products . The reduced suspension can be used to construct a homomorphism of homotopy groups , to which the Freudenthal suspension theorem applies. In homotopy theory , the phenomena which are preserved under suspension, in a suitable sense, make up stable homotopy theory . Some examples of suspensions are: [ 4 ] : 77, Exercise.1 Desuspension is an operation partially inverse to suspension. [ 5 ]
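The natural isomorphism referred to above, the behaviour of suspension on spheres, and the resulting shift in reduced homology (for well-behaved spaces such as CW complexes) can be written out explicitly; the displays below summarise these standard facts in conventional notation.

```latex
% Loop space / reduced suspension adjunction for pointed spaces X, Y:
\operatorname{Maps}_{*}(\Sigma X,\,Y)\;\cong\;\operatorname{Maps}_{*}(X,\,\Omega Y)

% Effect of (unreduced and reduced) suspension on spheres, for n >= 0:
S S^{n}\;\cong\;S^{n+1},\qquad \Sigma S^{n}\;\cong\;S^{n+1}

% Suspension isomorphism in reduced homology:
\widetilde{H}_{k}(X)\;\cong\;\widetilde{H}_{k+1}(\Sigma X)
```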
https://en.wikipedia.org/wiki/Suspension_(topology)
Suspension array technology (or SAT ) is a high throughput, large-scale, and multiplexed screening platform used in molecular biology . SAT has been widely applied to genomic and proteomic research, such as single nucleotide polymorphism (SNP) genotyping, genetic disease screening, gene expression profiling, screening drug discovery and clinical diagnosis. [ 1 ] [ 2 ] [ 3 ] SAT uses microsphere beads (5.6 um in diameter) to prepare arrays. SAT allows for the simultaneous testing of multiple gene variants through the use of these microsphere beads as each type of microsphere bead has a unique identification based on variations in optical properties, most common is fluorescent colour. As each colour and intensity of colour has a unique wavelength, beads can easily be differentiated based on their wavelength intensity. Microspheres are readily suspendable in solution and exhibit favorable kinetics during an assay. Similar to flat microarrays (e.g. DNA microarray ), an appropriate receptor molecule, such as DNA oligonucleotide probes, antibodies , or other proteins , attach themselves to the differently labeled microspheres. This produces thousands of microsphere array elements. Probe-target hybridization is usually detected by optically labeled targets, which determines the relative abundance of each target in the sample. [ 4 ] DNA is extracted from cells used to create test fragments. These test fragments are added to a solution containing a variety of microsphere beads. Each type of microsphere bead contains a known DNA probe with a unique fluorescent identity. Test fragments and probes on the microsphere beads are allowed to hybridize to each other. Once hybridized, the microsphere beads are sorted, usually using flow cytometry . This allows for the detection of each of the gene variants from the original sample. The resulting data collected will indicate the relative abundance of each hybridized sample to the microsphere. Since microsphere beads are easily suspended in solution and each microsphere retains its identity when hybridized to the test sample, a typical suspension array experiment can analyze a wide range of biological analysis in a single reaction, called "multiplexing". In general, each type of microsphere used in an array is individually prepared in bulk. For example, the commercially available microsphere arrays from Luminex xMAP technology uses a 10X10 element array. This array involves beads with red and infrared dyes, each with ten different intensities, to give a 100-element array. [ 4 ] Thus, the array size would increase exponentially if multiple dyes are used. For example, five different dyes with 10 different intensities per dye will give rise to 100,000 different array elements. When using different types of microspheres, SAT is capable of simultaneously testing multiple variables, such as DNA and proteins , in a given sample. This allows SAT to analyze variety of molecular targets during a single reaction. The common nucleic acid detection method includes direct DNA hybridization. The direct DNA hybridization approach is the simplest suspension array assay whereby 15 to 20 bp DNA oligonucleotides attached to microspheres are amplified using PCR . This is the optimized probe length as it minimizes the melting temperature variation among different probes during probe-target hybridization. 
[ 1 ] After one DNA oligoprobe of interest has been amplified, it can be used to create 100 different probes on 100 different sets of microspheres, each with the capability of capturing 100 potential targets (if using a 100-plex array). Similarly, target DNA samples are usually PCR amplified and labeled. [ 4 ] Hybridization between the capture probe and the target DNA is achieved by melting and annealing complementary target DNA sequences to their capture probes located on the microspheres. After washing to remove non-specific binding between sequences, only strongly paired probe-targets will remain hybridized. [ 1 ] For more details on this topic, see flow cytometry . Since the optical identity of each microsphere is known, the quantification of target samples hybridized to the microspheres can be achieved by comparing the relative intensity of target markers in one set of microspheres to target markers in another set of microspheres using flow cytometry . Microspheres can be sorted based on both their unique optical properties and their level of hybridization to the target sequence.
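The multiplexing arithmetic described above (number of dyes times intensity levels per dye) and the idea of decoding a bead from its dye intensities can be illustrated with a short sketch; the decoding function is a simplified nearest-level classifier written for illustration, not the actual gating algorithm of any commercial instrument.

```python
def num_bead_codes(n_dyes, levels_per_dye):
    """Number of distinguishable bead identities in a suspension array."""
    return levels_per_dye ** n_dyes

# The 10x10 scheme mentioned in the article: two classification dyes, ten intensities each.
print(num_bead_codes(2, 10))   # 100 array elements
print(num_bead_codes(5, 10))   # 100,000 array elements

def classify_bead(measured, levels):
    """Assign a measured dye-intensity vector to the nearest coded bead identity.

    `measured` is a tuple of fluorescence readings (one per classification dye),
    already normalised to the same scale as `levels`. This is a simplified
    nearest-level decoder for illustration only.
    """
    return tuple(min(levels, key=lambda lv: abs(lv - m)) for m in measured)

levels = list(range(1, 11))                 # ten allowed intensity levels per dye
print(classify_bead((3.2, 9.7), levels))    # -> (3, 10)
```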
https://en.wikipedia.org/wiki/Suspension_array_technology
A suspension bridge is a type of bridge in which the deck is hung below suspension cables on vertical suspenders. The first modern examples of this type of bridge were built in the early 1800s. [ 5 ] [ 6 ] Simple suspension bridges , which lack vertical suspenders, have a long history in many mountainous parts of the world. Besides the bridge type most commonly called suspension bridges, covered in this article, there are other types of suspension bridges . The type covered here has cables suspended between towers , with vertical suspender cables that transfer the live and dead loads of the deck below, upon which traffic crosses. This arrangement allows the deck to be level or to arc upward for additional clearance. Like other suspension bridge types, this type often is constructed without the use of falsework . The suspension cables must be anchored at each end of the bridge, since any load applied to the bridge is transformed into tension in these main cables. The main cables continue beyond the pillars to deck-level supports, and further continue to connections with anchors in the ground. The roadway is supported by vertical suspender cables or rods, called hangers. In some circumstances, the towers may sit on a bluff or canyon edge where the road may proceed directly to the main span. Otherwise, the bridge will typically have two smaller spans, running between either pair of pillars and the highway, which may be supported by suspender cables or their own trusswork . In cases where trusswork supports the spans, there will be very little arc in the outboard main cables. The earliest suspension bridges were ropes slung across a chasm, with a deck possibly at the same level or hung below the ropes such that the rope had a catenary shape. The Tibetan siddha and bridge-builder Thangtong Gyalpo originated the use of iron chains in his version of simple suspension bridges . In 1433, Gyalpo built eight bridges in eastern Bhutan . The last surviving chain-linked bridge of Gyalpo's was the Thangtong Gyalpo Bridge in Duksum en route to Trashi Yangtse , which was finally washed away in 2004. [ 10 ] Gyalpo's iron chain bridges did not include a suspended-deck bridge , which is the standard on all modern suspension bridges today. Instead, both the railing and the walking layer of Gyalpo's bridges used wires. The stress points that carried the screed were reinforced by the iron chains. Before the use of iron chains it is thought that Gyalpo used ropes from twisted willows or yak skins. [ 11 ] He may have also used tightly bound cloth. The Inca used rope bridges , documented as early as 1615. It is not known when they were first made. Queshuachaca is considered the last remaining Inca rope bridge and is rebuilt annually. The first iron chain suspension bridge in the Western world was the Jacob's Creek Bridge (1801) in Westmoreland County, Pennsylvania , designed by inventor James Finley . [ 12 ] Finley's bridge was the first to incorporate all of the necessary components of a modern suspension bridge, including a suspended deck which hung by trusses. Finley patented his design in 1808, and published it in the Philadelphia journal, The Port Folio , in 1810. [ 13 ] Early British chain bridges included the Dryburgh Abbey Bridge (1817) and 137 m Union Bridge (1820), with spans rapidly increasing to 176 m with the Menai Bridge (1826), "the first important modern suspension bridge". [ 14 ] The first chain bridge on the German speaking territories was the Chain Bridge in Nuremberg . 
The Sagar Iron Suspension Bridge with a 200 feet span (also termed Beose Bridge) was constructed near Sagar, India during 1828–1830 by Duncan Presgrave, Mint and Assay Master. [ 15 ] The Clifton Suspension Bridge (designed in 1831, completed in 1864 with a 214 m central span), is similar to the Sagar bridge. It is one of the longest of the parabolic arc chain type. The current Marlow suspension bridge was designed by William Tierney Clark and was built between 1829 and 1832, replacing a wooden bridge further downstream which collapsed in 1828. It is the only suspension bridge across the non-tidal Thames. The Széchenyi Chain Bridge , (designed in 1840, opened in 1849), spanning the River Danube in Budapest, was also designed by William Clark and it is a larger-scale version of Marlow Bridge. [ 16 ] One variation is the Thornewill and Warham 's Ferry Bridge in Burton-on-Trent , Staffordshire (1889), where the chains are not attached to abutments as is usual, but instead are attached to the main girders, which are thus in compression. Here, the chains are made from flat wrought iron plates, eight inches (203 mm) wide by an inch and a half (38 mm) thick, rivetted together. [ 17 ] The first wire-cable suspension bridge was the Spider Bridge at Falls of Schuylkill (1816), a modest and temporary footbridge built following the collapse of James Finley's nearby Chain Bridge at Falls of Schuylkill (1808). The footbridge's span was 124 m, although its deck was only 0.45 m wide. Development of wire-cable suspension bridges dates to the temporary simple suspension bridge at Annonay built by Marc Seguin and his brothers in 1822. It spanned only 18 m. [ 18 ] The first permanent wire cable suspension bridge was Guillaume Henri Dufour 's Saint Antoine Bridge in Geneva of 1823, with two 40 m spans. [ 18 ] The first with cables assembled in mid-air in the modern method was Joseph Chaley 's Grand Pont Suspendu in Fribourg , in 1834. [ 18 ] In the United States, the first major wire-cable suspension bridge was the Wire Bridge at Fairmount in Philadelphia, Pennsylvania. Designed by Charles Ellet Jr. and completed in 1842, it had a span of 109 m. Ellet's Niagara Falls suspension bridge (1847–48) was abandoned before completion. It was used as scaffolding for John A. Roebling 's double decker railroad and carriage bridge (1855). The Otto Beit Bridge (1938–1939) was the first modern suspension bridge outside the United States built with parallel wire cables. [ 19 ] Two towers/pillars, two suspension cables, four suspension cable anchors, multiple suspender cables, the bridge deck. [ 20 ] The main cables of a suspension bridge will form a catenary when hanging under their own weight only. When supporting the deck, the cables will instead form a parabola , assuming the weight of the cables is small compared to the weight of the deck. One can see the shape from the constant increase of the gradient of the cable with linear (deck) distance, this increase in gradient at each connection with the deck providing a net upward support force. Combined with the relatively simple constraints placed upon the actual deck, that makes the suspension bridge much simpler to design and analyze than a cable-stayed bridge in which the deck is in compression. Cable-stayed bridges and suspension bridges may appear to be similar, but are quite different in principle and in their construction. In suspension bridges, large main cables (normally two) hang between the towers and are anchored at each end to the ground. 
The main cables, which are free to move on bearings in the towers, bear the load of the bridge deck. Before the deck is installed, the cables are under tension from their own weight. Along the main cables smaller cables or rods connect to the bridge deck, which is lifted in sections. As this is done, the tension in the cables increases, as it does with the live load of traffic crossing the bridge. The tension on the main cables is transferred to the ground at the anchorages and by downwards compression on the towers. In cable-stayed bridges, the towers are the primary load-bearing structures that transmit the bridge loads to the ground. A cantilever approach is often used to support the bridge deck near the towers, but lengths further from them are supported by cables running directly to the towers. By design, all static horizontal forces of the cable-stayed bridge are balanced so that the supporting towers do not tend to tilt or slide and so must only resist horizontal forces from the live loads. In an underspanned suspension bridge, also called under-deck cable-stayed bridge, [ 21 ] the main cables hang entirely below the bridge deck, but are still anchored into the ground in a similar way to the conventional type. Very few bridges of this nature have been built, as the deck is inherently less stable than when suspended below the cables. Examples include the Pont des Bergues of 1834 designed by Guillaume Henri Dufour ; [ 18 ] James Smith's Micklewood Bridge; [ 22 ] and a proposal by Robert Stevenson for a bridge over the River Almond near Edinburgh . [ 22 ] Roebling's Delaware Aqueduct (begun 1847) consists of three sections supported by cables. The timber structure essentially hides the cables; and from a quick view, it is not immediately apparent that it is even a suspension bridge. The main suspension cables in older bridges were often made from a chain or linked bars, but modern bridge cables are made from multiple strands of wire. This not only adds strength but improves reliability (often called redundancy in engineering terms) because the failure of a few flawed strands in the hundreds used pose very little threat of failure, whereas a single bad link or eyebar can cause failure of an entire bridge. (The failure of a single eyebar was found to be the cause of the collapse of the Silver Bridge over the Ohio River .) Another reason is that as spans increased, engineers were unable to lift larger chains into position, whereas wire strand cables can be formulated one by one in mid-air from a temporary walkway. Poured sockets are used to make a high strength, permanent cable termination. They are created by inserting the suspender wire rope (at the bridge deck supports) into the narrow end of a conical cavity which is oriented in-line with the intended direction of strain. The individual wires are splayed out inside the cone or 'capel', and the cone is then filled with molten lead-antimony-tin (Pb80Sb15Sn5) solder. [ 23 ] Most suspension bridges have open truss structures to support the roadbed, particularly owing to the unfavorable effects of using plate girders, discovered from the Tacoma Narrows Bridge (1940) bridge collapse. In the 1960s, developments in bridge aerodynamics allowed the re-introduction of plate structures as shallow box girders , first seen on the Severn bridge , built 1961–1966. In the picture of the Yichang Bridge , note the very sharp entry edge and sloping undergirders in the suspension bridge shown. 
This enables this type of construction to be used without the danger of vortex shedding and consequent aeroelastic effects, such as those that destroyed the original Tacoma Narrows bridge. Three kinds of forces operate on any bridge: the dead load, the live load, and the dynamic load. Dead load refers to the weight of the bridge itself. Like any other structure, a bridge has a tendency to collapse simply because of the gravitational forces acting on the materials of which the bridge is made. Live load refers to traffic that moves across the bridge as well as normal environmental factors such as changes in temperature, precipitation, and winds. Dynamic load refers to environmental factors that go beyond normal weather conditions, factors such as sudden gusts of wind and earthquakes. All three factors must be taken into consideration when building a bridge. The principles of suspension used on a large scale also appear in contexts less dramatic than road or rail bridges. Light cable suspension may prove less expensive and seem more elegant for a cycle or footbridge than strong girder supports. An example of this is the Nescio Bridge in the Netherlands, and the Roebling designed 1904 Riegelsville suspension pedestrian bridge across the Delaware River in Pennsylvania. [ 24 ] The longest pedestrian suspension bridge, which spans the River Paiva, Arouca Geopark , Portugal, opened in April 2021. The 516 metres bridge hangs 175 meters above the river. [ 25 ] Where such a bridge spans a gap between two buildings, there is no need to construct towers, as the buildings can anchor the cables. Cable suspension may also be augmented by the inherent stiffness of a structure that has much in common with a tubular bridge . Typical suspension bridges are constructed using a sequence generally described as follows. Depending on length and size, construction may take anywhere between a year and a half (construction on the original Tacoma Narrows Bridge took only 19 months) up to as long as a decade (the Akashi-Kaikyō Bridge's construction began in May 1986 and was opened in May 1998 – a total of twelve years). Suspension bridges are typically ranked by the length of their main span. These are the ten bridges with the longest spans, followed by the length of the span and the year the bridge opened for traffic: (Chronological) Broughton Suspension Bridge (England) was an iron chain bridge built in 1826. One of Europe's first suspension bridges, it collapsed in 1831 due to mechanical resonance induced by troops marching in step. As a result of the incident, the British Army issued an order that troops should "break step" when crossing a bridge. Silver Bridge (USA) was an eyebar chain highway bridge, built in 1928, that collapsed in late 1967, killing forty-six people. The bridge had a low-redundancy design that was difficult to inspect. The collapse inspired legislation to ensure that older bridges were regularly inspected and maintained. Following the collapse a bridge of similar design was immediately closed and eventually demolished. A second similarly-designed bridge had been built with a higher margin of safety and remained in service until 1991. The Tacoma Narrows Bridge , (USA), 1940, was vulnerable to structural vibration in sustained and moderately strong winds due to its plate-girder deck structure. Wind caused a phenomenon called aeroelastic fluttering that led to its collapse only months after completion. The collapse was captured on film. 
There were no human deaths in the collapse; several drivers escaped their cars on foot and reached the anchorages before the span dropped. Yarmouth suspension bridge (England) was built in 1829 and collapsed in 1845, killing 79 people. Peace River Suspension Bridge (Canada), which was completed in 1943, failed in October 1957 when the soil supporting the north anchor gave way; the entire bridge subsequently collapsed. Kutai Kartanegara Bridge (Indonesia) over the Mahakam River , located in Kutai Kartanegara Regency , East Kalimantan province, on the Indonesian island of Borneo , was begun in 1995, completed in 2001 and collapsed in 2011. Dozens of vehicles on the bridge fell into the Mahakam River . As a result of the incident, 24 people died and dozens of others were injured and treated at the Aji Muhammad Parikesit Regional Hospital; 12 people were reported missing, 31 were seriously injured, and 8 had minor injuries. Research findings indicate that the collapse was largely caused by the failure of a vertical hanger clamp, with poor maintenance, fatigue in the cable hanger materials, material quality, and bridge loads that exceeded the permitted vehicle capacity identified as contributing factors. In 2013, reconstruction of the Kutai Kartanegara Bridge began at the same location; it was completed in 2015 with a through arch bridge design. On 30 October 2022, Jhulto Pul , a pedestrian suspension bridge over the Machchhu River in the city of Morbi, Gujarat, India, collapsed, leading to the deaths of at least 141 people.
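The article explains that a main cable hanging under its own weight forms a catenary, while a cable carrying a deck whose weight dominates forms, to a good approximation, a parabola. The sketch below compares the two profiles for one set of illustrative proportions (a hypothetical 1000 m span with 100 m of sag, not figures from the article) to show how close they are in practice.

```python
import math

def parabola_profile(x, span, sag):
    """Height of the cable above its lowest (mid-span) point when the shape is
    dominated by a uniformly distributed deck load (the parabolic idealisation).
    x is measured from mid-span; at x = +/- span/2 the cable meets the tower tops."""
    return sag * (2.0 * x / span) ** 2

def catenary_profile(x, a):
    """Height above the lowest point of a cable hanging under its own weight only:
    y = a*(cosh(x/a) - 1), where a = H/w (horizontal tension / weight per unit length)."""
    return a * (math.cosh(x / a) - 1.0)

# Illustrative numbers only (not from the article): a 1000 m main span with 100 m of sag.
span, sag = 1000.0, 100.0

# Choose the catenary parameter `a` so that both curves pass through the tower tops,
# by solving a*(cosh(span/(2a)) - 1) = sag with simple bisection.
lo, hi = 10.0, 10_000.0
for _ in range(80):
    a = 0.5 * (lo + hi)
    if catenary_profile(span / 2.0, a) > sag:
        lo = a      # cable sags too much -> flatten it by increasing a
    else:
        hi = a

for x in (0.0, 125.0, 250.0, 375.0, 500.0):
    print(f"x={x:6.1f} m  parabola={parabola_profile(x, span, sag):7.2f} m"
          f"  catenary={catenary_profile(x, a):7.2f} m")
# For these proportions the two profiles differ by well under a metre,
# which is why the parabola is the usual design approximation once the
# deck load dominates the cable's self-weight.
```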
https://en.wikipedia.org/wiki/Suspension_bridge
A cell suspension or suspension culture is a type of cell culture in which single cells or small aggregates of cells are allowed to function and multiply in an agitated growth medium , thus forming a suspension . Suspension culture is one of the two classical types of cell culture, the other being adherent culture . The history of suspension cell culture closely aligns with the history of cell culture overall, but differs in maintenance methods and commercial applications. The cells themselves can either be derived from homogenized tissue or from heterogenous cell solutions. Suspension cell culture is commonly used to culture nonadhesive cell lines like hematopoietic cells, plant cells , and insect cells . [ 1 ] While some cell lines are cultured in suspension, the majority of commercially available mammalian cell lines are adherent. [ 2 ] [ 3 ] Suspension cell cultures must be agitated to maintain cells in suspension, and may require specialized equipment (e.g. magnetic stir plate, orbital shakers, incubators) and flasks (e.g. culture flasks, spinner flasks, shaker flasks). [ 4 ] These cultures need to be maintained with nutrient containing media and cultured in a specific cell density range to avoid cell death. [ 5 ] The history of suspension cell culture is closely tied to the overall history of cell and tissue culture. In 1885, Wilhelm Roux laid the groundwork for future tissue culture, by developing a saline buffer that was used to maintain living cells (chicken embryos) for a few days. [ 6 ] Ross Granville Harrison in 1907 then developed in vitro cell culture techniques, including modifying the hanging drop technique for nerve cells and introducing aseptic technique to the culture process. [ 7 ] Later in 1910, Montrose Thomas Burrows adapted Harrison's technique and collaborated with Alexis Carrel to establish multiple tissue cultures that could be maintained in vitro using fresh plasma combined with saline solutions. [ 8 ] Carrel went on to develop the first known cell line, a line derived from chicken embryo heart which was maintained continuously for 34 years. [ 9 ] Though the "immortality" of the cell line was later challenged by Leonard Hayflick , this was a major breakthrough and inspired others to pursue creating other cell lines. [ 10 ] Notably in 1952, George Otto Gay and his assistant Mary Kubicek cultured the first human derived immortalized cell line - HeLa . While the other cell lines were adherent, HeLa cells were able to be maintained in suspension. [ 11 ] All primary cells (cells derived directly from a subject) must first be removed from a subject, isolated (using digestion enzymes), and suspended in media before being cultured. [ 1 ] However, this does not mean that these cells are compatible with suspension culture, as most mammalian cells are adherent and need to attach to a surface to divide. White blood cells can be taken from a subject and cultured in suspension, since they naturally exist in suspension in blood. [ 12 ] Adhesion of white blood cells in vivo is typically the result of an inflammatory immune response and requires specific cell-cell interactions that should not occur in a suspension of a single type of white blood cell. [ 13 ] Immortalized mammalian cell lines (cells that are able to replicate indefinitely), plant cells, and insect cells can be obtained cryopreserved from manufacturers and used to start a suspension culture. 
[ 14 ] To start a culture from cryopreserved cells, the cells must first be thawed and added to a flask or bioreactor containing media. Depending upon the cryoprotectant agent, the cells might need to be washed to avoid deleterious effects from the agent. [ 3 ] Suspension cell cultures are similar to adherent cultures in a number of ways. Both require specialized nutrient containing media, containers that allow for gas transfer, aseptic conditions to avoid contamination, and frequent passaging to prevent overcrowding of cells. However, even within these similarities there are a few key differences between these culture methods. For example, though both adherent and suspension cell cultures can be maintained in standard flasks such as the T-75 tissue culture flask, suspension cultures need to be agitated to avoid settling to the bottom of the flask. While adherent cell cultures can be maintained in flat flasks with a lot of surface area (to promote cell adhesion), suspension cultures require agitation otherwise the cells will fall to the bottom of a flask, greatly impacting their access to nutrients and oxygen, eventually resulting in cell death. [ 4 ] For this reason, specialized flasks (including the spinner flask and shaker flask, discussed below) have been developed to agitate media and keep the cells in suspension. However, the agitation of media subjects the cells to shear forces which can stress the cells and negatively impact growth. Although both adherent and suspension cell cultures require media, media used in suspension culture may contain a surfactant to protect cells from shear forces in addition to the amino acids, vitamins and salt solution contained in culture media such as DMEM . [ 5 ] Spinner flasks, which are used for suspension cultures, contain a magnetic spinner bar which circulates the media throughout the flask and keeps cells in suspension. [ 15 ] Spinner flasks contain one central capped opening flanked by two protruding arms which are also capped and allow for additional gas exchange. The magnetic spinner bar itself is typically suspended from a rod attached to the central cap so that it maximizes media circulation in the cell suspension. When culturing cells, the spinner flask containing cells is placed on a magnetic stir plate, inside of an incubator and the spinner parameters need to be adjusted carefully to avoid killing cells with shear forces. [ 16 ] Shaker flasks are also used for suspension cultures, and appear similar to typical Erlenmeyer flasks but have a semi-permeable lid to allow for gas exchange. [ 17 ] During suspension cell culturing, shaker flasks are loaded with cells and the appropriate media before they are placed on an orbital shaker. To optimize cell culture proliferation, the revolutions per minute of the orbital shaker must be adjusted within an acceptable range depending on the cells and media used. The media must be allowed to stir, but cannot disturb the cells too much causing them excessive stress. Shaker flasks are often used for fermentation cultures with microorganisms such as yeast. [ 18 ] Passaging, or subculturing, suspension cell cultures is more straightforward than passaging adherent cells. While adherent cells require initial processing with a digestion enzyme, to remove them from the culture flask surface, suspension cells are floating freely in media. 
[ 19 ] A sample from the culture can then be taken and analyzed to determine the ratio of living to dead cells (using a stain such as trypan blue) and the total concentration of cells in the flask (using a hemocytometer). Using this information, a portion of the current suspension culture is transferred to a fresh flask and supplemented with media. The passage number should be recorded, particularly if the cells are primary and not immortalized, as primary cell lines will eventually undergo senescence. [ 20 ] Suspension cells are often passaged outright without changing the media. In order to change the media for a suspension culture, all cells from the current container should be removed and centrifuged into a pellet. The excess media is then removed from the centrifuged sample, and the flask is refilled with fresh media before the cells are re-added to the flask. Media changes and subculturing are important for maintaining cell lines, since cells consume nutrients in the media as they expand. Cells will also grow exponentially until the environment becomes inhospitable due to lack of nutrients, extreme pH, or lack of space to grow. Unlike adherent cultures, which are limited by the surface area provided for them to expand on, suspension cultures are limited by the volume of their container. This means that suspension cells can exist in much larger quantities in a given flask and are preferred when using cells to make products such as proteins, antibodies, or metabolites, or simply to produce a high volume of cells. However, there are far fewer mammalian suspension cell lines than mammalian adherent cell lines. Most large-scale suspension culture involves non-mammalian cells and takes place in bioreactors.
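The passaging workflow described above (a trypan blue viability count on a hemocytometer, followed by dilution of part of the culture into fresh media) comes down to a few simple calculations. The following Python sketch illustrates one way to perform them; the 1e4 factor is the standard volume conversion for a Neubauer counting chamber, while the example counts, target seeding density and flask volume are illustrative assumptions rather than values from the article.

def viable_cell_density(live_count, dead_count, squares_counted, dilution_factor=2):
    """Estimate viable cells/mL from a hemocytometer count.
    Each large Neubauer square holds 0.1 uL, so the mean count per square
    times 1e4 gives cells/mL, corrected for any dilution in trypan blue."""
    mean_live_per_square = live_count / squares_counted
    density = mean_live_per_square * 1e4 * dilution_factor  # viable cells/mL
    total = live_count + dead_count
    viability = live_count / total if total else 0.0
    return density, viability

def volume_to_seed(current_density, target_density, final_volume_ml):
    """Volume (mL) of the old culture needed to seed a new flask at the target
    density, with the remainder made up with fresh media."""
    transfer_ml = target_density * final_volume_ml / current_density
    return transfer_ml, final_volume_ml - transfer_ml

# Example: 200 live and 20 dead cells counted over 4 squares, diluted 1:2 in trypan blue
density, viability = viable_cell_density(200, 20, 4)
culture_ml, media_ml = volume_to_seed(density, target_density=3e5, final_volume_ml=30)
print(f"{density:.2e} viable cells/mL, {viability:.0%} viable")
print(f"Transfer {culture_ml:.1f} mL of culture and add {media_ml:.1f} mL of fresh media")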
https://en.wikipedia.org/wiki/Suspension_culture
In algebra, more specifically in algebraic K-theory, the suspension $\Sigma R$ of a ring $R$ is given by [ 1 ] $\Sigma(R) = C(R)/M(R)$, where $C(R)$ is the ring of all infinite matrices with entries in $R$ having only finitely many nonzero elements in each row or column, and $M(R)$ is its ideal of matrices having only finitely many nonzero elements. It is an analog of suspension in topology. One then has $K_i(R) \simeq K_{i+1}(\Sigma R)$. This algebra-related article is a stub.
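Because the displayed isomorphism holds for any ring, it can be applied again to the suspension itself. The LaTeX fragment below restates the definition and spells out that immediate consequence; the iterated form is a routine observation added here for illustration, not a statement taken from the article.

\[
  \Sigma(R) = C(R)/M(R), \qquad K_i(R) \simeq K_{i+1}(\Sigma R).
\]
% Applying the isomorphism to \Sigma R, then to \Sigma^2 R, and so on gives, for all n >= 0:
\[
  K_i(R) \simeq K_{i+n}(\Sigma^n R),
\]
where $\Sigma^n R$ denotes the $n$-fold suspension.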
https://en.wikipedia.org/wiki/Suspension_of_a_ring
Suspensory behaviour is a form of arboreal locomotion or a feeding behavior that involves hanging or suspension of the body below or among tree branches. [ 1 ] This behavior enables faster travel while reducing path lengths to cover more ground when travelling, searching for food and avoiding predators. [ 2 ] [ 3 ] Different types of suspensory behaviour include brachiation, climbing, and bridging. These mechanisms allow larger species to distribute their weight among smaller branches rather than balancing above these weak supports. [ 1 ] Primates and sloths are most commonly seen using these behaviours; however, other animals such as bats may be seen hanging below surfaces to obtain food or when resting. [ 1 ] [ 4 ] Animals that exhibit suspensory behaviour share similar mechanisms for performing it, often involving many parts of the body such as the trunk, the shoulders and other features of the upper body. [ 5 ] Typically, these animals have an overall dorso-ventral flattening, a shortened lumbar region and a mediolateral expansion of the rib cage, causing the scapula to be repositioned dorsally and the humeral articulation to be oriented more cranially than the usual lateral placement shown in quadrupedal animals. [ 6 ] The scapula is also longer, giving these animals a particular arm and shoulder shape. [ 5 ] Combined, these morphologies allow the infraspinatus muscle to be repositioned, creating more resistance to trans-articular tensile stress when suspending below a branch. These animals also have longer clavicles, creating a greater projection of the shoulder, which increases the ability to move when the forearm is raised above the head. To help with supporting their weight, the forelimbs are elongated. The humerus is longer as well, which helps with the action of the deltoid muscles at the shoulder joint when the arm moves away from the body. [ 5 ] The triceps brachii is small, and there is a shorter distance to the elbow joint and a shorter olecranon process, which allows for greater elbow extension. Animals, especially primates, have many different ways of positioning themselves during suspensory behaviour, and these positions require different bones and muscles. Below is a list of different positions and their mechanisms. [ 7 ] Roosting is a vertical upside-down behaviour seen in bats which involves the use of the feet to grasp a surface. [ 8 ] The hind limbs are very important as they provide most of the strength to support the bat. [ 8 ] The forelimbs can be used as well, so that all four limbs support the animal. [ 8 ] The head and neck are usually kept at a 90° or 180° angle. [ 9 ] Suspensory locomotion aids in reducing path lengths and covering longer distances by moving faster through the branches and trees above. [ 2 ] The movements involved in suspensory behavior are seen most often among monkeys. The swinging motion of grabbing branch after branch with alternating hands, or launching the body from one support to another and briefly losing contact with the support, is the most common form of locomotion among suspensory animals. [ 10 ] Some animals, such as the platyrrhines, use their tails for traveling and usually never use their forelimbs for transportation, while some species use both their tails and forelimbs. [ 10 ] Suspensory behavior is advantageous for avoiding predators. The quick motions and the ability to escape high above the ground enable an avoidance strategy that aids survival.
[ 3 ] While this type of locomotion can be beneficial there can be some consequences when dealing with extreme heights as vigorously moving through the trees allows for more opportunity for injury. [ 3 ] The easiest way for animals to avoid this consequence is using their abilities to focus on uninterrupted travel, accuracy and avoiding alternative routes. [ 3 ] Brachiation involves the animal swinging from branch to branch in a sequence motion above the ground in a canopy of trees. [ 5 ] [ 10 ] Typically these movements involve both arms without the aid of the legs or tail. [ 10 ] Tail and hind limb suspension can be used in different situations like feeding or escaping predators during drastic situations, however the use of the arms is preferred for this type of movement. [ 10 ] Climbing consists of moving up or down a vertical surface using all four arms and legs to help move the body upward or downward. [ 11 ] There are many different ways in which an animal can climb such as using alternating arms and legs, climbing sideways, fire-pole slides and head or bottom first decline. [ 11 ] Vertical climbing is the most costly form of locomotion as the animal must defy gravity and move up the tree . [ 12 ] This is particularly harder for animals with a larger body mass , as carrying their entire weight becomes more difficult with size. [ 12 ] Also involved with climbing is a "pulling up" motion in which the animal will pull itself above a branch using both of its arms and the hind limbs launch over the branch using a swinging motion. [ 11 ] Animals use this type of behavior when crossing between trees and other surfaces. [ 2 ] This movement requires the use of the hind limbs to leap across extended areas. [ 11 ] Small animals have an easier time leaping between gaps, while larger animals are more cautious due to their weight and typically swing from branch to branch instead. [ 2 ] Suspensory behaviour is very important for animals in regards to feeding. It has been reported that suspensory movements make up approximately 25% of all feeding strategies shown in primates . [ 13 ] Suspension helps them reach fruits and other vegetation that might be difficult to obtain on foot, while allowing them to cover a large distance at a greater speed. [ 2 ] [ 13 ] Often in arboreal regions, flowers , fruits and other plants are located on small terminal branches and suspension enables animals to access this food while saving time and energy. [ 2 ] By suspending below the branch they avoid a greater chance at the branch breaking and are able to keep a steady balance . [ 14 ] Hanging by the tail is very common when foraging which permits the use of the hands and arms to not only grab food but to catch themselves if they were to slip or fall. [ 14 ] Suspension allows for fast travel, which is helpful when collecting food as well. Speed allows animals to minimize competition while avoiding predators to ensure they grab as much food as they can in a short period of time. [ 2 ] If an animal is in a high tree, they often eat their food then and there to avoid injury and predators. Quadrupedalism and bipedalism combined with suspensory mechanisms are crucial for providing support during feeding so the animal does not fall and risk losing the food, or risking its life. [ 12 ]
https://en.wikipedia.org/wiki/Suspensory_behavior
The Sussex Manifesto was a report on science and technology for development written at the request of the United Nations and published in 1970. In the late 1960s the United Nations asked for recommendations on science and technology for development [ 1 ] from a team of academics at the Institute of Development Studies (IDS) and the Science Policy Research Unit (SPRU) at the University of Sussex , UK. This team became known as the Sussex Group and their report, Science and Technology to Developing Countries during the Second Development Decade , became known as the Sussex Manifesto . [ 2 ] The Sussex Manifesto was intended as the introductory chapter to the UN World Plan of Action on Science and Technology for Development . But the solutions presented in the Manifesto were deemed too radical to be used for that purpose. It was instead published in 1970 as an annex in Science and Technology for Development: Proposals for the Second United Nations Development Decade , a UN report by the Advisory Committee on the Application of Science and Technology to Development (ACAST). [ 3 ] The Sussex Manifesto helped raise awareness of science and technology for development [ 4 ] in UN circles [ 5 ] influenced the design of development institutions and was used for teaching courses in both North and South universities. The Sussex Group were Hans Singer (Chairman), Charles Cooper (Secretary), R.C. Desai, Christopher Freeman , Oscar Gish, Stephen Hill and Geoffrey Oldham. The Sussex Manifesto was originally published as the ‘Draft Introductory Statement for the World Plan of Action for the Application of Science and Technology to Development’, prepared by the ‘Sussex Group’, Annex II in 'Science and Technology for Development: Proposals for the Second Development Decade', United Nations, Dept of Economic and Social Affairs, New York, 1970, Document ST/ECA/133, and reprinted as 'The Sussex Manifesto: Science and Technology to Developing Countries during the Second Development Decade', IDS Reprints 101. In 2008 one of the authors of the original report Professor Geoff Oldham gave a seminar [ 6 ] [ 7 ] at the STEPS Centre – a research centre and policy engagement based at IDS and SPRU. Following this event, the STEPS Centre decided to create a new manifesto in association with its partners around the world and Professor Oldham. The new publication, Innovation, Sustainability, Development: A New Manifesto , [ 8 ] was launched in 2010, forty years after the original. The New Manifesto has also been translated into Chinese, French, Portuguese and Spanish. [ 9 ] The STEPS Centre is funded by the Economic and Social Research Council (ESRC).
https://en.wikipedia.org/wiki/Sussex_Manifesto
Sustainable construction aims to reduce the negative health and environmental impacts caused by the construction process and by the operation and use of buildings and the built environment . [ 1 ] It can be seen as the construction industry 's contribution to more sustainable development . Precise definitions vary from place to place, and are constantly evolving to encompass varying approaches and priorities. More comprehensively, sustainability can be considered from three dimension of planet, people and profit across the entire construction supply chain . [ 2 ] Key concepts include the protection of the natural environment , choice of non-toxic materials, reduction and reuse of resources, waste minimization , and the use of life-cycle cost analysis . One definition of "Sustainable Construction" is the introduction of healthy living and workplace environments, the use of materials that are sustainable, durable and by extension environmentally friendly . [ citation needed ] In the United States, the Environmental Protection Agency (EPA) defines sustainable construction as "the practice of creating structures and using processes that are environmentally responsible and resource-efficient throughout a building's life-cycle from siting to design, construction , operation, maintenance, renovation and deconstruction ." [ 3 ] Agyekum-Mensah et al . note that some definitions of sustainable construction and development "seem to be vague" and they question use of any definition of "sustainability" which suggests that sustainable or acceptable activities can be continued indefinitely, because construction projects do not run on indefinitely. [ 4 ] In the 1970s , awareness of sustainability emerged, [ 5 ] amidst oil crises . At that time, people began to realize the necessity and urgency of energy conservation , which is to utilize energy in an efficient way and find alternatives to contemporary sources of energy . Additionally, shortages of other natural resources at that time, such as water, also raised public attention to the importance of sustainability and conservation. [ 5 ] In the late 1960s, the construction industry began to explore ecological approaches to construction. [ 6 ] The concept of sustainable construction was born out of sustainable development discourse. [ 7 ] The term 'sustainable development' was first coined in the Brundtland report of 1987, defined as the ability to meet the needs of all people in the present without compromising the ability of future generations to meet their own. [ 7 ] This report defined a turning point in sustainability discourse since it deviated from the earlier limits-to-growth perspective to focus more on achieving social and economic milestones, and their connection to environmental goals, particularly in developing countries. [ 7 ] [ 8 ] Sustainable development interconnects three socially concerned systems—environment, society and economy—a system seeking to achieve a range of goals as defined by the United Nations Development Program. [ 9 ] The introduction of sustainable development into the environmental/economical discourse served as a middle ground for the limits-to-growth theory, and earlier pro-growth theories that argued maintaining economic growth would not hinder long-term sustainability. 
[ 7 ] As a result, scholars have faulted sustainable development for being too value-laden since applications of its definition vary heavily depending on relevant stakeholders, allowing it to be used in support of both pro-growth and pro-limitation perspectives of development arguments despite their vastly different implications. [ 7 ] In order for the concept to be effective in real-life applications, several specified frameworks for its use in various fields and industries, including sustainable construction, were developed. The construction industry's response to sustainable development is sustainable construction. [ 1 ] In 1994, the definition of sustainable construction was given by Professor Charles J. Kibert during the Final Session of the First International Conference of CIB TG 16 on Sustainable Construction as "the creation and responsible management of a healthy built environment based on resource efficient and ecological principles". [ 10 ] Notably, the traditional concerns in construction (performance, quality, cost) are replaced in sustainable construction by resource depletion , environmental degradation and healthy environment. [ 11 ] Sustainable construction addresses these criteria through the following principles set by the conference: [ 11 ] Additional definitions and frameworks for sustainable construction practices were more rigorously defined in the 1999 Agenda 21 on Sustainable Construction, published by the International Council for Research and Innovation in Building and Construction (CIB). [ 12 ] The same council also published an additional version of the agenda for sustainable construction in developing countries in 2001 to counteract biases present in the original report as a result of most contributors being from the developed world. [ 12 ] Since 1994, much progress to sustainable construction has been made all over the world. According to a 2015 Green Building Economic Impact Study released by U.S. Green Building Council (USGBC), the green building industry contributes more than $134.3 billion in labor income to working Americans. The study also found that green construction's growth rate is rapidly outpacing that of conventional construction and will continue to rise. [ 13 ] According to United Nations Environment Programme (UNEP), " the increased construction activities and urbanization will increase waste which will eventually destroy natural resources and wild life habitats over 70% of land surface from now up to 2032. " [ 14 ] Moreover, construction uses around half of natural resources that humans consume. Production and transport of building materials consumes 25 - 50 percent of all energy used (depending on the country considered). [ 15 ] Taking UK as an example, the construction industry counts for 47% of CO 2 emissions , of which manufacturing of construction products and materials accounts for the largest amount within the process of construction. [ 5 ] By implementing sustainable construction, benefits such as lower cost, environmental protection, sustainability promotion, and expansion of the market may be achieved during the construction phase. As mentioned in ConstructionExecutive , construction waste accounts for 34.7% of all waste in Europe. Implementing sustainability in construction would cut down on wasted materials substantially. 
[ 16 ] Although sustainable construction might result in higher investment at the construction stage of projects, competition between contractors, driven by the promotion of sustainability in the industry, would encourage the application of sustainable construction technologies, ultimately decreasing construction costs. Meanwhile, the encouraged cooperation of designers and engineers would bring better design into the construction phase. [ 17 ] Using more sustainable resources reduces the cost of construction, as less water and energy are used; with fewer resources being used in projects, disposal costs are also lower because less waste is produced. [ 18 ] By adopting sustainable construction, contractors can draw up a construction plan or sustainable site plan to minimize the environmental impact of the project. According to a study that took place in Sri Lanka, [ 19 ] considerations of sustainability may influence the contractor to choose more sustainable, locally sourced products and materials, and to minimize the amount of waste and water pollution. Another example comes from a case study in Singapore, [ 20 ] in which the construction team implemented rainwater recycling and waste water treatment systems that helped achieve a lower environmental impact. [ 21 ] Contractors delivering projects in a sustainable way in collaboration with owners, treating sustainability as a key performance indicator for clients from day one, "sends a clear message to the industry, 'sustainability is important to us'", and this, especially within the government and public sectors, can significantly drive change in the way projects are undertaken, as well as up-skilling the industry to meet this growing demand. [ 22 ] There is also potential to expand the market for sustainable concepts or products. According to a report published by USGBC, "The global green building market grew in 2013 to $260 billion, including an estimated 20 percent of all new U.S. commercial real estate construction." [ 19 ] Globally, construction industries are attempting to implement sustainable construction principles. Below are some examples of successful implementations of sustainable construction promotion on a national level. Also included are new technologies that could improve the application of sustainable construction. The Government of Singapore has developed a Sustainable Construction Master Plan in the hope of transforming the industry's development path from focusing only on the traditional concerns of "cost, time, and quality" for construction products and materials toward reducing natural resource consumption and minimizing waste on site. With the growing urgency of the climate crisis, it is essential to keep in mind the importance of reducing energy consumption and toxic waste while moving forward with sustainable architectural plans. [ 25 ] [ 20 ] The development of efficiency codes has prompted the development of new construction technologies and methods, many pioneered by academic departments of construction management that seek to improve efficiency and performance while reducing construction waste. New techniques of building construction are being researched, made possible by advances in 3D printing technology.
In a form of additive building construction, similar to the additive manufacturing techniques used for manufactured parts, building printing is making it possible to flexibly construct small commercial buildings and private habitations in around 20 hours, with built-in plumbing and electrical facilities, in one continuous build, using large 3D printers. Working versions of 3D-printing building technology were already printing 2 metres (6 ft 7 in) of building material per hour as of January 2013, with next-generation printers capable of 3.5 metres (11 ft) per hour, sufficient to complete a building in a week. [ 26 ] Dutch architect Janjaap Ruijssenaars's performative architecture 3D-printed building was scheduled to be built in 2014. [ 27 ] Over the years, the construction industry has seen a trend in IT adoption, an area in which it long lagged behind other fields such as the manufacturing and healthcare industries. Nowadays, construction is starting to see the full potential of technological advancements, moving on to paperless construction, using the power of automation and adopting BIM, the internet of things, cloud storage and co-working, mobile apps, the implementation of surveying drones, and more. [ 28 ] [ 29 ] Within the current trend of sustainable construction, the recent movements of New Urbanism and New Classical architecture promote a sustainable approach towards construction that appreciates and develops smart growth, architectural tradition and classical design. [ 30 ] This is in contrast to modernist and short-lived globally uniform architecture, as well as opposing solitary housing estates and suburban sprawl. [ 31 ] Both trends started in the 1980s. Timber is being introduced as a feasible material for skyscrapers (nicknamed "plyscrapers") thanks to new developments incorporating engineered timber, collectively known as "mass timber", which includes cross-laminated timber. [ 32 ] Industrial hemp is becoming increasingly recognised as an eco-friendly building material. It can be used in a range of ways, including as an alternative to concrete (known as 'hempcrete'), flooring, and insulation. King Charles is reported to have used hemp to insulate an eco-home. In December 2022, the United Nations Conference on Trade and Development (UNCTAD) emphasised hemp's versatility and sustainability, and advocated its use as a building material, in a report entitled 'Commodities at a glance: Special issue on industrial hemp'. [ 33 ] Specific parameters are needed for sustainable construction projects in developing countries. Scholar Chrisna Du Plessis of the Council for Scientific and Industrial Research (CSIR) identifies several key issues as specific to work in developing countries. [ 12 ] In a later work, Du Plessis extends the definition of sustainable construction to touch on the importance of sustainability in social and economic contexts as well. [ 34 ] This is especially relevant in construction projects in the Global South, where local value systems and social interactions may differ from the western context in which sustainable construction frameworks were developed. [ 34 ] First, the need for sustainable development measures in developing countries is considered. Most scholars have reached a consensus on the concept of the 'double burden' placed on developing countries as a result of the interactions between development and the environment.
Developing countries are uniquely vulnerable to problems of both development (resource strain, pollution, waste management, etc.) and under-development (lack of housing, inadequate water and sanitation systems, hazardous work environments) that directly influence their relationship with the surrounding environment. [ 35 ] Additionally, scholars have defined two classes of environmental problems faced by developing countries; 'brown agendas' consider issues that cause more immediate environmental health consequences on localized populations, whereas ' green agendas ' consider issues that address long-term, wide-scope threats to the environment. [ 35 ] [ 36 ] Typically, green agenda solutions are promoted by environmentalists from developed, western countries, leading them to be commonly criticized as being elitist and ignorant to the needs of the poor, especially since positive results are often delayed due to their long-term scope. [ 36 ] Scholars have argued that sometimes these efforts can even end up hurting impoverished communities; for example, conservation initiatives often lead to restrictions on resource-use despite the fact that many rural communities rely on these resources as a source of income, forcing households to either find new livelihoods or find different areas for harvesting. [ 37 ] General consensus is that the best approaches to sustainable construction in developing countries is through a merging of brown and green agenda ideals. [ 35 ] [ 36 ] Since all of the definitions and frameworks for the major concepts outlined previously are developed by large international organizations and commissions, their research and writings directly influence the organization, procedures, and scale of rural development projects in the Global South . Attempts at community development by foreign organizations like the ones discussed have questionable records of success. For instance, billions of dollars of aid have flowed into Africa over the past 60 years in order to address infrastructure shortcomings, yet this aid has created numerous social and economic problems without making any progress toward infrastructure development. [ 38 ] One compelling explanation for why infrastructure projects as a result of foreign aid have failed in the past is that they are often eurocentric in modelling and applied off successful strategies used in western countries without adapting to local needs, environmental circumstances and cultural value systems. [ 39 ] Often NGOs and development nonprofits are criticized for taking over responsibilities that are traditionally carried out by the state, causing governments to become ineffective in handling these responsibilities over time. Within Africa, NGOs carry out the majority of sustainable building and construction through donor-funded, low-income housing projects . [ 38 ] Currently, sustainable construction has become mainstream in the construction industry. The increasing drive to adopt a better way of construction, stricter industrial standards and the improvement of technologies have lowered the cost of applying the concept, according to Business Case For Green Building Report. [ 17 ] The current cost of sustainable construction may be 0.4% lower than the normal cost of construction.
https://en.wikipedia.org/wiki/Sustainability_in_construction
Sustainability science first emerged in the 1980s and has become a new academic discipline. [ 1 ] [ 2 ] Similar to agricultural science or health science, it is an applied science defined by the practical problems it addresses. Sustainability science focuses on issues relating to sustainability and sustainable development as core parts of its subject matter. [ 2 ] It is "defined by the problems it addresses rather than by the disciplines it employs" and "serves the need for advancing both knowledge and action by creating a dynamic bridge between the two". [ 3 ] Sustainability science draws upon the related but not identical concepts of sustainable development and environmental science. [ 4 ] Sustainability science provides a critical framework for sustainability, [ 5 ] while sustainability measurement provides the evidence-based quantitative data needed to guide sustainability governance. [ 6 ] The field began to emerge in the 1980s with a number of foundational publications, including the World Conservation Strategy (1980), [ 7 ] the Brundtland Commission's report Our Common Future (1987), [ 8 ] and the U.S. National Research Council's Our Common Journey (1999). [ 9 ] [ 1 ] [ 10 ] The new field was officially introduced with a "Birth Statement" at the World Congress "Challenges of a Changing Earth 2001" in Amsterdam, organized by the International Council for Science (ICSU), the International Geosphere-Biosphere Programme (IGBP), the International Human Dimensions Programme on Global Environmental Change and the World Climate Research Programme (WCRP). The field reflects a desire to give the generalities and broad-based approach of "sustainability" a stronger analytic and scientific underpinning as it "brings together scholarship and practice, global and local perspectives from north and south, and disciplines across the natural and social sciences, engineering, and medicine". [ 11 ] Ecologist William C. Clark proposes that it can be usefully thought of as "neither 'basic' nor 'applied' research but as a field defined by the problems it addresses rather than by the disciplines it employs" and that it "serves the need for advancing both knowledge and action by creating a dynamic bridge between the two". [ 12 ] Definitions of sustainability science are as varied, and as elusive, as definitions of sustainability and sustainable development themselves. In an overview on its website in 2008, the Sustainability Science Program at Harvard University characterized the field as problem-driven: defined by the problems it addresses rather than by the disciplines it employs, and drawing on both research and practice. [ 13 ] Susan W. Kieffer and colleagues, in 2003, suggested that sustainability requires minimizing the harmful consequences of human activity so that humanity does not become a threat to the Earth system on which it depends. [ 14 ] According to some 'new paradigms', definitions must also encompass structural faults of contemporary civilization that could lead toward its collapse. [ 15 ] Other authors argue that definitions of sustainability and unsustainability should not be centred on Euro-centric economic models alone, but should also reflect perspectives from other regions, including Africa.
In a 2012 commentary on sustainable consumption, Halina Brown argued that sustainability science must also address levels of material consumption and the structure of the consumer society. [ 16 ] Research and development for sustainability has been embraced and promoted as an important component of sustainable development strategies, from the Brundtland Commission's report Our Common Future and Agenda 21 of the United Nations Conference on Environment and Development to the World Summit on Sustainable Development. The topics of the following sub-headings touch on some of the recurring themes addressed in the literature of sustainability science, [ 17 ] according to a compendium published as Readings in Sustainability, edited by Robert Kates, with a preface by William Clark. [ 18 ] The 2012 commentary by Halina Brown considerably expands that scope. [ 16 ] This remains a work in progress. The Encyclopedia of Sustainability was created as a collaborative effort to provide peer-reviewed entries covering sustainability policy evaluations. [ 19 ] Knowledge structuring is an essential foundation for a comprehensive understanding of sustainability, whose issues are complexly inter-connected. It is needed as a response to demands from society and policy-makers. The data relevant to sustainability are sourced from many disciplines and fields. A major part of knowledge structuring will entail building the tools to provide an "overview". Researchers in sustainability science can construct and coordinate a framework within which data created across disciplines can be shared and disseminated. The attempt by sustainability science to integrate "whole" systems requires cooperation between researchers across disciplinary and national boundaries, and ultimately a global cooperative effort; one major task of sustainability science is to support such integrated, cross-disciplinary coordination. Geoscience is the study of the Earth. Geoscience broadly includes geology, hydrology, geological engineering, volcanology, and environmental geology, and increasingly engages with sustainability science. Geologists are crucial to the sustainability movement: they hold special knowledge and a deep understanding of how the Earth recycles and maintains itself. [ 20 ] The relationship between sustainability and geology can be seen in the notable changes in geologic processes since human activity became a significant force. [ 21 ] The relationship between the two is therefore long-standing, [ 22 ] although the parallels have their limits, given the gradual nature of geological change. There is, however, a tension in this relationship: geologists do not always find themselves centered in sustainability thinking.
One of the reasons for this is that many geologists continue to disagree about the Anthropocene Epoch, [ 23 ] and about whether humans possess the capacity to adapt to environmental changes or whether those changes are being understated in conceptual form. [ 24 ] Even so, geology is gaining a toehold in sustainability thinking through links to the Sustainable Development Goals. These fluid and evolving goals, however, only occasionally overlap with the day-to-day occupations of geologists outside government departments. Geology is essential to understanding many of modern civilization's environmental challenges, [ 25 ] and it plays a major role in determining whether humans can live sustainably on Earth. Having much to do with energy, water, climate change, and natural hazards, geology interprets and helps solve a wide variety of problems. [ 25 ] However, relatively few geologists currently make contributions toward a sustainable future outside of government work. [ 23 ] Many geologists work for oil and gas or mining companies, which are typically poor avenues for sustainability. To be sustainably minded, geologists can collaborate with other Earth and life sciences, such as ecology, zoology, physical geography, biology, and the environmental sciences; [ 26 ] by doing so, they can better understand the impact their work has on the planet. [ 23 ] By working with more fields of study and broadening their knowledge of the environment, geologists can make their work more environmentally conscious. To ensure that sustainability and geoscience maintain their momentum, schools around the world can make an effort to incorporate geoscience into their curricula, [ 27 ] and society can incorporate the international development goals. [ 28 ] A common misconception is that geology is simply the study of rocks; it is much broader, being the study of the Earth, the ways it works, and what that means for life. [ 27 ] Understanding Earth processes opens many doors for understanding how humans affect the planet and how it can be protected. For more people to understand this field of study, more schools would need to integrate this knowledge into their teaching; as more people hold it, it becomes easier to pursue global development goals. In recent years, more and more university degree programs have developed formal curricula which address issues of sustainability science and global change.
https://en.wikipedia.org/wiki/Sustainability_science
Sustainable architecture is architecture that seeks to minimize the negative environmental impact of buildings through improved efficiency and moderation in the use of materials, energy, development space and the ecosystem at large. Sometimes, sustainable architecture will also focus on the social aspect of sustainability as well. Sustainable architecture uses a conscious approach to energy and ecological conservation in the design of the built environment. [ 1 ] [ 2 ] The idea of sustainability , or ecological design , is to ensure that use of currently available resources does not end up having detrimental effects to a future society's well-being or making it impossible to obtain resources for other applications in the long run. [ 3 ] The term "sustainability" in relation to architecture has so far been mostly considered through the lens of building technology and its transformations. Going beyond the technical sphere of " green design ", invention and expertise, some scholars are starting to position architecture within a much broader cultural framework of the human interrelationship with nature . Adopting this framework allows tracing a rich history of cultural debates about humanity's relationship to nature and the environment, from the point of view of different historical and geographical contexts. [ 4 ] Global construction accounts for 38% of total global emissions. [ 5 ] While sustainable architecture and construction standards have traditionally focused on reducing operational carbon emissions, there are to date few standards or systems in place to track and reduce embodied carbon. [ 6 ] While steel and other materials are responsible for large-scale emissions, cement alone is responsible for 8% of all emissions. [ 7 ] Critics of the reductionism of modernism often noted the abandonment of the teaching of architectural history as a causal factor. The fact that a number of the major players in the deviation from modernism were trained at Princeton University's School of Architecture, where recourse to history continued to be a part of design training in the 1940s and 1950s, was significant. The increasing rise of interest in history had a profound impact on architectural education. History courses became more typical and regularized. With the demand for professors knowledgeable in the history of architecture, several PhD programs in schools of architecture arose in order to differentiate themselves from art history PhD programs, where architectural historians had previously trained. In the US, MIT and Cornell were the first, created in the mid-1970s, followed by Columbia , Berkeley , and Princeton . Among the founders of new architectural history programs were Bruno Zevi at the Institute for the History of Architecture in Venice, Stanford Anderson and Henry Millon at MIT, Alexander Tzonis at the Architectural Association , Anthony Vidler at Princeton, Manfredo Tafuri at the University of Venice, Kenneth Frampton at Columbia University , and Werner Oechslin and Kurt Forster at ETH Zürich . [ 8 ] Energy efficiency over the entire life cycle of a building is the most important goal of sustainable architecture. Architects use many different passive and active techniques to reduce the energy needs of buildings and increase their ability to capture or generate their own energy. 
[ 9 ] To minimize cost and complexity, sustainable architecture prioritizes passive systems to take advantage of building location with incorporated architectural elements, supplementing with renewable energy sources and then fossil fuel resources only as needed. [ 10 ] Site analysis can be employed to optimize use of local environmental resources such as daylight and ambient wind for heating and ventilation. Energy use very often depends on whether the building gets its energy on-grid, or off-grid. [ 11 ] Off-grid buildings do not use energy provided by utility services and instead have their own independent energy production. They use on-site electricity storage while on-grid sites feed in excessive electricity back to the grid. Numerous passive architectural strategies have been developed over time. Examples of such strategies include the arrangement of rooms or the sizing and orientation of windows in a building, [ 9 ] and the orientation of facades and streets or the ratio between building heights and street widths for urban planning. [ 12 ] An important and cost-effective element of an efficient heating, ventilation, and air conditioning (HVAC) system is a well-insulated building . A more efficient building requires less heat generating or dissipating power, but may require more ventilation capacity to expel polluted indoor air . Significant amounts of energy are flushed out of buildings in the water, air and compost streams. Off the shelf , on-site energy recycling technologies can effectively recapture energy from waste hot water and stale air and transfer that energy into incoming fresh cold water or fresh air. Recapture of energy for uses other than gardening from compost leaving buildings requires centralized anaerobic digesters . HVAC systems are powered by motors. Copper , versus other metal conductors, helps to improve the electrical energy efficiencies of motors, thereby enhancing the sustainability of electrical building components. Site and building orientation have some major effects on a building's HVAC efficiency. Passive solar building design allows buildings to harness the energy of the sun efficiently without the use of any active solar mechanisms such as photovoltaic cells or solar hot water panels . Typically passive solar building designs incorporate materials with high thermal mass that retain heat effectively and strong insulation that works to prevent heat escape. Low energy designs also requires the use of solar shading, by means of awnings, blinds or shutters, to relieve the solar heat gain in summer and to reduce the need for artificial cooling. In addition, low energy buildings typically have a very low surface area to volume ratio to minimize heat loss. This means that sprawling multi-winged building designs (often thought to look more "organic") are often avoided in favor of more centralized structures. Traditional cold climate buildings such as American colonial saltbox designs provide a good historical model for centralized heat efficiency in a small-scale building. Windows are placed to maximize the input of heat-creating light while minimizing the loss of heat through glass, a poor insulator. In the northern hemisphere this usually involves installing a large number of south-facing windows to collect direct sun and severely restricting the number of north-facing windows. 
Certain window types, such as double or triple glazed insulated windows with gas filled spaces and low emissivity (low-E) coatings, provide much better insulation than single-pane glass windows. Preventing excess solar gain by means of solar shading devices in the summer months is important to reduce cooling needs. Deciduous trees are often planted in front of windows to block excessive sun in summer with their leaves but allow light through in winter when their leaves fall off. Louvers or light shelves are installed to allow the sunlight in during the winter (when the sun is lower in the sky) and keep it out in the summer (when the sun is high in the sky). They are slatted like shutters and reflect light and radiation to reduce glare on the interior space. Advanced louver systems are automated to maximize daylight and monitor the interior temperature by adjusting their tilt. [ 13 ] Coniferous or evergreen plants are often planted to the north of buildings to shield against cold north winds. In colder climates, heating systems are a primary focus for sustainable architecture because they are typically one of the largest single energy drains in buildings. In warmer climates where cooling is a primary concern, passive solar designs can also be very effective. Masonry building materials with high thermal mass are very valuable for retaining the cool temperatures of night throughout the day. In addition, builders often opt for sprawling single-story structures in order to maximize surface area and heat loss. [ citation needed ] Buildings are often designed to capture and channel existing winds, particularly the especially cool winds coming from nearby bodies of water. Many of these valuable strategies are employed in some way by the traditional architecture of warm regions, such as south-western mission buildings. In climates with four seasons, an integrated energy system will increase in efficiency: when the building is well insulated, when it is sited to work with the forces of nature, when heat is recaptured (to be used immediately or stored), when the heat plant relying on fossil fuels or electricity is greater than 100% efficient, and when renewable energy is used. Active solar devices such as photovoltaic solar panels help to provide sustainable electricity for any use. Electrical output of a solar panel is dependent on orientation, efficiency, latitude, and climate; solar gain varies even at the same latitude. Typical efficiencies for commercially available PV panels range from 4% to 28%. The low efficiency of certain photovoltaic panels can significantly affect the payback period of their installation. This low efficiency does not mean that solar panels are not a viable energy alternative. In Germany, for example, solar panels are commonly installed in residential home construction. Roofs are often angled toward the sun to allow photovoltaic panels to collect at maximum efficiency. In the northern hemisphere, a true-south facing orientation maximizes yield for solar panels. If true south is not possible, solar panels can produce adequate energy if aligned within 30° of south. However, at higher latitudes, winter energy yield will be significantly reduced for non-south orientations. To maximize efficiency in winter, the collector can be angled above the horizontal at latitude +15°. To maximize efficiency in summer, the angle should be latitude -15°. However, for maximum annual production, the angle of the panel above horizontal should be equal to its latitude. [ 14 ]
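The fixed-tilt guidance just described reduces to simple arithmetic. The short Python sketch below encodes the latitude ±15° rule of thumb for a northern-hemisphere panel; the example latitude of 40° is an illustrative assumption.

def pv_tilt_angles(latitude_deg):
    """Rule-of-thumb tilt angles (degrees above horizontal) for a fixed,
    south-facing panel in the northern hemisphere, per the latitude +/- 15 degree guidance."""
    return {
        "winter_optimised": latitude_deg + 15,
        "summer_optimised": latitude_deg - 15,
        "annual_optimised": latitude_deg,
    }

# Example: a roof at 40 degrees north latitude
for label, tilt in pv_tilt_angles(40).items():
    print(f"{label}: tilt the panel {tilt} degrees above horizontal, facing true south")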
The use of undersized wind turbines in energy production in sustainable structures requires the consideration of many factors. In considering costs, small wind systems are generally more expensive than larger wind turbines relative to the amount of energy they produce. For small wind turbines, maintenance costs can be a deciding factor at sites with marginal wind-harnessing capabilities. At low-wind sites, maintenance can consume much of a small wind turbine's revenue. [ 15 ] Wind turbines begin operating when winds reach 8 mph, achieve their rated energy production capacity at speeds of 32-37 mph, and shut off to avoid damage at speeds exceeding 55 mph. [ 15 ] The energy potential of a wind turbine is proportional to the square of the length of its blades (that is, to the swept rotor area) and to the cube of the wind speed. Though wind turbines are available that can supplement power for a single building, because of these factors the efficiency of a wind turbine depends greatly upon the wind conditions at the building site. For these reasons, for wind turbines to be at all efficient, they must be installed at locations that are known to receive a constant amount of wind (with average wind speeds of more than 15 mph), rather than locations that receive wind sporadically. [ 16 ] A small wind turbine can be installed on a roof. Installation issues then include the strength of the roof, vibration, and the turbulence caused by the roof ledge. Small-scale rooftop wind turbines have been known to generate from 10% up to 25% of the electricity required by a regular domestic household dwelling. [ 17 ] Turbines for residential-scale use are usually between 7 feet (2 m) and 25 feet (8 m) in diameter and produce electricity at a rate of 900 watts to 10,000 watts at their tested wind speed. [ 18 ] The reliability of wind turbine systems is important to the success of a wind energy project. Unanticipated breakdowns can have a significant impact on a project's profitability due to the logistical and practical difficulties of replacing critical components in a wind turbine. Uncertainty about long-term component reliability has a direct impact on the amount of confidence associated with cost of energy (COE) estimates. [ 19 ] Solar water heaters, also called solar domestic hot water systems, can be a cost-effective way to generate hot water for a home. They can be used in any climate, and the fuel they use, sunshine, is free. [ 20 ] There are two types of solar water systems: active and passive. An active solar collector system can produce about 80 to 100 gallons of hot water per day. A passive system will have a lower capacity. [ 21 ] Active solar water systems have efficiencies of 35-80%, while passive systems are 30-50% efficient, making active systems more effective. [ 22 ] There are also two types of circulation: direct circulation systems and indirect circulation systems. Direct circulation systems loop the domestic water through the panels. They should not be used in climates with temperatures below freezing. Indirect circulation loops glycol or some other fluid through the solar panels and uses a heat exchanger to heat up the domestic water. The two most common types of collector panels are flat-plate and evacuated-tube. The two work similarly except that evacuated tubes do not convectively lose heat, which greatly improves their efficiency (5%–25% more efficient).
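As a rough illustration of what the collector efficiencies quoted above imply in practice, the Python sketch below sizes a collector for a daily hot-water load using the basic heat equation Q = m·c·ΔT. The daily volume, temperature rise and solar insolation figures are illustrative assumptions; only the efficiency values (roughly 50% for an active system, 35% for a passive one) are drawn from the ranges given in the text.

WATER_HEAT_CAPACITY_KWH_PER_KG_K = 4186 / 3.6e6  # 4186 J/(kg*K) expressed in kWh

def daily_heat_demand_kwh(volume_litres, cold_c=15, hot_c=60):
    """Energy needed to heat one day's hot water (1 litre of water is about 1 kg)."""
    return volume_litres * WATER_HEAT_CAPACITY_KWH_PER_KG_K * (hot_c - cold_c)

def collector_area_m2(demand_kwh, insolation_kwh_m2_day, efficiency):
    """Collector area required to meet the demand at a given overall system efficiency."""
    return demand_kwh / (insolation_kwh_m2_day * efficiency)

demand = daily_heat_demand_kwh(300)  # roughly 80 US gallons per day
for label, eff in [("active system (~50%)", 0.50), ("passive system (~35%)", 0.35)]:
    area = collector_area_m2(demand, insolation_kwh_m2_day=5.0, efficiency=eff)
    print(f"{label}: ~{demand:.1f} kWh/day load needs about {area:.1f} m^2 of collector")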
Because evacuated-tube collectors are more efficient, they can also produce higher-temperature space heating, and even higher temperatures for absorption cooling systems. [ 23 ] Electric-resistance water heaters that are common in homes today have an electrical demand of around 4500 kW·h/year. With the use of solar collectors, the energy use is cut in half. The up-front cost of installing solar collectors is high, but with the annual energy savings, payback periods are relatively short. [ 23 ] Air source heat pumps (ASHP) can be thought of as reversible air conditioners. Like an air conditioner, an ASHP can take heat from a relatively cool space (e.g. a house at 70 °F) and dump it into a hot place (e.g. outside at 85 °F). However, unlike an air conditioner, the condenser and evaporator of an ASHP can switch roles and absorb heat from the cool outside air and dump it into a warm house. Air-source heat pumps are inexpensive relative to other heat pump systems. The efficiency of air-source heat pumps declines when the outdoor temperature is very cold or very hot, so they are most efficiently used in temperate climates. [ 23 ] However, contrary to earlier expectations, they have proven to be well suited also for regions with cold outdoor temperatures, such as Scandinavia or Alaska. [ 24 ] [ 25 ] In Norway, Finland and Sweden, the use of heat pumps has grown strongly over the last two decades: in 2019, there were 15–25 heat pumps per 100 inhabitants in these countries, with ASHP the dominant heat pump technology. [ 25 ] Similarly, earlier assumptions that ASHP would only work well in fully insulated buildings have proven wrong: even old, partially insulated buildings can be retrofitted with ASHPs and thereby strongly reduce their energy demand. [ 26 ] The effects of EAHPs (exhaust air heat pumps) have also been studied within the aforementioned regions, with promising results. An exhaust air heat pump uses electricity to extract heat from exhaust air leaving a building, redirecting it towards DHW (domestic hot water), space heating, and warming supply air. In colder countries, an EAHP may be able to recover around 2-3 times more energy than an air-to-air exchange system. [ 27 ] A 2022 study of projected emission decreases within Finland's Kymenlaakso region explored retrofitting existing apartment buildings (of varying ages) with EAHP systems. Selected buildings were chosen in the cities of Kotka and Kouvola, with projected carbon emissions decreasing by about 590 tCO2 and 944 tCO2 respectively, and a 7-13 year payoff period. [ 28 ] It is, however, important to note that EAHP systems may not produce favourable results if installed in a building with incompatible exhaust output rates or electricity consumption. In such cases, EAHP systems may increase energy bills without providing reasonable cuts to carbon emissions (see EAHP). Ground-source (or geothermal) heat pumps provide an efficient alternative. The difference between the two heat pumps is that the ground-source pump has one of its heat exchangers placed underground, usually in a horizontal or vertical arrangement. Ground-source pumps take advantage of the relatively constant, mild temperatures underground, which means their efficiencies can be much greater than those of an air-source heat pump. The in-ground heat exchanger generally needs a considerable amount of area. Designers have placed them in an open area next to the building or underneath a parking lot.
Energy Star ground-source heat pumps can be 40% to 60% more efficient than their air-source counterparts. They are also quieter and can also be applied to other functions like domestic hot water heating. [ 23 ] In terms of initial cost, the ground-source heat pump system costs about twice as much as a standard air-source heat pump to be installed. However, the up-front costs can be more than offset by the decrease in energy costs. The reduction in energy costs is especially apparent in areas with typically hot summers and cold winters. [ 23 ] Other types of heat pumps are water-source and air-earth. If the building is located near a body of water, the pond or lake could be used as a heat source or sink. Air-earth heat pumps circulate the building's air through underground ducts. With higher fan power requirements and inefficient heat transfer, Air-earth heat pumps are generally not practical for major construction. Passive daytime radiative cooling harvests the extreme coldness of outer space as a renewable energy source to achieve daytime cooling. [ 29 ] Being high in solar reflectance to reduce solar heat gain and strong in longwave infrared (LWIR) thermal radiation heat transfer , daytime radiative cooling surfaces can achieve sub-ambient cooling for indoor and outdoor spaces when applied to roofs, which can significantly lower energy demand and costs devoted to cooling. [ 30 ] [ 31 ] These cooling surfaces can be applied as sky-facing panels, similar to other renewable energy sources like solar energy panels, making them for simple integration into architectural design. [ 32 ] A passive daytime radiative cooling roof application can double the energy savings of a white roof, [ 33 ] and when applied as a multilayer surface to 10% of a building's roof, it can replace 35% of air conditioning used during the hottest hours of daytime. [ 34 ] Daytime radiative cooling applications for indoor space cooling is growing with an estimated "market size of ~$27 billion in 2025." [ 35 ] Some examples of sustainable building materials include recycled denim or blown-in fiber glass insulation, sustainably harvested wood, Trass , Linoleum , [ 36 ] sheep wool, hempcrete , roman concrete , [ 37 ] panels made from paper flakes, baked earth, rammed earth, clay, vermiculite, flax linen, sisal, seagrass, expanded clay grains, coconut, wood fiber plates, calcium sandstone, locally obtained stone and rock, and bamboo , which is one of the strongest and fastest growing woody plants , and non-toxic low- VOC glues and paints. Bamboo flooring can be useful in ecological spaces since they help reduce pollution particles in the air. [ 38 ] Vegetative cover or shield over building envelopes also helps in the same. Paper which is fabricated or manufactured out of forest wood is supposedly one hundred percent recyclable, thus it regenerates and saves almost all the forest wood that it takes during its manufacturing process. There is an underutilized potential for systematically storing carbon in the built environment. [ 39 ] The use of natural building materials for their sustainable qualities is a practice seen in vernacular architecture . Regional architectural styles develop over generations, utilizing local materials. This practice reduces transportation and production emissions. [ 40 ] Regenerative sources, use of waste material, and the ability to reuse are sustainable qualities of timber, thatching, and stone and clay. 
Laminated timber products, straw, and stone are low carbon construction materials with major potential for scalability. Timber products can sequester carbon, while stone has a low extraction energy. Straw, including straw-bale construction , sequesters carbon while providing a high level of insulation. The high thermal performance of natural materials contributes to regulating interior conditions without the use of modern technologies. [ 40 ] The uses of timber, straw, and stone in sustainable architecture were the subject of a major exhibit at the UK's Design Museum. [ 41 ] Sustainable architecture often incorporates the use of recycled or second-hand materials, such as reclaimed lumber and recycled copper . The reduction in use of new materials creates a corresponding reduction in embodied energy (energy used in the production of materials). Often sustainable architects attempt to retrofit old structures to serve new needs in order to avoid unnecessary development. Architectural salvage and reclaimed materials are used when appropriate. When older buildings are demolished, frequently any good wood is reclaimed, renewed, and sold as flooring. Any good dimension stone is similarly reclaimed. Many other parts are reused as well, such as doors, windows, mantels, and hardware, thus reducing the consumption of new goods. When new materials are employed, green designers look for materials that are rapidly replenished, such as bamboo , which can be harvested for commercial use after only six years of growth, sorghum or wheat straw, both of which are waste material that can be pressed into panels, or cork oak , in which only the outer bark is removed for use, thus preserving the tree. When possible, building materials may be gleaned from the site itself; for example, if a new structure is being constructed in a wooded area, wood from the trees which were cut to make room for the building would be re-used as part of the building itself. For insulation in building envelopes, more experimental materials such as “waste sheep’s wool” alongside other waste fibers originating from textile and agri-industrial operations are being researched for use as well, with recent studies suggesting that the recycled insulation is effective for architectural purposes. [ 42 ] Low-impact building materials are used wherever feasible: for example, insulation may be made from low VOC ( volatile organic compound )-emitting materials such as recycled denim or cellulose insulation , rather than the building insulation materials that may contain carcinogenic or toxic materials such as formaldehyde. To discourage insect damage, these alternate insulation materials may be treated with boric acid . Organic or milk-based paints may be used. [ 43 ] However, a common fallacy is that "green" materials are always better for the health of occupants or the environment. Many harmful substances (including formaldehyde, arsenic, and asbestos) are naturally occurring and are not without their histories of use with the best of intentions. A study of emissions from materials by the State of California has shown that there are some green materials that have substantial emissions whereas some more "traditional" materials were actually lower emitters. Thus, the subject of emissions must be carefully investigated before concluding that natural materials are always the healthiest alternatives for occupants and for the Earth. [ 44 ] Volatile organic compounds (VOCs) can be found in any indoor environment, coming from a variety of different sources.
VOCs have a high vapor pressure and low water solubility, and are suspected of causing sick building syndrome type symptoms. This is because many VOCs have been known to cause sensory irritation and central nervous system symptoms characteristic of sick building syndrome, indoor concentrations of VOCs are higher than in the outdoor atmosphere, and when there are many VOCs present, they can cause additive and multiplicative effects. Green products are usually considered to contain fewer VOCs and be better for human and environmental health. A case study conducted by the Department of Civil, Architectural, and Environmental Engineering at the University of Miami compared three green products with their non-green counterparts and found that even though both the green products and the non-green counterparts emitted VOCs, the amount and intensity of the VOCs emitted from the green products were much safer and more comfortable for human exposure. [ 45 ] Commonly used building materials such as wood require deforestation that is, without proper care, unsustainable. As of October 2022, researchers at MIT have made progress on growing lab-grown Zinnia elegans cells with specific characteristics under conditions within their control. These characteristics include the “shape, thickness, [and] stiffness,” as well as mechanical properties that can mimic wood. [ 46 ] David N. Bengston from the USDA suggests that this alternative would be more efficient than traditional wood harvesting, with future developments potentially saving on transportation energy and conserving forests. However, Bengston notes that this breakthrough would change paradigms and raises new economic and environmental questions, such as timber-dependent communities' jobs or how conservation would impact wildfires. [ 47 ] Despite the importance of materials to overall building sustainability, quantifying and evaluating the sustainability of building materials has proven difficult. There is little coherence in the measurement and assessment of materials sustainability attributes, resulting in a landscape today that is littered with hundreds of competing, inconsistent and often imprecise eco-labels, standards and certifications . This discord has led both to confusion among consumers and commercial purchasers and to the incorporation of inconsistent sustainability criteria in larger building certification programs such as LEED . Various proposals have been made regarding rationalization of the standardization landscape for sustainable building materials. [ 48 ] Building information modelling (BIM) is used to help enable sustainable design by allowing architects and engineers to integrate and analyze building performance. [ 5 ] BIM services, including conceptual and topographic modelling, offer a new channel to green building through the continuous and immediate availability of internally coherent and trustworthy project information. BIM enables designers to quantify the environmental impacts of systems and materials to support the decisions needed to design sustainable buildings. A sustainable building consultant may be engaged early in the design process, to forecast the sustainability implications of building materials , orientation, glazing and other physical factors, so as to identify a sustainable approach that meets the specific requirements of a project. Norms and standards have been formalized by performance-based rating systems, e.g. LEED [ 49 ] and Energy Star for homes.
[ 50 ] They define benchmarks to be met and provide metrics and testing to meet those benchmarks. It is up to the parties involved in the project to determine the best approach to meet those standards. As sustainable building consulting is often associated with a cost premium, organisations such as Architects Assist aim for equity of access to sustainable and resilient design. [ 51 ] One central and often ignored aspect of sustainable architecture is building placement. [ 52 ] Although the ideal environmental home or office structure is often envisioned as an isolated place, this kind of placement is usually detrimental to the environment. First, such structures often serve as the unknowing frontlines of suburban sprawl . Second, they usually increase the energy consumption required for transportation and lead to unnecessary auto emissions. Ideally, most buildings should avoid suburban sprawl in favor of the kind of light urban development articulated by the New Urbanist movement. [ 53 ] Careful mixed use zoning can make commercial, residential, and light industrial areas more accessible for those traveling by foot, bicycle, or public transit, as proposed in the Principles of Intelligent Urbanism . The study of permaculture , in its holistic application, can also greatly help in proper building placement that minimizes energy consumption and works with the surroundings rather than against them, especially in rural and forested zones. Sustainable buildings look for ways to conserve water . One strategic water-saving design that green buildings incorporate is the green roof . Green roofs have rooftop vegetation which captures storm drainage water. This function not only collects the water for further uses but also serves as a good insulator that can help mitigate the urban heat island effect. [ 38 ] Another strategic water-efficient design is treating wastewater so it can be reused. [ 54 ] Other sustainable design strategies include rainwater harvesting systems and low-flow fixtures, which are widely used in green architecture to substantially reduce water consumption and support long-term resource conservation. [ 55 ] Sustainable urbanism goes beyond sustainable architecture and takes a broader view of sustainability. Typical solutions include eco-industrial parks (EIPs) and urban agriculture . International programs that are being supported include the Sustainable Urban Development Network, [ 56 ] supported by UN-HABITAT, and Eco2 Cities, [ 57 ] supported by the World Bank. Concurrently, the recent movements of New Urbanism , New Classical architecture and complementary architecture promote a sustainable approach towards construction that appreciates and develops smart growth , architectural tradition and classical design . [ 58 ] [ 59 ] This is in contrast to modernist and globally uniform architecture, and a turn away from solitary housing estates and suburban sprawl . [ 60 ] Both trends started in the 1980s. The Driehaus Architecture Prize is an award that recognizes efforts in New Urbanism and New Classical architecture, and is endowed with prize money twice as high as that of the modernist Pritzker Prize . [ 61 ] Waste takes the form of spent or useless materials generated from households and businesses, construction and demolition processes, and manufacturing and agricultural industries. These materials are loosely categorized as municipal solid waste, construction and demolition (C&D) debris, and industrial or agricultural by-products.
[ 62 ] Sustainable architecture focuses on on-site waste management , incorporating things such as grey water systems for use on garden beds and composting toilets to reduce sewage. These methods, when combined with on-site food waste composting and off-site recycling, can reduce a house's waste to a small amount of packaging waste .
https://en.wikipedia.org/wiki/Sustainable_architecture
Design standards , reference standards and performance standards are familiar throughout business and industry, virtually for anything that is definable. Sustainable design , taken as reducing our impact on the earth and making things better at the same time, is in the process of becoming defined. Also, many well-organized specific methodologies are used by different communities of people for a variety of purposes. One of the better known is the Leadership in Energy and Environmental Design (LEED) green building rating system, which uses a diverse group of hard measures of environmental quality and impacts to define a holistic approach to sustainable building and assign ratings to individual projects. Sustainable design is really just a more determined effort to consider the whole range of impacts on our environment in making any decision. A more complete design guide, guided more by whole project impact measures, is the model offered by the U.S. cooperating agencies in the "Whole Building Design Guide" . Green construction codes and standards are beginning to emerge on the national code stage. The standards go beyond energy standards such as ASHRAE 90.1 and the International Energy Conservation Code (IECC) to cover additional areas such as site sustainability, water efficiency , indoor environmental quality and materials and resources. The first is ASHRAE 189.1 , Standard for the Design of High-Performance Green Buildings Except Low-Rise Residential Buildings, published by ASHRAE in January 2010 in conjunction with the U.S. Green Building Council and the Illuminating Engineering Society . Standard 189.1 provides criteria by which a building can be judged as “green,” written in model code language that jurisdictions can use to develop a green building construction code. [ 1 ] Several organizations have developed their own ways of setting goals for energy reductions, such as Architecture 2030 , and for qualifying performance toward them, such as Cradle to Cradle . Developing real methods for discovering the design opportunities that would allow you to meet or exceed the standards was one of the objectives of the environmental design movement in architectural schools in the 1960s and 1970s, but though some of the issues introduced then are still an important part of the process, not much has actually changed about the methods of design. Now, with the combination of many more interactive tools, much higher stakes in the outcome, and long-gestating rethinking about natural systems in general, a dramatic new revolution in methodology seems inevitable. BIM (building information modeling) allows designers to work with many remote consultants on the same data file that represents all the decisions being made by the team. The same file is available to the climate and energy and environmental impact analysis and cost analysis tools and consultants, ... and of course to the prospective contractors and the regulators. Along with this new integrated access to the model, a new way is needed to integrate the conversation of so many people, each with some interest in reviewing each other's comments on the progress with the central design model. That is likely to involve development of wiki tools for the process. One such very early implementation of a Wiki SD tool called "4Dsustainability" organizes the project design evolution around the general learning process of how you define the problem by exploring its environment, and following that through the project.
[ 2 ] The main difference between sustainable design methods and conventional design is incorporating the entire environment of the project's stakeholders on the design team, which essentially requires new ways to explore connections and allows more people and perspectives to be taken into account. Other methods that recognize this requirement are the "AIA SDAT" (sustainable design assessment team) program [ 3 ] and the "Scenarios for sustainability" process design tools. [ 4 ]
https://en.wikipedia.org/wiki/Sustainable_design_standards
Sustainable drainage systems (also known as SuDS , [ 1 ] SUDS , [ 2 ] [ 3 ] or sustainable urban drainage systems [ 4 ] ) are a collection of water management practices that aim to align modern drainage systems with natural water processes and are part of a larger green infrastructure strategy. [ 5 ] SuDS efforts make urban drainage systems more compatible with components of the natural water cycle such as storm surge overflows, soil percolation, and bio-filtration. These efforts hope to mitigate the effect human development has had or may have on the natural water cycle , particularly surface runoff and water pollution trends. [ 6 ] SuDS have become popular in recent decades as understanding of how urban development affects natural environments, as well as concern for climate change and sustainability, has increased. SuDS often use built components that mimic natural features in order to integrate urban drainage systems into the natural drainage systems of a site as efficiently and quickly as possible. SUDS infrastructure has become a large part of the Blue-Green Cities demonstration project in Newcastle upon Tyne . [ 7 ] Drainage systems have been found in ancient cities over 5,000 years old, including Minoan, Indus, Persian, and Mesopotamian civilizations. [ 8 ] These drainage systems focused mostly on reducing nuisances from localized flooding and waste water. Rudimentary systems made from brick or stone channels constituted the extent of urban drainage technologies for centuries. Cities in Ancient Rome also employed drainage systems to protect low-lying areas from excess rainfall. When builders began constructing aqueducts to import fresh water into cities, urban drainage systems became integrated into water supply infrastructure for the first time as a unified urban water cycle. [ 9 ] Modern drainage systems did not appear until the 19th century in Western Europe, although most of these systems were primarily built to deal with sewage issues rising from rapid urbanization . One such example is that of the London sewerage system , which was constructed to combat massive contamination of the River Thames . At the time, the River Thames was the primary component of London's drainage system, with human waste concentrating in the waters adjacent to the densely populated urban center. As a result, several epidemics plagued London's residents and even members of Parliament , including events known as the 1854 Broad Street cholera outbreak and the Great Stink of 1858 . [ 10 ] The concern for public health and quality of life launched several initiatives, which ultimately led to the creation of London's modern sewerage system designed by Joseph Bazalgette . [ 11 ] This new system explicitly aimed to ensure waste water was redirected as far away from water supply sources as possible in order to reduce the threat of waterborne pathogens . Since then, most urban drainage systems have aimed for similar goals of preventing public health crises. Within past decades, as climate change and urban flooding have become increasingly urgent challenges, drainage systems designed specifically for environmental sustainability have become more popular in both academia and practice. The first sustainable drainage system to utilize a full management train including source control in the UK was the Oxford services motorway station designed by SuDS specialists Robert Bray Associates. [ 12 ] Originally, the term SUDS described the UK approach to sustainable urban drainage systems.
These developments may not necessarily be in "urban" areas, and thus the "urban" part of SuDS is now usually dropped to reduce confusion. Other countries have similar approaches in place using a different terminology such as best management practice (BMP) and low-impact development in the United States, [ 13 ] water-sensitive urban design (WSUD) in Australia, [ 14 ] low impact urban design and development (LIUDD) in New Zealand, [ 15 ] and comprehensive urban river basin management in Japan. [ 14 ] The National Research Council's definitive report on urban stormwater management described how urban drainage systems began in the United States after World War II. These structures were based on simple catch basins and pipes to transfer the water outside of the cities. [ 16 ] Urban stormwater management started to evolve in the 1970s, when landscape architects focused more on low-impact development and began using practices such as infiltration channels. [ 16 ] In parallel, scientists became increasingly concerned with other stormwater hazards surrounding pollution. Studies such as the Nationwide Urban Runoff Program showed that urban runoff contained pollutants like heavy metals, sediments, and pathogens, all of which water can pick up as it flows off of impermeable surfaces . [ 17 ] It was at the beginning of the 21st century that stormwater infrastructure allowing runoff to infiltrate close to the source became popular. This was around the same time that the term green infrastructure was coined. [ 18 ] Traditional urban drainage systems are limited by various factors including volume capacity, damage or blockage from debris and contamination of drinking water. Many of these issues are addressed by SuDS systems by bypassing traditional drainage systems altogether and returning rainwater to natural water sources or streams as soon as possible. Increasing urbanisation has caused problems with increased flash flooding after sudden rain. As areas of vegetation are replaced by concrete, asphalt , or roofed structures, leading to impervious surfaces , the area loses its ability to absorb rainwater. This rain is instead directed into surface water drainage systems, often overloading them and causing floods. The goal of all sustainable drainage systems is to use rainfall to recharge the water sources of a given site. These water sources are often the underlying water table , nearby streams, lakes, or other similar freshwater sources. For example, if a site is above an unconsolidated aquifer , then SuDS will aim to direct all rain that falls on the surface layer into the underground aquifer as quickly as possible. To accomplish this, SuDS use various forms of permeable layers to ensure the water is not captured or redirected to another location. Often these layers include soil and vegetation, though they can also be artificial materials. The paradigm of SuDS solutions should be that of a system that is easy to manage, requiring little or no energy input (except from environmental sources such as sunlight, etc.), resilient to use, and being environmentally as well as aesthetically attractive.
Examples of this type of system are basins (shallow landscape depressions that are dry most of the time when it is not raining), rain gardens (shallow landscape depressions with shrub or herbaceous planting), swales (shallow, normally dry, wide-based ditches), filter drains (gravel-filled trench drains), bioretention basins (shallow depressions with gravel and/or sand filtration layers beneath the growing medium), reed beds and other wetland habitats that collect, store, and filter dirty water along with providing a habitat for wildlife. A common misconception of SuDS is that they reduce flooding on the development site. In fact, SuDS are designed to reduce the impact that the surface water drainage system of one site has on other sites. For instance, sewer flooding is a problem in many places. Paving or building over land can result in flash flooding. This happens when flows entering a sewer exceed its capacity and it overflows. The SuDS system aims to minimise or eliminate discharges from the site, thus reducing the impact, the idea being that if all development sites incorporated SuDS then urban sewer flooding would be less of a problem. Unlike traditional urban stormwater drainage systems, SuDS can also help to protect and enhance ground water quality. Because SuDS describe a collection of systems with similar components or goals, there is a large crossover between SuDS and other terminologies dealing with sustainable urban development. [ 19 ] The following are examples generally accepted as components in a SuDS system: Bioswales are channels designed to concentrate and convey stormwater runoff while removing debris and pollution . Bioswales can also be beneficial in recharging groundwater . Bioswales are typically vegetated, mulched, or xeriscaped . [ 20 ] They consist of a swaled drainage course with gently sloped sides (less than 6%). [ 21 ] : 19 Bioswale design is intended to safely maximize the time water spends in the swale , which aids the collection and removal of pollutants, silt and debris. Depending on the site topography, the bioswale channel may be straight or meander. Check dams are also commonly added along the bioswale to increase stormwater infiltration. A bioswale's make-up can be influenced by many different variables, including climate, rainfall patterns, site size, budget, and vegetation suitability. It is important to maintain bioswales to ensure the best possible efficiency and effectiveness in removal of pollutants from stormwater runoff. Planning for maintenance is an important step, which can include the introduction of filters or large rocks to prevent clogging. Annual maintenance through soil testing, visual inspection, and mechanical testing is also crucial to the health of a bioswale. Permeable paving surfaces are made of either a porous material that enables stormwater to flow through it or nonporous blocks spaced so that water can flow between the gaps. Permeable paving can also include a variety of surfacing techniques for roads, parking lots, and pedestrian walkways. Permeable pavement surfaces may be composed of pervious concrete , porous asphalt, paving stones , or interlocking pavers. [ 22 ] Unlike traditional impervious paving materials such as concrete and asphalt, permeable paving systems allow stormwater to percolate and infiltrate through the pavement and into the aggregate layers and/or soil below. In addition to reducing surface runoff, permeable paving systems can trap suspended solids, thereby filtering pollutants from stormwater.
[ 23 ] Artificial wetlands can be constructed in areas that see large volumes of storm water surges or runoff. Built to replicate shallow marshes, wetlands as BMPs gather and filter water at scales larger than bioswales or rain gardens. Unlike bioswales, artificial wetlands are designed to replicate natural wetland processes as opposed to having an engineered mechanism within the artificial wetland. Because of this, the ecology of the wetland (soil components, water, vegetation, microbes, sunlight processes, etc.) becomes the primary system to remove pollutants. [ 24 ] Water in an artificial wetland tends to be filtered slowly in comparison to systems with mechanized or explicitly engineered components. Wetlands can be used to concentrate large volumes of runoff from urban areas and neighborhoods. In 2012, the South Los Angeles Wetlands Park was constructed in a densely populated inner-city district as a renovation for a former LA Metro bus yard. [ 25 ] The park is designed to capture runoff from surrounding surfaces as well as storm water overflow from the city's current drainage system. [ 26 ] A retention basin, sometimes called a retention pond, wet detention basin , or storm water management pond (SWMP), is an artificial pond with vegetation around the perimeter and a permanent pool of water in its design. [ 27 ] [ 28 ] [ 29 ] It is used to manage stormwater runoff , for protection against flooding , for erosion control , and to serve as an artificial wetland and improve the water quality in adjacent bodies of water. It is distinguished from a detention basin , sometimes called a "dry pond", which temporarily stores water after a storm, but eventually empties out at a controlled rate to a downstream water body. It also differs from an infiltration basin which is designed to direct stormwater to groundwater through permeable soils. Wet ponds are frequently used for water quality improvement, groundwater recharge , flood protection, aesthetic improvement, or any combination of these. Sometimes they act as a replacement for the natural absorption of a forest or other natural process that was lost when an area is developed. As such, these structures are designed to blend into neighborhoods and be viewed as an amenity. [ 30 ] A green roof or living roof is a roof of a building that is partially or completely covered with vegetation and a growing medium, planted over a waterproofing membrane . It may also include additional layers such as a root barrier and drainage and irrigation systems. [ 31 ] Container gardens on roofs, where plants are maintained in pots, are not generally considered to be true green roofs, although this is debated. Rooftop ponds are another form of green roofs which are used to treat greywater . [ 32 ] Vegetation, soil, drainage layer, root barrier and irrigation system constitute the green roof. [ 33 ] Green roofs serve several purposes for a building, such as absorbing rainwater , providing insulation , creating a habitat for wildlife, [ 34 ] decreasing stress for people around the roof by providing a more aesthetically pleasing landscape, and helping to lower urban air temperatures and mitigate the heat island effect . [ 35 ] Green roofs are suitable for retrofit or redevelopment projects as well as new buildings and can be installed on small garages or larger industrial, commercial and municipal buildings. [ 31 ] They effectively use the natural functions of plants to filter water and treat air in urban and suburban landscapes.
[ 36 ] There are two types of green roof: intensive roofs, which are thicker, with a minimum depth of 12.8 cm (5 1⁄16 in), and can support a wider variety of plants but are heavier and require more maintenance, and extensive roofs, which are shallow, ranging in depth from 2 to 12.7 cm (13⁄16 to 5 in), lighter than intensive green roofs, and require minimal maintenance. [ 37 ] Rain gardens are a form of stormwater management using water capture. Rain gardens are shallow depressed areas in the landscape, planted with shrubs and plants that are used to collect rainwater from roofs or pavement and allow the stormwater to slowly infiltrate into the ground. [ 39 ] Rain gardens mimic natural landscape functions by capturing stormwater, filtering out pollutants, and recharging groundwater. [ 40 ] A study done in 2008 explains how rain gardens and stormwater planters are easy to incorporate into urban areas, where they will improve the streets by minimizing the effects of drought and helping with stormwater runoff. Stormwater planters can easily fit between other street landscape elements and are ideal in areas where spacing is tight. [ 41 ] Downspout disconnection is a form of green infrastructure that separates roof downspouts from the sewer system and redirects roof water runoff onto permeable surfaces. [ 14 ] It can be used for storing stormwater or allowing the water to penetrate the ground. Downspout disconnection is especially beneficial in cities with combined sewer systems. With high volumes of rain, downspouts on buildings can send 12 gallons of water a minute into the sewer system, which increases the risk of basement backups and sewer overflows. [ 42 ] Green infrastructure keeps waterways clean and healthy in two primary ways: water retention and water quality . Different green infrastructure strategies prevent runoff by capturing the rain where it lands, allowing it to filter into the ground to recharge groundwater, return to the atmosphere through evapotranspiration , or be reused for another purpose like landscaping. [ 43 ] Water quality is also improved by decreasing the amount of stormwater that reaches other waterways and removing contaminants. Vegetation and soil help capture and remove pollutants from stormwater in many ways like adsorption, filtration, and plant uptake. [ 44 ] These processes break down or capture many of the common pollutants found in runoff. With climate change intensifying, heavy storms are becoming more frequent, and so is the risk of flooding and sewer system overflows. According to the EPA , the average size of a 100-year floodplain is likely to increase by 45% in the next ten years. [ 45 ] Another growing problem is urban flooding caused by too much rain falling on impervious surfaces; urban floods can destroy neighborhoods. [ 46 ] They particularly affect minority and low-income neighborhoods and can leave behind health problems like asthma and illness caused by mold. Green infrastructure reduces flood risks and bolsters the climate resiliency of communities by keeping rain out of sewers and waterways, capturing it where it falls. [ 47 ] [ 48 ] More than half of the rain that falls in urban areas covered mostly by impervious surfaces ends up as runoff. [ 49 ] Green infrastructure practices reduce runoff by capturing stormwater and allowing it to recharge groundwater supplies or be harvested for purposes like landscaping.
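The scale of rooftop capture described here can be illustrated with a simple volume estimate. The sketch below is not from the cited sources; the roof area, annual rainfall, and capture efficiency are hypothetical values chosen only to show the arithmetic.

```python
# Hypothetical estimate of annual rooftop runoff that a capture system could
# divert from the sewer. All input values are illustrative, not measured data.

def captured_volume_m3(roof_area_m2, annual_rainfall_mm, capture_efficiency):
    """Volume of rooftop rain (m^3) diverted by a capture system.

    roof_area_m2        -- plan area of the roof in square metres
    annual_rainfall_mm  -- total annual rainfall in millimetres
    capture_efficiency  -- fraction of rooftop rainfall actually captured (0-1)
    """
    rainfall_m = annual_rainfall_mm / 1000.0  # convert mm of rain to metres
    return roof_area_m2 * rainfall_m * capture_efficiency

if __name__ == "__main__":
    # Example: a 200 m^2 roof, 800 mm of rain per year, 75% capture efficiency
    volume = captured_volume_m3(200, 800, 0.75)
    print(f"Captured per year: {volume:.0f} m^3 "
          f"(~{volume * 1000:.0f} litres) kept out of the drainage system")
```

Even with these modest assumed figures, a single roof keeps on the order of a hundred cubic metres of water per year out of the piped drainage system.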
Green infrastructure promotes rainfall conservation through the use of capture methods and infiltration techniques, for instance bioswales. As much as 75 percent of the rainfall that lands on a rooftop can be captured and used for other purposes. [ 50 ] A city with miles of dark hot pavement absorbs and radiates heat into the surrounding atmosphere at a greater rate than natural landscapes do. [ 51 ] This is the urban heat island effect, which causes an increase in air temperatures. The EPA estimates that the average air temperature of a city with one million people or more can be 1.8 to 5.4 °F (1.0 to 3.0 °C) warmer than surrounding areas. [ 51 ] Higher temperatures reduce air quality by increasing smog . In Los Angeles, a 1 degree temperature increase increases smog by roughly 3 percent. [ 52 ] Green roofs and other forms of green infrastructure help improve air quality and reduce smog through their use of vegetation. Plants not only provide shade for cooling, but also absorb pollutants like carbon dioxide and help reduce air temperatures through evaporation and evapotranspiration. [ 53 ] By improving water quality and reducing air temperatures and pollution, green infrastructure provides many public health benefits. Cooler and cleaner air can help reduce heat-related illnesses like exhaustion and heatstroke, as well as respiratory problems like asthma. [ 54 ] Cleaner and healthier waterways also mean less illness from contaminated waters and seafood. Greener areas also promote physical activity and can boost mental health. [ 54 ] Green infrastructure is often cheaper than more conventional water management strategies. Philadelphia found that its new green infrastructure plan will cost $1.2 billion over 25 years, compared with the $6 billion a gray infrastructure approach would have cost. [ 55 ] The expenses for implementing green infrastructure are often smaller: planting a rain garden to deal with drainage costs less than digging tunnels and installing pipes. But even when it is not cheaper, green infrastructure still offers good long-term value. A green roof lasts twice as long as a regular roof, and the low maintenance costs of permeable pavement can make it a good long-term investment. [ 56 ] The Iowa town of West Union determined it could save $2.5 million over the lifespan of a single parking lot by using permeable pavement instead of traditional asphalt. [ 57 ] Green infrastructure also improves the quality of water drawn from rivers and lakes for drinking, which reduces the costs associated with purification and treatment, in some cases by more than 25 percent. [ 58 ] Green roofs can also reduce heating and cooling costs, leading to energy savings of as much as 15 percent. [ 59 ]
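Lifetime-cost claims like the West Union parking-lot example rest on simple arithmetic: a higher or similar up-front price can still win once maintenance and replacement over the planning horizon are counted. The sketch below is a minimal, hypothetical comparison; the costs, lifespans, and horizon are illustrative assumptions, not the figures from the cited studies.

```python
# Hypothetical lifetime-cost comparison of a "green" surface (e.g. permeable
# pavement) versus a conventional one (e.g. asphalt). All numbers are made up.

def lifetime_cost(install_cost, annual_maintenance, lifespan_years, horizon_years):
    """Total cost over a planning horizon, including replacement at end of life."""
    installations = -(-horizon_years // lifespan_years)  # ceiling division
    return install_cost * installations + annual_maintenance * horizon_years

HORIZON = 40  # planning horizon in years

asphalt = lifetime_cost(install_cost=100_000, annual_maintenance=4_000,
                        lifespan_years=20, horizon_years=HORIZON)
permeable = lifetime_cost(install_cost=130_000, annual_maintenance=1_500,
                          lifespan_years=40, horizon_years=HORIZON)

print(f"Asphalt over {HORIZON} years:   ${asphalt:,}")
print(f"Permeable over {HORIZON} years: ${permeable:,}")
print("Cheaper option:", "permeable" if permeable < asphalt else "asphalt")
```

Under these assumptions the conventional surface must be replaced once within the horizon while the permeable surface is not, which is exactly the kind of effect that makes green options competitive despite higher initial costs.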
https://en.wikipedia.org/wiki/Sustainable_drainage_system
Sustainable engineering is the process of designing or operating systems such that they use energy and resources sustainably , in other words, at a rate that does not compromise the natural environment, or the ability of future generations to meet their own needs. Sustainable engineering focuses on several core areas. Every engineering discipline is engaged in sustainable design, employing numerous initiatives, especially life cycle analysis (LCA), pollution prevention, Design for the Environment (DfE), Design for Disassembly (DfD), and Design for Recycling (DfR). These are replacing or at least changing pollution control paradigms. For example, the concept of " cap and trade " has been tested and works well for some pollutants. This is a system where companies are allowed to place a "bubble" over a whole manufacturing complex or trade pollution credits with other companies in their industry instead of a "stack-by-stack" and "pipe-by-pipe" approach, i.e. the so-called "command and control" approach. Such policy and regulatory innovations call for some improved technology-based approaches as well as better quality-based approaches, such as leveling out the pollutant loadings and using less expensive technologies to remove the first large bulk of pollutants, followed by higher operation and maintenance (O&M) technologies for the more difficult-to-treat stacks and pipes. But the net effect can be a greater reduction of pollutant emissions and effluents than treating each stack or pipe as an independent entity. This is a foundation for most sustainable design approaches, i.e. conducting a life-cycle analysis, prioritizing the most important problems, and matching the technologies and operations to address them. The problems will vary by size (e.g. pollutant loading), difficulty in treating, and feasibility. The most intractable problems are often those that are small but very expensive and difficult to treat, i.e. less feasible. Of course, as with all paradigm shifts , expectations must be managed from both a technical and an operational perspective. [ 2 ] Historically, sustainability considerations have been approached by engineers as constraints on their designs. For example, hazardous substances generated by a manufacturing process were dealt with as a waste stream that must be contained and treated. The hazardous waste production had to be constrained by selecting certain manufacturing types, increasing waste handling facilities, and if these did not entirely do the job, limiting rates of production. Green engineering recognizes that these processes are often inefficient economically and environmentally, calling for a comprehensive, systematic life cycle approach. [ 3 ] Green engineering attempts to achieve four goals. [ 4 ] Green engineering encompasses numerous ways to improve processes and products to make them more efficient from an environmental and sustainable standpoint. [ 5 ] Every one of these approaches depends on viewing possible impacts in space and time. Architects consider the sense of place. Engineers view the site map as a set of fluxes across the boundary. The design must consider short and long-term impacts. Those impacts beyond the near-term are the province of sustainable design. The effects may not manifest themselves for decades.
In the mid-twentieth century, designers specified the use of what are now known to be hazardous building materials, such as asbestos flooring, pipe wrap and shingles, lead paint and pipes, and even structural and mechanical systems that may have increased the exposure to molds and radon. Those decisions have led to health risks for the inhabitants. It is easy in retrospect to criticize these decisions, but many were made for noble reasons, such as fire prevention and durability of materials. However, it does illustrate that seemingly small impacts, when viewed through the prism of time, can be amplified exponentially in their effects. Sustainable design requires a complete assessment of a design in place and time. Some impacts may not occur until centuries in the future. For example, the extent to which we decide to use nuclear power to generate electricity is a sustainable design decision. The radioactive wastes may have half-lives of hundreds of thousands of years, meaning it will take all these years for half of the radioactive isotopes to decay. Radioactive decay is the spontaneous transformation of one element into another. This occurs by irreversibly changing the number of protons in the nucleus. Thus, sustainable designs of such enterprises must consider highly uncertain futures. For example, even if we properly place warning signs about these hazardous wastes, we do not know if the English language will still be understood. All four goals of green engineering mentioned above are supported by a long-term, life cycle point of view. A life cycle analysis is a holistic approach to consider the entirety of a product, process or activity, encompassing raw materials, manufacturing, transportation, distribution, use, maintenance, recycling, and final disposal. In other words, assessing its life cycle should yield a complete picture of the product. The first step in a life-cycle assessment is to gather data on the flow of a material through an identifiable society. Once the quantities of various components of such a flow are known, the important functions and impacts of each step in the production, manufacture, use, and recovery/disposal are estimated. Thus, in sustainable design, engineers must optimize for variables that give the best performance in temporal frames. [ 4 ] In 2013, the average annual electricity consumption for a U.S. residential utility customer was 10,908 kilowatt hours (kWh), an average of 909 kWh per month. Louisiana had the highest annual consumption at 15,270 kWh, and Hawaii had the lowest at 6,176 kWh. [ 6 ] The residential sector itself uses 18% [ 7 ] of the total energy generated; therefore, incorporating sustainable construction practices there can significantly reduce this number. Basic sustainable construction practices include:
https://en.wikipedia.org/wiki/Sustainable_engineering
Sustainable flooring is produced from sustainable materials (and by a sustainable process) that reduces demands on ecosystems during its life-cycle. [ according to whom? ] This includes harvest, production, use and disposal. It is thought that sustainable flooring creates safer and healthier buildings and guarantees a future for traditional producers of renewable resources that many communities depend on. [ according to whom? ] Several initiatives have led the charge to bring awareness of sustainable flooring as well as healthy buildings (air quality). [ 1 ] [ 2 ] [ 3 ] Below are examples of available, though sometimes less well-known, eco-friendly flooring options. [ 4 ] [ 5 ] [ 6 ] [ 7 ] The Asthma and Allergy Foundation of America recommends those with allergies to dust or other particulates choose flooring with smooth surfaces – such as hardwood, vinyl, linoleum tile or slate. In the U.S., the Building for Environmental and Economic Sustainability (BEES) program of the National Institute of Standards and Technology (NIST) [ 8 ] provides a one-stop source of life cycle assessment-based information about flooring options. Life cycle comparisons of flooring alternatives by research groups around the world consistently show bio-based flooring products to have lower environmental impacts than other types of flooring. The life cycle environmental impacts associated with producing and using flooring alternatives such as cork, linoleum, and solid wood are clearly lower than other alternatives. Wool carpeting and composite marble exhibit the greatest impacts, and impacts linked to typical carpeting used in residential structures are higher than those shown in the BEES system due to the use of a pad under the carpet layer. [ 9 ] The development of life cycle assessment methodology in the early 1990s has shown the environmental advantages of wood and wood-based products. [ 10 ] Wood is a unique and renewable material. Trees absorb carbon during their growing cycle, and this carbon remains stored in products like wood flooring during its service life, thus keeping it out of the atmosphere. At the end of its service life, wood can be reused (in which case the carbon continues to be stored in the wood) or used for fuel. [ 11 ] A life cycle assessment of flooring materials made of solid wood, linoleum and vinyl found the wood flooring had lower energy use and carbon dioxide emissions. It also performed better in environmental impact categories such as resource use, environmental toxin emissions, air pollution emissions and waste generation. [ 12 ] When reclaimed wood is used for wood flooring, [ 13 ] it is taken for reuse from many different sources, including old warehouses, boxcars, coal mines, gymnasiums, homes, wine barrels, historic barns, and more. Wood can also be recovered from rivers in the form of fallen trees along with logs that were once sent downstream by lumber mills. Parquet flooring in herringbone, double herringbone, or chevron style can be reclaimed from buildings typically undergoing demolition or renovation. Often the flooring will have been in its original home for over 100 years. Occasionally very rare woods, including old-growth teak, mahogany, oak and more exotic timbers such as panga, wenge, and bubinga, are utilized. Some of these woods, e.g. Rhodesian teak, are simply no longer in production due to forestry or importation limits.
The process of reclamation includes salvage, transport, cleaning of old bitumen residue, refitting and sanding with finishing by lacquering or oiling. Using reclaimed wood can earn credits towards achieving LEED project certification. Because reclaimed wood is considered recycled content, it meets the Materials & Resources criteria for LEED certification, and because some reclaimed lumber products are FSC certified, they can qualify for LEED credits under the " certified wood " category. [ 14 ] Besides qualifying for LEED points, reclaimed wood is drawing an increasing number of home and business owners, architects, and contractors to choose reclaimed wood flooring for a few significant reasons. [ 15 ] Bamboo flooring is made from a fast-growing renewable "timber" ( bamboo is actually a grass). It is naturally anti-bacterial, water-resistant and extremely durable. DIY installation is easy, as bamboo flooring is available with tongue-and-groove technology familiar in hardwood/laminate alternatives. Bamboo flooring is often more expensive than laminate , though it is generally cheaper than traditional hardwood flooring. Some bamboo floors are less sustainable than others, as they contain the toxic substance formaldehyde (rather than natural-base adhesives). [ 17 ] Cork flooring is made by removing the bark of the Cork Oak (Quercus suber) without harming the tree (if harvested correctly); as such, it is a renewable and sustainable resource. It is naturally anti-microbial and has excellent insulation properties, ensuring minimal heat loss and a comfortable, warm walking surface. Cork is resilient and 'springs back', preventing imprints from heavy traffic and furniture; it also provides excellent noise insulation. While cork itself is low in volatile organic compound (VOC) emissions, it is important to check the finish applied. Cork is not suitable for bathrooms, as it absorbs moisture. [ 18 ] [ 19 ] Linoleum is made from dried and milled flax seeds mixed with other plant material (pine resins, wood flour , ground cork) with a jute backing, all completely natural materials which come from renewable sources and are 100% biodegradable. All by-products and waste are milled and used. Linoleum does not fade, as the pigments are embedded in the structure. It is anti-static, repelling dirt, dust and other small particles, making it hypoallergenic – for this reason it is often used by people with respiratory issues (asthma, allergies). It is also fire-resistant and does not require an additional fire-retardant finish. [ 20 ] Rubber flooring used to be made from the rubber tree , a 100% renewable resource. Today, styrene-butadiene rubber (SBR), a general-purpose synthetic rubber produced from a copolymer of styrene and butadiene, is used for "rubber flooring". It is easy to install and maintain, is anti-static and provides effective sound insulation and vibration reduction. Rubber flooring is also resistant to fading and cigarette burns. Most rubber flooring is made from synthetic rubber, which is not a sustainable product. [ 21 ] There are carpets which are sustainable, using natural fibers such as cotton, sisal, wool, jute and coconut husk. Handmade Citapore rugs include a wide range of sustainable flooring material as these rugs are generally made from cotton (both virgin and recycled), jute, rayon and cotton chenille. It is also possible to have carpet made completely from recycled polyethylene terephthalate used for food/drink containers.
Recycled nylon is also a common material; the process takes carpet made with nylon 6 fibers and recycles it into brand-new nylon carpet. This process can be repeated numerous times, and in 2009 alone, Shaw's Evergreen facility recycled over 100 million pounds of carpet. This is sustainable and it reduces material sent to landfill; further, it uses dyeing methods that are less polluting and require less energy than other flooring. This flooring is sustainable when used alongside eco-friendly adhesive, as some products may have toxic finishes added (stain/fireproofing) that are not considered sustainable. [ 22 ] Coconut timber is a hardwood substitute from coconut palm trees. Coconut palm wood flooring is cheaper than teak, with a hardness comparable to mahogany. Coconut palm wood is made from matured (60 to 80 years old) coconut palm trees that no longer bear fruits.
https://en.wikipedia.org/wiki/Sustainable_flooring
Sustainable Remediation is a term adopted internationally and encompasses sustainable approaches, as described by the Brundtland Report , to the investigation, assessment and management (including institutional controls) of potentially contaminated land and groundwater . [ 1 ] The process of identifying sustainable remediation is defined by the UK Sustainable Remediation Forum [ 2 ] as “ the practice of demonstrating, in terms of environmental , economic and social indicators, that the benefit of undertaking remediation is greater than its impact, and that the optimum remediation solution is selected through the use of a balanced decision-making process .” Sustainable remediation is the practice of considering the effects of implementing an environmental cleanup and incorporating options to minimize the footprint of the cleanup actions. [ 3 ] Opportunities for green and sustainable practices exist throughout the site remediation process of remedial investigation, design, construction, operation, and monitoring. Five core elements are evaluated as part of the environmental footprint analysis, including 1) energy, 2) air and atmosphere, 3) materials and waste, 4) land and ecosystem, and 5) water. [ 4 ] The cleanup remedy is evaluated for each core element to 1) minimize total energy use and maximize renewable energy use, 2) minimize air pollutants and greenhouse gas emissions, 3) minimize water use and impacts to water resources, 4) reduce, reuse, and recycle materials and waste, and 5) minimize land use and protect ecosystems. [ 5 ]
https://en.wikipedia.org/wiki/Sustainable_remediation
Sustained load cracking , or SLC , is a metallurgical phenomenon that occasionally develops in pressure vessels and structural components under stress for sustained periods of time. [ 1 ] It is particularly noted in aluminium pressure vessels such as diving cylinders . [ 2 ] [ 3 ] Sustained load cracking is not a manufacturing defect; it is a phenomenon associated with certain alloys and service conditions. Crack growth is reported to be very slow by Luxfer , a major manufacturer of aluminium high-pressure cylinders. [ 4 ] Cracks are reported to develop over periods in the order of 8 or more years before reaching a stage where the cylinder is likely to leak, which allows timely detection by properly trained inspectors using eddy-current crack-detection equipment. [ 5 ] SLC cracks have been detected in cylinders produced by several manufacturers, including Luxfer, Walter Kidde, and CIG gas cylinders. Most of the cracking has been observed in the neck and shoulder areas of cylinders, though some cracks in the cylindrical part have also been reported. [ 1 ] The phenomenon was first noticed in 1983 in hoop-wound fibre-reinforced aluminium alloy cylinders, which burst in use in the USA. The alloy was 6351 with a relatively high lead content (400 ppm), but even after the lead content was lowered, the problem recurred, and subsequently the problem was detected in monolithic aluminium cylinders. [ 6 ] [ 5 ] The first incidence of an SLC crack in the cylindrical part of a cylinder was reported in 1999. [ 1 ] Neck cracks are readily observed during inspection, but body and shoulder cracks are more difficult to detect. [ 1 ] Neck thread cracks can be non-destructively tested using eddy-current crack-detection equipment. This is reported to be reliable for alloy 6351, but false positives have been reported for tests on alloy 6061. [ 5 ] All of these forms of crack development are the result of the cylinder being subjected to high pressure for prolonged periods. The cracks are intergranular and occur at grain boundaries. There is no evidence of stress corrosion or fatigue. [ 1 ] [ 5 ] The presence of a relatively high lead content has been identified as a contributory factor. Cracking at the grain boundaries is accelerated in the presence of lead. The presence of bismuth is also suspected to be contributory. [ 1 ] Alloy composition has also been found to be a factor. Alloy 6061 has shown good resistance to SLC, as have alloys 5283 and 7060. Manufacturing defects such as folds on the inside surface have been shown to be harmful, particularly for parallel-threaded cylinders. Grain size has been shown to be of relatively minor importance. [ 1 ] [ 5 ]
https://en.wikipedia.org/wiki/Sustained_load_cracking
Sutton's law states that when diagnosing, one should first consider the obvious. It suggests that one should first conduct those tests which could confirm (or rule out) the most likely diagnosis. It is taught in medical schools to suggest to medical students that they might best order tests in that sequence which is most likely to result in a quick diagnosis, hence treatment, while minimizing unnecessary costs. It is also applied in pharmacology : when choosing a drug to treat a specific disease, the drug should be able to reach the site of the disease. It is applicable to any process of diagnosis , e.g. debugging computer programs . Computer-aided diagnosis provides a statistical and quantitative approach. A more thorough analysis will consider the false positive rate of the test and the possibility that a less likely diagnosis might have more serious consequences. A competing principle is the idea of performing simple tests before more complex and expensive tests, moving from bedside tests to blood results and simple imaging such as ultrasound, then more complex imaging such as MRI, and then specialty imaging. The law can also be applied in prioritizing tests when resources are limited, so a test for a treatable condition should be performed before a test for an equally probable but less treatable condition. The law is named after the bank robber Willie Sutton , who reputedly replied to a reporter's inquiry as to why he robbed banks by saying "because that's where the money is." In Sutton's 1976 book Where the Money Was , Sutton denied having said this, [ 1 ] but added that "If anybody had asked me, I'd have probably said it. That's what almost anybody would say.... it couldn't be more obvious." [ 2 ] A similar idea is contained in the physician's adage, " When you hear hoofbeats, think horses, not zebras ."
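The trade-offs described above (pre-test probability, test cost, and how treatable the condition is) can be framed as a simple expected-value ranking. The sketch below is purely illustrative: the probabilities, costs, and benefit weights are hypothetical and it is not clinical guidance, only a way of making the prioritization idea concrete.

```python
# Illustrative test ordering in the spirit of Sutton's law: look first where
# the answer is most likely to be, weighted by how actionable it is and how
# much the test costs. All numbers below are hypothetical.

tests = [
    # (name, pre-test probability, test cost, benefit of early confirmation)
    ("Test A (common, treatable condition)", 0.60, 50, 100),
    ("Test B (less common, treatable)",      0.25, 80, 100),
    ("Test C (rare, limited treatment)",     0.05, 300, 40),
]

def priority(test):
    """Expected benefit per unit cost: higher means test earlier."""
    name, probability, cost, benefit = test
    return probability * benefit / cost

for name, probability, cost, benefit in sorted(tests, key=priority, reverse=True):
    print(f"{name}: priority score {probability * benefit / cost:.2f}")
```

Under this toy scoring, the common, treatable, cheap-to-test condition is investigated first, which is the ordering Sutton's law recommends; changing the weights to penalize false positives or reward ruling out dangerous conditions would shift the ranking, mirroring the "more thorough analysis" mentioned above.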
https://en.wikipedia.org/wiki/Sutton's_law
Suzaku (formerly ASTRO-EII ) was an X-ray astronomy satellite developed jointly by the Institute of Space and Astronautical Science at JAXA and NASA 's Goddard Space Flight Center to probe high-energy X-ray sources, such as supernova explosions , black holes and galactic clusters . It was launched on 10 July 2005 aboard the M-V launch vehicle on the M-V-6 mission. After its successful launch, the satellite was renamed Suzaku after the mythical Vermilion bird of the South . [ 4 ] Just weeks after launch, on 29 July 2005, the first of a series of cooling system malfunctions occurred. These ultimately caused the entire reservoir of liquid helium to boil off into space by 8 August 2005. This effectively shut down the X-ray Spectrometer-2 (XRS-2), which was the spacecraft's primary instrument. The two other instruments, the X-ray Imaging Spectrometer (XIS) and the Hard X-ray Detector (HXD), were unaffected by the malfunction. As a result, another XRS was integrated into the Hitomi X-ray satellite , launched in 2016, which was also lost weeks after launch. A Hitomi successor, XRISM , launched on 7 September 2023, with an X-ray Spectrometer (Resolve) onboard as the primary instrument. On 26 August 2015, JAXA announced that communications with Suzaku had been intermittent since 1 June 2015 and that the resumption of scientific operations would take a lot of work to accomplish, given the spacecraft's condition. [ 5 ] Mission operators decided to end the mission, as Suzaku had exceeded its design lifespan by eight years at this point. The mission came to an end on 2 September 2015, when JAXA commanded the radio transmitters on Suzaku to switch themselves off. [ 6 ] [ 7 ] Suzaku carried high spectroscopic resolution, very wide energy band instruments for detecting signals ranging from soft X-rays up to gamma-rays (0.3–600 keV ). High-resolution spectroscopy and a wide energy band are essential factors in physically investigating high-energy astronomical phenomena, such as black holes and supernovas . One such feature, the K-line (x-ray) , may be key to more direct imaging of black holes. Suzaku discovered "fossil" light from a supernova remnant. [ 9 ] Suzaku was a replacement for ASTRO-E , which was lost in a launch failure. The M-V launch vehicle on the M-V-4 mission launched on 10 February 2000 at 01:30:00 UTC . It experienced a failure of the first-stage engine nozzle 42 seconds into the launch, causing a control system breakdown and underperformance. [ 10 ] [ 11 ] Later stages could not compensate for the underperformance, leaving the payload in a 250-mile (400 km) by 50-mile (80 km) orbit; it subsequently re-entered the atmosphere and crashed with its payload into the Indian Ocean . [ 12 ] [ 13 ]
https://en.wikipedia.org/wiki/Suzaku_(satellite)
Suzanne A. Blum is an American professor of chemistry at the University of California, Irvine. Blum works on mechanistic chemistry, most recently focusing on borylation reactions and the development of single-molecule and single-particle fluorescence microscopy to study organic chemistry and catalysis. She received the American Chemical Society's Arthur C. Cope Scholar Award in 2023. [1] Blum studied chemistry as an undergraduate at the University of Michigan. She participated in multiple teaching and research projects, receiving recognition for an outstanding American Chemical Society student chapter and the UM Alumni Leadership award, and won a National Science Foundation fellowship to attend graduate school at the University of California, Berkeley, where she earned a PhD working with Robert G. Bergman. [2] Blum published multiple first-author papers and received teaching awards during her time at Berkeley. She completed a postdoctoral fellowship at Harvard Medical School in 2006. [3] Blum began her independent research career in 2006 at the University of California, Irvine (UCI). Her research focuses on the development and mechanistic study of reactions in organic, organometallic, catalytic, and materials chemistry, and on monitoring reaction intermediates by a combination of traditional spectroscopy and fluorescence microscopy methods. While many of her initial independent research publications were based on activated complexes of gold or palladium catalysts, [4] she has more recently focused on borylation reactions to make advanced oxygen-, nitrogen-, or sulfur-containing heterocycles, [5] amenable to pharmaceutical and agricultural derivatization. Since starting her independent career, Blum has developed single-molecule and single-particle techniques, often borrowed from biological or physical contexts, to study chemical processes, including observing intermediates in "classical" reactions. [6][7][8] Blum was elected Fellow of the American Association for the Advancement of Science (AAAS) in 2017 for distinguished contributions to molecular chemistry, particularly for the development of synthetic methods and of fluorescence microscopy tools to study chemical processes. [9]
https://en.wikipedia.org/wiki/Suzanne_Blum
The Suzuki reaction or Suzuki coupling is an organic reaction that uses a palladium complex catalyst to cross-couple a boronic acid to an organohalide. [1][2][3] It was first published in 1979 by Akira Suzuki, who shared the 2010 Nobel Prize in Chemistry with Richard F. Heck and Ei-ichi Negishi for their contribution to the discovery and development of palladium-catalyzed cross-couplings in organic synthesis. [4] This reaction is sometimes telescoped with the related Miyaura borylation; the combination is the Suzuki–Miyaura reaction. It is widely used to synthesize polyolefins, styrenes, and substituted biphenyls. In the general scheme of the Suzuki reaction, a carbon–carbon single bond is formed by coupling a halide (R1-X) with an organoboron species (R2-BY2) using a palladium catalyst and a base (a schematic summary of the scheme and catalytic cycle is given at the end of this overview). The organoboron species is usually synthesized by hydroboration or carboboration, allowing for rapid generation of molecular complexity. Several reviews have been published describing advancements and the development of the Suzuki reaction. [5][6][7] The mechanism of the Suzuki reaction is best viewed from the perspective of the palladium catalyst. The catalytic cycle is initiated by the formation of an active Pd(0) catalytic species, A. This species participates in oxidative addition with the halide reagent 1 to form the organopalladium intermediate B. Reaction (metathesis) with base gives intermediate C, which via transmetalation [8] with the boron-ate complex D (produced by reaction of the boronic acid reagent 2 with base) forms the transient organopalladium species E. The reductive elimination step then leads to the formation of the desired product 3 and restores the original palladium catalyst A, completing the catalytic cycle. The Suzuki coupling takes place in the presence of a base, and for a long time the role of the base was not fully understood. The base was first believed to form a trialkyl borate (R3B-OR) in the reaction of a trialkylborane (BR3) with an alkoxide (−OR); this species can be considered more nucleophilic, and therefore more reactive towards the palladium complex present in the transmetalation step. [9][10][11] Duc and coworkers investigated the role of the base in the reaction mechanism of the Suzuki coupling and found that the base has three roles: formation of the palladium complex [ArPd(OR)L2], formation of the trialkyl borate, and acceleration of the reductive elimination step by reaction of the alkoxide with the palladium complex. [9] In most cases the oxidative addition is the rate-determining step of the catalytic cycle. [12] During this step, the palladium catalyst is oxidized from palladium(0) to palladium(II): the catalytically active palladium species A is coupled with the aryl halide substrate 1 to yield an organopalladium complex B. In this step the carbon–halogen bond is broken, so that the palladium becomes bound to both the halogen (X) and the R1 group. Oxidative addition proceeds with retention of stereochemistry with vinyl halides, while giving inversion of stereochemistry with allylic and benzylic halides. [13] The oxidative addition initially forms the cis-palladium complex, which rapidly isomerizes to the trans-complex. [14] The Suzuki coupling occurs with retention of configuration on the double bonds for both the organoboron reagent and the halide. [15] However, the configuration of that double bond, cis or trans, is determined by the cis-to-trans isomerization of the palladium complex in the oxidative addition step, in which the trans palladium complex is the predominant form. When the organoboron is attached to a double bond and is coupled to an alkenyl halide, the product is a diene.
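The schematic below is a compact, text-only summary of the general scheme and of the catalytic cycle described above (oxidative addition, exchange with base, transmetalation, reductive elimination). It is a sketch rather than a balanced mechanism: the ligand set Ln, the alkoxide base RO−, and the boron by-products are left generic, and the letters A–E and numbers 1–3 refer to the species named in the text.

```latex
% Overall transformation (schematic)
\mathrm{R^{1}{-}X} \;+\; \mathrm{R^{2}{-}BY_{2}}
  \;\xrightarrow[\text{base}]{\text{Pd catalyst}}\;
  \mathrm{R^{1}{-}R^{2}}

% Main steps of the catalytic cycle (ligands L_n and base RO^- left generic);
% the ate complex D is formed from the boronic acid 2 and the base.
\begin{aligned}
\text{oxidative addition:}    &\quad \mathrm{Pd^{0}L_{n}}\;(A) + \mathrm{R^{1}X}\;(1)
                                     \longrightarrow \mathrm{R^{1}\,Pd^{II}L_{n}\,X}\;(B)\\
\text{exchange with base:}    &\quad \mathrm{R^{1}\,Pd^{II}L_{n}\,X}\;(B) + \mathrm{RO^{-}}
                                     \longrightarrow \mathrm{R^{1}\,Pd^{II}L_{n}\,OR}\;(C) + \mathrm{X^{-}}\\
\text{transmetalation:}       &\quad \mathrm{R^{1}\,Pd^{II}L_{n}\,OR}\;(C) + \mathrm{[R^{2}BY_{2}(OR)]^{-}}\;(D)
                                     \longrightarrow \mathrm{R^{1}\,Pd^{II}L_{n}\,R^{2}}\;(E) + \text{boron by-products}\\
\text{reductive elimination:} &\quad \mathrm{R^{1}\,Pd^{II}L_{n}\,R^{2}}\;(E)
                                     \longrightarrow \mathrm{R^{1}{-}R^{2}}\;(3) + \mathrm{Pd^{0}L_{n}}\;(A)
\end{aligned}
```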
Transmetalation is an organometallic reaction in which ligands are transferred from one species to another. In the case of the Suzuki coupling, the ligands are transferred from the organoboron species D to the palladium(II) complex C, where the base that was added in the prior step is exchanged for the R2 substituent on the organoboron species to give the new palladium(II) complex E. The exact mechanism of transmetalation in the Suzuki coupling remains to be fully elucidated. The organoboron compounds do not undergo transmetalation in the absence of base, and it is therefore widely believed that the role of the base is to activate the organoboron compound as well as to facilitate the formation of the R1-Pd(II)-OtBu intermediate (C) from the oxidative addition product R1-Pd(II)-X (B). [12] The final step is the reductive elimination step, in which the palladium(II) complex (E) eliminates the product (3) and regenerates the palladium(0) catalyst (A). Using deuterium labelling, Ridgway et al. have shown that the reductive elimination proceeds with retention of stereochemistry. [16] The ligand plays an important role in the Suzuki reaction. Typically, phosphine ligands are used: they increase the electron density at the metal center of the complex and therefore assist the oxidative addition step, while the bulkiness of the phosphine substituents assists the reductive elimination step. N-heterocyclic carbene ligands have also been used in this cross-coupling, owing to the instability of phosphine ligands under Suzuki reaction conditions. [17] N-heterocyclic carbenes are more electron-rich and bulkier than phosphine ligands, so both their steric and electronic properties help to stabilize the active Pd(0) catalyst. [18] The advantages of the Suzuki coupling over other similar reactions include the availability of common boronic acids, mild reaction conditions, and its less toxic nature: boronic acids are less toxic and safer for the environment than organotin and organozinc compounds, and it is easy to remove the inorganic by-products from the reaction mixture. Further, the reaction uses relatively cheap and easily prepared reagents, and the ability to use water as a solvent [19] makes it more economical, eco-friendly, and practical with a variety of water-soluble reagents. A wide variety of reagents can be used in the Suzuki coupling, e.g. aryl or vinyl boronic acids and aryl or vinyl halides, and work has also extended the scope of the reaction to alkyl bromides. [20] In addition to many different types of halides, the reaction also works with pseudohalides such as triflates (OTf) as replacements for halides. The relative reactivity of the halide or pseudohalide coupling partner is: R2–I > R2–OTf > R2–Br ≫ R2–Cl. Boronic esters and organotrifluoroborate salts may be used instead of boronic acids. The catalyst can also be a palladium nanomaterial-based catalyst. [21]
With a novel organophosphine ligand (SPhos), catalyst loadings as low as 0.001 mol% have been reported. [22] These advances, and the overall flexibility of the process, have made the Suzuki coupling widely accepted for chemical synthesis. The Suzuki coupling reaction is scalable and cost-effective for use in the synthesis of intermediates for pharmaceuticals or fine chemicals. [23] The Suzuki reaction was once limited by high catalyst loadings and the limited availability of boronic acids; replacements for halides were also found, further increasing the number of possible coupling partners. Scaled-up reactions have been carried out in the synthesis of a number of important biological compounds, such as CI-1034, which used triflate and boronic acid coupling partners and was run on an 80-kilogram scale in 95% yield. [24] Another example is the coupling of 3-pyridylborane and 1-bromo-3-(methylsulfonyl)benzene, which formed an intermediate used in the synthesis of a potential central nervous system agent; the coupling reaction to form the intermediate produced 278 kilograms in 92.5% yield. [15][23] Significant effort has been put into the development of heterogeneous catalysts for the Suzuki C–C coupling reaction, motivated by the performance gains in the industrial process (eliminating the need to separate the catalyst from the substrate), and recently a Pd single-atom heterogeneous catalyst has been shown to outperform the industry-default homogeneous Pd(PPh3)4 catalyst. [25] The Suzuki coupling has frequently been used in syntheses of complex compounds; [26][27] for example, it has been applied to a citronellal derivative in the synthesis of caparratriene, a natural product that is highly active against leukemia. [28] Various catalytic uses of metals other than palladium (especially nickel) have been developed. [29] The first nickel-catalyzed cross-coupling reaction was reported by Percec and co-workers in 1995, using aryl mesylates and boronic acids. [30] Although a larger amount of nickel catalyst was needed (around 5 mol%), nickel is not as expensive or as precious a metal as palladium. The nickel-catalyzed Suzuki coupling also allowed the reaction of a number of compounds that did not work, or worked poorly, in the palladium-catalyzed system, [29] and the use of nickel catalysts has allowed for electrophiles that proved challenging for the original palladium-catalyzed Suzuki coupling, including substrates such as phenols, aryl ethers, esters, phosphates, and fluorides. [29] Investigation into nickel-catalyzed cross-coupling continued after these first examples, and the scope of the reaction grew with the research interest. Miyaura and Inada reported in 2000 that a cheaper nickel catalyst could be used for the cross-coupling, with triphenylphosphine (PPh3) instead of the more expensive ligands used previously. [31] However, the nickel-catalyzed cross-coupling still required high catalyst loadings (3-10%), required excess ligand (1-5 equivalents), and remained sensitive to air and moisture. [29] Han and co-workers addressed these problems by developing a method using low amounts of nickel catalyst (<1 mol%) and no additional equivalents of ligand. [32]
Wu and co-workers reported in 2011 a highly active nickel catalyst for the cross-coupling of aryl chlorides that required only 0.01-0.1 mol% of nickel. They also showed that the catalyst could be recycled up to six times with virtually no loss of catalytic activity. [33] The catalyst was recyclable because it was a phosphine nickel nanoparticle catalyst (G3DenP-Ni) made from dendrimers. Both the palladium- and nickel-catalyzed Suzuki coupling reactions have their own advantages and disadvantages. Apart from the Pd and Ni catalyst systems, cheap and non-toxic metal sources such as iron and copper [34] have been used in the Suzuki coupling reaction; the Bedford research group [35] and the Nakamura research group [36] have worked extensively on developing the methodology of the iron-catalyzed Suzuki coupling reaction. Ruthenium is another metal that has been used in the Suzuki coupling reaction. [37] Nickel catalysis can also construct C-C bonds from amides. Despite the inherently inert nature of amides as synthons, the following methodology can be used to prepare C-C bonds. The coupling procedure is mild and tolerant of a myriad of functional groups, including amines, ketones, heterocycles and groups with acidic protons. This technique can also be used to prepare bioactive molecules and to unite heterocycles in controlled ways through judicious sequential cross-couplings; a general review of the reaction scheme is given in reference [38]. The synthesis of a tubulin-binding compound (an antiproliferative agent) was carried out on a gram scale using a trimethoxybenzamide and an indolyl pinacol boronate coupling partner. [38] Aryl boronic acids are comparatively cheap, and a wide variety of aryl boronic acids are commercially available; hence, they are widely used as the organoborane partner in the Suzuki reaction. Aryltrifluoroborate salts are another frequently used class of organoboranes, because they are less prone to protodeboronation than aryl boronic acids; they are easy to synthesize and can be easily purified. [39] Aryltrifluoroborate salts can be formed from boronic acids by treatment with potassium hydrogen fluoride, and can then be used in the Suzuki coupling reaction. [40] The Suzuki coupling reaction differs from other coupling reactions in that it can be run in biphasic organic-water systems, [41] in water only, [19] or with no solvent. [42] This increased the scope of coupling reactions, as a variety of water-soluble bases, catalyst systems, and reagents could be used without concern over their solubility in organic solvents. The use of water as a solvent is also attractive because of its economic and safety advantages. Frequently used solvent systems for the Suzuki coupling include toluene, [43] THF, [44] dioxane, [44] and DMF. [45] The most frequently used bases are K2CO3, [41] KOtBu, [46] Cs2CO3, [47] K3PO4, [48] NaOH, [49] and NEt3. [50]
https://en.wikipedia.org/wiki/Suzuki_reaction
In condensed matter physics, the Su–Schrieffer–Heeger (SSH) model or SSH chain is a one-dimensional lattice model that exhibits topological features. [1] It was devised by Wu-Pei Su, John Robert Schrieffer, and Alan J. Heeger in 1979 to describe the increase in electrical conductivity of a polyacetylene polymer chain when doped, based on the existence of solitonic defects. [2][3] It is a quantum-mechanical tight-binding approach that describes the hopping of spinless electrons in a chain with two alternating types of bonds. [1] Electrons on a given site can only hop to adjacent sites. [1] Depending on the ratio between the hopping energies of the two possible bonds, the system can be either in a metallic (conductive) phase or in an insulating phase. The finite SSH chain can behave as a topological insulator, depending on the boundary conditions at the edges of the chain. For the finite chain there exists an insulating phase that is topologically non-trivial and allows for the existence of edge states that are localized at the boundaries. [1]

The model describes a half-filled one-dimensional lattice with two sites per unit cell, A and B, which corresponds to a single electron per unit cell. In this configuration each electron can either hop inside the unit cell or hop to an adjacent cell through nearest-neighbour sites. As with any 1D model with two sites per cell, the dispersion relation has two bands (usually called the optical and acoustic bands). If the bands do not touch, there is a band gap; if the gap lies at the Fermi level, the system is considered to be an insulator. The tight-binding Hamiltonian of a chain with N unit cells can be written as [1]

{\displaystyle H=v\sum _{m=1}^{N}{\big (}|m,B\rangle \langle m,A|+{\text{h.c.}}{\big )}+w\sum _{m=1}^{N}{\big (}|m+1,A\rangle \langle m,B|+{\text{h.c.}}{\big )},}

where h.c. denotes the Hermitian conjugate, v is the energy required to hop from a site A to B inside the unit cell, and w is the energy required to hop between unit cells. Here the Fermi energy is fixed to zero. The dispersion relation for the bulk can be obtained through a Fourier transform. Taking periodic boundary conditions |N + 1, X⟩ = |1, X⟩, where X = A, B, we pass to k-space by defining

{\displaystyle |k,X\rangle ={\frac {1}{\sqrt {N}}}\sum _{m=1}^{N}e^{imk}\,|m,X\rangle ,}

which results in the following Hamiltonian

{\displaystyle H(k)={\begin{pmatrix}0&v+we^{-ik}\\v+we^{ik}&0\end{pmatrix}},}

where the eigenenergies are easily calculated as

{\displaystyle E_{\pm }(k)=\pm \,|v+we^{-ik}|=\pm {\sqrt {v^{2}+w^{2}+2vw\cos k}},}

and the corresponding eigenstates are

{\displaystyle |\pm (k)\rangle ={\frac {1}{\sqrt {2}}}{\begin{pmatrix}e^{i\phi (k)}\\\pm 1\end{pmatrix}},}

where the phase is defined through

{\displaystyle e^{i\phi (k)}={\frac {v+we^{-ik}}{|v+we^{-ik}|}}.}

The eigenenergies are symmetrical under the swap v ↔ w, and the dispersion relation is gapped (insulating) except when v = w (metallic). Judging from the energies alone, the problem appears symmetric about v = w: the case v > w has the same dispersion as v < w. Nevertheless, not all properties of the system are symmetrical; for example, the eigenvectors are very different under the swap v ↔ w. It can be shown, for example, that the Berry connection of the lower band, {\displaystyle A_{-}(k)=i\langle -(k)|\partial _{k}|-(k)\rangle }, integrated over the Brillouin zone k ∈ [−π, π], produces different winding numbers: [1]

{\displaystyle g={\frac {1}{\pi }}\int _{-\pi }^{\pi }A_{-}(k)\,dk={\begin{cases}0,&v>w,\\1,&v<w,\end{cases}}}

showing that the two insulating phases, v > w and v < w, are topologically different (small changes in v and w change A−(k) but not g over the Brillouin zone). The winding number remains undefined for the metallic case v = w. This difference in topology means that one cannot pass from one insulating phase to the other without closing the gap (passing through the metallic phase). This phenomenon is called a topological phase transition. [1]
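As a numerical illustration of the bulk results above (an independent sketch, not taken from the cited reference, with arbitrary parameter values), the short script below evaluates the two-band dispersion and estimates the winding of h(k) = v + w e^{−ik} around the origin of the complex plane. The sign of the computed winding depends on the orientation in which the Brillouin zone is traversed, so only its magnitude is used to distinguish the two insulating phases here.

```python
import numpy as np

# Bulk SSH chain: the 2x2 Bloch Hamiltonian has off-diagonal element h(k) = v + w*exp(-ik),
# giving the two bands E_±(k) = ±|h(k)|.  All parameter values below are arbitrary.

def dispersion(k, v, w):
    """Upper-band energy E_+(k) = |v + w e^{-ik}|; the lower band is its negative."""
    return np.abs(v + w * np.exp(-1j * k))

def winding_number(v, w, nk=2001):
    """Winding of h(k) around the origin as k runs once through the Brillouin zone.
    Its magnitude is 0 for v > w and 1 for v < w; the sign depends on orientation."""
    k = np.linspace(-np.pi, np.pi, nk)
    h = v + w * np.exp(-1j * k)
    total_phase = np.diff(np.unwrap(np.angle(h))).sum()
    return int(round(total_phase / (2 * np.pi)))

k = np.linspace(-np.pi, np.pi, 401)
for v, w in [(1.0, 0.5), (0.5, 1.0), (1.0, 1.0)]:
    gap = 2 * dispersion(k, v, w).min()   # the gap 2|v - w| closes at k = ±π when v = w
    if v != w:
        print(f"v={v}, w={w}: gap = {gap:.2f}, |winding| = {abs(winding_number(v, w))}")
    else:
        print(f"v={v}, w={w}: gap = {gap:.2f} (metallic point, winding undefined)")
```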
The physical consequences of having different winding numbers become more apparent for a finite chain with an even number N of lattice sites. It is much harder to diagonalize the Hamiltonian analytically in the finite case, due to the lack of translational symmetry. [1] There exist two limiting cases for the finite chain, either v = 0 or w = 0. In both of these cases the chain is clearly an insulator, as the chain is broken into dimers (dimerized). However, one of the two cases consists of N/2 dimers, while the other consists of (N − 2)/2 dimers and two unpaired sites at the edges of the chain. In the latter case, as there is no on-site energy, an electron that finds itself on either of the two edge sites has zero energy. So either the case v = 0 or the case w = 0 necessarily has two eigenstates with zero energy, while the other case has no zero-energy eigenstates. Contrary to the bulk case, the two limiting cases are not symmetrical in their spectrum. By plotting the eigenstates of the finite chain as a function of position, one can show that there are two distinct kinds of states: for non-zero eigenenergies the corresponding wavefunctions are delocalized all along the chain, while the zero-energy eigenstates show amplitudes localized at the edge sites. The latter are called edge states. Even though their eigenenergies lie in the gap, the edge states are localized, and the system remains insulating. By plotting the spectrum as a function of v for a fixed value of w, the spectrum is divided into two insulating regions separated by the metallic crossing at w = v. The spectrum is gapped in both insulating regions, but one of the regions shows zero-energy eigenstates and the other does not, corresponding to the two dimerized limits. The existence of edge states in one region and not in the other demonstrates the difference between the insulating phases, and it is this sharp transition at w = v that corresponds to a topological phase transition. [1] The bulk case allows one to predict which insulating region will present edge states, depending on the value of the winding number in the bulk. For the region where the bulk winding number is g = 1, the corresponding finite chain with an even number of sites presents edge states, while for the region where the bulk winding number is g = 0, the corresponding finite chain does not. This relation between winding numbers in the bulk and edge states in the finite chain is called the bulk–edge correspondence. [1]
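The edge states described above can also be checked numerically. The sketch below is again illustrative, with an arbitrary chain length and hopping values chosen in the v < w regime: it builds the open-chain Hamiltonian with alternating hoppings, diagonalizes it, and verifies that the two eigenvalues closest to zero correspond to states whose weight is concentrated near the ends of the chain. Swapping v and w should make these in-gap states disappear.

```python
import numpy as np

# Finite, open SSH chain with an even number of sites and alternating bonds
# v, w, v, w, ...  (v inside a unit cell, w between unit cells).
# Values are arbitrary and chosen with v < w, where edge states are expected.

def ssh_chain(n_sites, v, w):
    """Real-space Hamiltonian of the open SSH chain (no on-site energies)."""
    H = np.zeros((n_sites, n_sites))
    for i in range(n_sites - 1):
        t = v if i % 2 == 0 else w
        H[i, i + 1] = H[i + 1, i] = t
    return H

N, v, w = 40, 0.5, 1.0
energies, states = np.linalg.eigh(ssh_chain(N, v, w))

# The two eigenvalues closest to zero lie inside the bulk gap ...
zero_modes = np.argsort(np.abs(energies))[:2]
print("two smallest |E|:", energies[zero_modes])

# ... and the corresponding eigenvectors are concentrated near the chain ends.
for idx in zero_modes:
    edge_weight = np.sum(states[:4, idx] ** 2) + np.sum(states[-4:, idx] ** 2)
    print(f"state {idx}: weight on the four outermost sites of each end = {edge_weight:.2f}")
```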
https://en.wikipedia.org/wiki/Su–Schrieffer–Heeger_model
The book Svenska Spindlar or Aranei Svecici (Swedish and Latin, respectively, for "Swedish spiders") is one of the major works of the Swedish arachnologist and entomologist Carl Alexander Clerck and was first published in Stockholm in the year 1757. It was the first comprehensive book on the spiders of Sweden and one of the first regional monographs of a group of animals worldwide. The full title of the work is Svenska Spindlar uti sina hufvud-slägter indelte samt under några och sextio särskildte arter beskrefne och med illuminerade figurer uplyste – Aranei Svecici, descriptionibus et figuris æneis illustrati, ad genera subalterna redacti, speciebus ultra LX determinati, [1] ("Swedish spiders into their main genera separated, and as sixty and a few particular species described and with illuminated figures illustrated"), and it included 162 pages of text (eight pages were unpaginated) and six colour plates. It was published in Swedish, with a Latin translation printed in a slightly smaller font below the Swedish text. Clerck described in detail 67 species of Swedish spiders, [2] and for the first time in a zoological work consistently applied binomial nomenclature as proposed by Carl Linnaeus. Linnaeus had originally invented this system for botanical names in his 1753 work Species Plantarum, and presented it again in 1758 in the 10th edition of Systema Naturae for more than 4,000 animal species. Svenska Spindlar is the only pre-Linnaean source to be recognised as a taxonomic authority for such names. Clerck explained in the last chapter of his work (the 9th chapter of the 2nd part) that, in contrast to previous authors, he used the term "spider" in the strict sense, for animals possessing eight eyes and a separated prosoma and opisthosoma, and that his concept of this group of animals did not include Opiliones (because they have two eyes and a broadly joined prosoma and opisthosoma) or other groups of arachnids. For all spiders Clerck used a single generic name (Araneus), to which was added a specific name consisting of only one word. Each species was presented in the Swedish text with its Latin scientific name, [3] followed by detailed information containing the exact dates on which he had found the animals and a detailed description of the eyes, legs and body. The differences between the sexes were also described. Each species was illustrated in impressively accurate drawings printed on coloured copper plates, which were bound at the end of the volume. Because of the exceptionally thorough treatment of the spider species, the scientific names proposed by Clerck (which were adopted by Carl Linnaeus in his Systema Naturae in 1758 with only minor modifications) had traditionally been recognized by arachnologists as binomial and available. In 1959 the ICZN Commission decided that Clerck's work should be available for zoological nomenclature, [4] but the International Code of Zoological Nomenclature did not mention Clerck's work. [Note 1] Only after 1999 was this officially recognized in the Code. [5] This means that in case of doubt the spelling of a spider name as given in Clerck's 1757 work has priority over that proposed by Linnaeus in 1758 (an example is Araneus instead of Aranea), [6] and that Clerck's spiders were the first animals in modern zoology to have obtained an available scientific name in the Linnaean system. [5] In the late 19th century, Clerck's 1757 work was commonly accepted as the first application of binomial nomenclature to spiders. [7]
In 1959 the ICZN Commission ruled that the date 1758 should be used for Clerck's names; [4] this date was reaffirmed for Clerck's names in the 4th edition of the International Code of Zoological Nomenclature in 1999. [5] In a complete binomial name with author and year, the year normally corresponds to the year of publication of the original source; since 2000, the ICZN Code has included an exception to this very basic rule. [8] From the beginning, the new provision in the Code was misunderstood by many researchers, who believed that by setting the date for Clerck's work to 1758 (overriding its true date of 1757) and the date for Systema Naturae to 1 January 1758, the priority had been changed. In 2007 a case was even brought before the Commission because researchers were no longer sure whether the generic name should be Araneus Clerck or Aranea Linnaeus; [9] in their view, the year 1758 assigned to Clerck's Svenska Spindlar could be interpreted to mean that the Linnaean work of 1 January 1758 should have priority. In 2009 the Commission saw itself forced to repeat once more, although this was already explicit in the Code's Article 3.1, that the name Araneus established by Clerck has priority and is to be used for the genus. [10] Svenska Spindlar treats 67 species of spider, arranged in the following chapters by family; their current identities follow Platnick (2000–2010): [11] Chapter 2 (Araneidae, Tetragnathidae); Chapter 3 (Theridiidae, Nesticidae, Linyphiidae); Chapter 4 (Agelenidae, Clubionidae); Chapter 5 (Lycosidae, Pisauridae); Chapter 6 (Salticidae); Chapter 7 (Thomisidae, Philodromidae, Sparassidae); Chapter 8 (Cybaeidae).
https://en.wikipedia.org/wiki/Svenska_Spindlar