Mohammad Khaja Nazeeruddin (born 1957 in Thumboor, Andhra Pradesh, India) is an Indian-Swiss chemist and materials scientist who conducts research on perovskite solar cells, dye-sensitized solar cells, and light-emitting diodes. He is a professor at EPFL (École Polytechnique Fédérale de Lausanne) and the director of the Laboratory for Molecular Engineering of Functional Materials at the School of Basic Sciences. [ 1 ] [ 2 ] [ 3 ]
Nazeeruddin received a PhD in chemistry from Osmania University in Hyderabad, India, where he then served as a lecturer for two years. He next joined the Central Salt and Marine Chemicals Research Institute in Bhavnagar, India. In 1987 he joined EPFL, first as a postdoctoral fellow, and then held several research fellow positions over seven years. In 2012, he was promoted to "Maître d'Enseignement et de Recherche" (senior lecturer). Since 2014 he has been a full professor at EPFL and head of the Laboratory for Molecular Engineering of Functional Materials at the School of Basic Sciences, based at EPFL's Valais campus. [ 1 ] [ 2 ] [ 3 ]
While at EPFL, he has also held several affiliated and visiting positions, including World Class University Professor (2009–2014) and BK21 PLUS professor (2014–2019) in the Department of Advanced Materials Chemistry at Korea University , visiting professor at King Abdulaziz University in Jeddah, Saudi Arabia (2014–2021) and at North China Electric Power University (2014–2021), and eminent professor in Brunei. [ 4 ] [ 5 ] [ 6 ]
Nazeeruddin's research focuses on the chemical engineering of functional materials for photovoltaic and light-emitting applications, such as perovskite and dye-sensitized solar cells and light-emitting diodes. [ 7 ]
His team conducts research on the inorganic chemistry of ruthenium sensitizers that convert solar energy using high-surface-area nanocrystalline mesoscopic films of oxide semiconductors. These tailored sensitizers have stimulated further work in dye-sensitized solar cell research. The team has synthesized several ruthenium sensitizers (N3, N719 and N749), [ 8 ] [ 9 ] [ 10 ] donor-π-bridge-acceptor porphyrin sensitizers, [ 11 ] and near-IR sensitizers. [ 12 ] A further field of their research encompasses organic light-emitting diodes (OLEDs), which are used in the fabrication of digital displays. [ 13 ] They have contributed novel blue, green, and red phosphorescent iridium emitters for OLEDs. [ 14 ]
His laboratory, located at the EPFL-Sion Energy Center, focuses on organic and inorganic lead halide perovskite solar cells and light-emitting diode research. The laboratory has fabricated blue, green, and red PLEDs with high external quantum efficiency. His group has investigated solution-processed (one-step and sequential deposition) and sublimation-deposited perovskite solar cells, obtaining a power conversion efficiency of 25%, and has developed a perovskite solar cell module (area 26.02 cm2) with an efficiency of 22.4%. Their research has been covered in several international news outlets. [ 15 ] [ 16 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ] [ 21 ] However, perovskite solar cell technology is still at the development stage: these cells and modules are not commercially deployed, their long-term stability has yet to be demonstrated, and numerous leading research groups worldwide are working on these problems.
Nazeeruddin is a co-author of numerous peer-reviewed papers and book chapters, and a co-inventor on many patents. [ 22 ] Thomson Reuters named him a "Highly Cited Researcher" in chemistry, materials science and engineering in 2016 and 2017, and included him in its list of the "World's Most Influential Scientific Minds" across all scientific domains. [ 23 ] [ 24 ] According to the ISI listing, he is one of the most cited chemists, with more than 196,000 citations and an h-index of 190. [ 25 ] His group has earned worldwide recognition for its work on perovskite solar cells. The Times Higher Education named him among "the top 10 researchers in the world working on the high impact perovskite materials and devices". [ 26 ] Nazeeruddin was also included among the top 2% most-cited scientists in the world on the list published by Stanford University in October 2022.
He is an elected member of the European Academy of Sciences, [ 27 ] a fellow of the Royal Society of Chemistry, [ 28 ] a fellow of the Telangana Academy of Sciences, and a member of the Swiss Chemical Society. He was awarded the 34th Khwarizmi International Award in Basic Sciences in 2021.
Since 2018 he has been a jury member of the Rei Jaume I foundation in Spain. [ 29 ]
He is the recipient of the best paper award from the journal Inorganics, [ 30 ] the EPFL Excellence Prize (1998 and 2006), the Brazilian FAPESP fellowship award (1999), the Japanese Government Science & Technology Agency Fellowship (1998), and the Government of India National Scholar award (1987–1989). [ 31 ]
He is editor-in-chief of Chemistry of Inorganic Materials, an advisory board member at Advanced Functional Materials, [ 32 ] an associate editor at EnergyChem, [ 33 ] an editorial advisory board member at Scientific Reports, [ 34 ] an editorial advisory board member at Solar RRL, [ 35 ] and an editorial advisory board member at Artificial Photosynthesis. | https://en.wikipedia.org/wiki/Mohammad_Khaja_Nazeeruddin |
Mohammad Saleh Zarepour is an Iranian philosopher and senior lecturer in the Department of Philosophy at the University of Manchester .
He is a winner of the Philip Leverhulme Prize (2023) and is known for his work on medieval Islamic philosophy , philosophy of religion , philosophy of language , philosophy of mathematics , and philosophy of logic . [ 1 ] [ 2 ] [ 3 ] [ 4 ] Zarepour is a Life Member of Clare Hall College. [ 5 ]
| https://en.wikipedia.org/wiki/Mohammad_Saleh_Zarepour |
Mohammed Munim al-Izmerly was an Iraqi chemistry professor who allegedly experimented with poisons on prisoners while Saddam Hussein was president of Iraq. He died in US custody in early February 2004, ten months after his arrest.
In an October 6, 2005 report by Charles A. Duelfer , a CIA adviser who led the arms-hunting Iraq Survey Group , Izmerly is alleged to have been a key figure in training other Iraqi chemists trying to make poison gas for military use in the 1970s, to have led the effort to produce mustard gas , and to have been chief of the chemical section of the Iraq Intelligence Service in the 1980s. According to the report, Izmerly's ex-colleagues told interrogators that he was head of human experiments and tested substances for use on assassination targets by giving poisoned food or injections to about 100 political and other prisoners. The report states that Izmerly admitted giving poison to 20 people as part of the experimental program.
A note from US security forces on the body bag containing Izmerly's body, which was delivered to the Baghdad morgue in February 2004, an estimated two weeks after his death, initially stated that the death was due to brainstem compression . Izmerly's family stated that three weeks earlier they had visited him in the US prison at Baghdad airport and that he had seemed in good health.
An autopsy was commissioned by Izmerly's family and carried out by Dr Faik Amin Baker, director of Baghdad hospital's forensic department, who said that Izmerly's death was caused by a sudden hit to the back of his head and that the cause of death was blunt trauma. It was uncertain exactly how he died, but someone had hit him from behind, possibly with a bar or a pistol . It was also found that US doctors had made a 20 cm incision in his skull, apparently in an attempt to save his life after the initial blow.
Izmerly's daughter, Rana Izmerly, alleged to the Guardian that her father was murdered: "The evidence is clear. It suggests the Americans killed him and then tried to hide what they had done. ... You offer no proof that he did something wrong, you refuse him a lawyer and then you kill him. Why?" Izmerly's family presented its autopsy findings to an Iraqi judge, who reportedly claimed he lacked the power to investigate, stating: "You can't do anything to the coalition. What happened is history." | https://en.wikipedia.org/wiki/Mohammed_Munim_al-Izmerly |
Mohammed Nasser Al Ahbabi is the Director General of United Arab Emirates Space Agency . [ 1 ] Before joining the UAE Space Agency, Ahbabi was part of a UAE Armed Forces think tank project, where he worked alongside military and government stakeholders, on concepts and technologies in Smart Defense and Cyber Warfare, amongst others. He has an active role in ITU-R and has served as the head of YAHSAT MilSatCom Project. [ 2 ] [ 3 ]
In 1998, Mohammed Nasser Al Ahbabi obtained a degree in electronic engineering from the University of California , United States. [ 2 ] He obtained a master's degree in communications from the University of Southampton , United Kingdom, in 2001, and a PhD in laser and fibre optics [ 4 ] from the same university in 2005. [ 3 ]
Mohammed Nasser Al Ahbabi initially served as a telecommunications officer for the UAE Armed Forces. Concurrently, he worked as a coordinator for Dubai Internet City . From 2005 to 2012, he was a telecommunications officer at Sharyan Al Doea Network and a project manager in the military division at Al Yah Satellite Communications . He is part [ 5 ] of the Hope Mars Mission team, which plans to send the Hope space exploration probe into Mars orbit by 2020.
Mohammed Nasser Al Ahbabi was ranked 43rd in the Top 100 Most Powerful Arabs 2018 list compiled by Gulf Business. [ 6 ] He was ranked 13th in Richtopia's list of the world's 100 most influential figures in the space exploration sector. [ 7 ] [ 8 ] | https://en.wikipedia.org/wiki/Mohammed_Nasser_Al_Ahbabi |
On May 31, 1989, seven amateur astronomers held an organizational meeting in the Solar Classroom at Hamilton College in Clinton, New York, to plan the formation of an astronomy club. By-laws were prepared by Richard Somer, approved by the group, and the Mohawk Valley Astronomical Society was born. The founders were Richard Somer, Arlene Somer, Joe Perry, Sam Falvo, Dan Pavese, John Ossowski, and Phil Marasco.
Over the next several weeks, Dick Somer designed and printed brochures for prospective new members. The group discussed ideas for publicity campaigns to promote the Club, and considered schedules for Club and public observing sessions. Dan Pavese suggested "Telescopic Topics" as the name of the newsletter, and it was adopted.
The first public meeting was held in Hamilton College's Solar Classroom on July 12, and the club was off to a good start. The previous day, an article appeared in the Utica "Observer-Dispatch" inviting the public to attend, and the local community responded heartily. Over 60 people came to the first meeting. The evening's speakers presented slides and photographs of the sky's most noted features, from the Moon to distant galaxies, the Milky Way, Comet Halley as seen from the Mohawk Valley, and a brilliant aurora. Once the meeting adjourned, members and visitors used the telescope in Hamilton College's Observatory to view the Moon and Saturn.
August 1989 marked the first publication of the MVAS newsletter, "Telescopic Topics." The newsletter is published monthly and provides a summary of the minutes of the previous month's meeting, reminders of upcoming observing sessions, and astronomy conferences as well as the latest news from the scientific community such as updates on NASA missions and the discovery of new celestial objects. [ 1 ]
Membership in MVAS is open to anyone with an interest in astronomy, from beginners to the more experienced.
Meetings are held the second Wednesday of each month (except March and July) at 7:30 PM at the Kirkland Senior Center, 2 Mill St., Clark Mills, NY, unless otherwise noted. The April meeting is held on a Saturday evening in conjunction with a social event at a local restaurant. The July Star-B-Que is held on a weekend. All meetings are open to the public. Guests are encouraged to attend. Refreshments are provided by MVAS members.
Stargazing events and talks are held throughout the area.
The club publishes a monthly newsletter called "Telescopic Topics." | https://en.wikipedia.org/wiki/Mohawk_Valley_Astronomical_Society |
Mohr's circle is a two-dimensional graphical representation of the transformation law for the Cauchy stress tensor .
Mohr's circle is often used in calculations relating to mechanical engineering for materials' strength , geotechnical engineering for strength of soils , and structural engineering for strength of built structures. It is also used for calculating stresses in many planes by reducing them to vertical and horizontal components. The planes on which the principal stresses act are called principal planes ; Mohr's circle can be used to find the principal planes and the principal stresses graphically, and is one of the easiest ways to do so. [ 1 ]
After performing a stress analysis on a material body assumed as a continuum , the components of the Cauchy stress tensor at a particular material point are known with respect to a coordinate system . The Mohr circle is then used to determine graphically the stress components acting on a rotated coordinate system, i.e., acting on a differently oriented plane passing through that point.
The abscissa and ordinate ($\sigma_\mathrm{n}$, $\tau_\mathrm{n}$) of each point on the circle are the magnitudes of the normal stress and shear stress components, respectively, acting on the rotated coordinate system. In other words, the circle is the locus of points that represent the state of stress on individual planes at all their orientations, where the axes represent the principal axes of the stress element.
19th-century German engineer Karl Culmann was the first to conceive a graphical representation for stresses while considering longitudinal and vertical stresses in horizontal beams during bending . His work inspired fellow German engineer Christian Otto Mohr (the circle's namesake), who extended it to both two- and three-dimensional stresses and developed a failure criterion based on the stress circle. [ 2 ]
Alternative graphical methods for the representation of the stress state at a point include Lamé's stress ellipsoid and Cauchy's stress quadric .
The Mohr circle can be applied to any symmetric 2×2 tensor matrix, including the strain and moment of inertia tensors.
Internal forces are produced between the particles of a deformable object, assumed as a continuum , as a reaction to applied external forces, i.e., either surface forces or body forces . This reaction follows from Euler's laws of motion for a continuum, which are equivalent to Newton's laws of motion for a particle. A measure of the intensity of these internal forces is called stress . Because the object is assumed as a continuum, these internal forces are distributed continuously within the volume of the object.
In engineering, e.g., structural , mechanical , or geotechnical , the stress distribution within an object, for instance stresses in a rock mass around a tunnel, airplane wings, or building columns, is determined through a stress analysis . Calculating the stress distribution implies the determination of stresses at every point (material particle) in the object. According to Cauchy , the stress at any point in an object (Figure 2), assumed as a continuum, is completely defined by the nine stress components $\sigma_{ij}$ of a second-order tensor of type (2,0) known as the Cauchy stress tensor , $\boldsymbol{\sigma}$:

$$\boldsymbol{\sigma} = \begin{bmatrix} \sigma_{11} & \sigma_{12} & \sigma_{13} \\ \sigma_{21} & \sigma_{22} & \sigma_{23} \\ \sigma_{31} & \sigma_{32} & \sigma_{33} \end{bmatrix} \equiv \begin{bmatrix} \sigma_{xx} & \tau_{xy} & \tau_{xz} \\ \tau_{yx} & \sigma_{yy} & \tau_{yz} \\ \tau_{zx} & \tau_{zy} & \sigma_{zz} \end{bmatrix}$$
After the stress distribution within the object has been determined with respect to a coordinate system $(x,y)$, it may be necessary to calculate the components of the stress tensor at a particular material point $P$ with respect to a rotated coordinate system $(x',y')$, i.e., the stresses acting on a plane with a different orientation passing through that point of interest, forming an angle with the coordinate system $(x,y)$ (Figure 3). For example, it is of interest to find the maximum normal stress and maximum shear stress, as well as the orientation of the planes where they act. To achieve this, it is necessary to perform a tensor transformation under a rotation of the coordinate system. From the definition of tensor , the Cauchy stress tensor obeys the tensor transformation law . A graphical representation of this transformation law for the Cauchy stress tensor is the Mohr circle for stress.
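The transformation law itself is easy to apply numerically. A minimal sketch in Python (assuming NumPy; the function name and sample values are this illustration's own):

```python
import numpy as np

def rotate_stress(sigma, theta):
    """Components of a 2D Cauchy stress tensor expressed in coordinate
    axes rotated counterclockwise by theta (radians): sigma' = Q sigma Q^T."""
    c, s = np.cos(theta), np.sin(theta)
    Q = np.array([[c, s],
                  [-s, c]])        # rotation matrix for the coordinate axes
    return Q @ sigma @ Q.T

# Example: sigma_x = 100, sigma_y = 20, tau_xy = 30, axes rotated by 35 degrees
sigma = np.array([[100.0, 30.0],
                  [30.0, 20.0]])
print(rotate_stress(sigma, np.deg2rad(35.0)))
```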
In two dimensions, the stress tensor at a given material point $P$ with respect to any two perpendicular directions is completely defined by only three stress components. For the particular coordinate system $(x,y)$ these stress components are: the normal stresses $\sigma_x$ and $\sigma_y$, and the shear stress $\tau_{xy}$. From the balance of angular momentum, the symmetry of the Cauchy stress tensor can be demonstrated; this symmetry implies that $\tau_{xy}=\tau_{yx}$. Thus, the Cauchy stress tensor can be written as:

$$\boldsymbol{\sigma} = \begin{bmatrix} \sigma_x & \tau_{xy} \\ \tau_{xy} & \sigma_y \end{bmatrix}$$
The objective is to use the Mohr circle to find the stress components $\sigma_\mathrm{n}$ and $\tau_\mathrm{n}$ on a rotated coordinate system $(x',y')$, i.e., on a differently oriented plane passing through $P$ and perpendicular to the $x$-$y$ plane (Figure 4). The rotated coordinate system $(x',y')$ makes an angle $\theta$ with the original coordinate system $(x,y)$.
To derive the equation of the Mohr circle for the two-dimensional cases of plane stress and plane strain , first consider a two-dimensional infinitesimal material element around a material point $P$ (Figure 4), with a unit area in the direction parallel to the $y$-$z$ plane, i.e., perpendicular to the page or screen.
From equilibrium of forces on the infinitesimal element, the magnitudes of the normal stress $\sigma_\mathrm{n}$ and the shear stress $\tau_\mathrm{n}$ are given by:

$$\sigma_\mathrm{n} = \sigma_x\cos^2\theta + \sigma_y\sin^2\theta + 2\tau_{xy}\sin\theta\cos\theta$$

However, knowing that

$$\cos^2\theta = \frac{1+\cos 2\theta}{2}, \qquad \sin^2\theta = \frac{1-\cos 2\theta}{2}, \qquad \sin 2\theta = 2\sin\theta\cos\theta$$

we obtain

$$\sigma_\mathrm{n} = \frac{\sigma_x+\sigma_y}{2} + \frac{\sigma_x-\sigma_y}{2}\cos 2\theta + \tau_{xy}\sin 2\theta$$

Now, from equilibrium of forces in the direction of $\tau_\mathrm{n}$ (the $y'$-axis) (Figure 4), and knowing that the area of the plane where $\tau_\mathrm{n}$ acts is $dA$, we have:

$$\tau_\mathrm{n} = -(\sigma_x-\sigma_y)\sin\theta\cos\theta + \tau_{xy}\left(\cos^2\theta-\sin^2\theta\right)$$

However, knowing that

$$\sin 2\theta = 2\sin\theta\cos\theta, \qquad \cos 2\theta = \cos^2\theta-\sin^2\theta$$

we obtain

$$\tau_\mathrm{n} = -\frac{\sigma_x-\sigma_y}{2}\sin 2\theta + \tau_{xy}\cos 2\theta$$
Both equations can also be obtained by applying the tensor transformation law to the known Cauchy stress tensor, which is equivalent to performing the static equilibrium of forces in the direction of $\sigma_\mathrm{n}$ and $\tau_\mathrm{n}$: expanding the right-hand side of the transformation law, identifying $\sigma_{x'}=\sigma_\mathrm{n}$ and $\tau_{x'y'}=\tau_\mathrm{n}$, and applying the same double-angle identities recovers the two expressions above.
It is not necessary at this moment to calculate the stress component $\sigma_{y'}$ acting on the plane perpendicular to the plane of action of $\sigma_{x'}$, as it is not required for deriving the equation for the Mohr circle.
These two equations are the parametric equations of the Mohr circle. In these equations, $2\theta$ is the parameter, and $\sigma_\mathrm{n}$ and $\tau_\mathrm{n}$ are the coordinates. This means that by choosing a coordinate system with abscissa $\sigma_\mathrm{n}$ and ordinate $\tau_\mathrm{n}$, giving values to the parameter $\theta$ places the points obtained on a circle.
Eliminating the parameter $2\theta$ from these parametric equations will yield the non-parametric equation of the Mohr circle. This can be achieved by rearranging the equations for $\sigma_\mathrm{n}$ and $\tau_\mathrm{n}$, first transposing the first term in the first equation and squaring both sides of each of the equations, then adding them. Thus we have

$$\left[\sigma_\mathrm{n} - \tfrac{1}{2}(\sigma_x+\sigma_y)\right]^2 + \tau_\mathrm{n}^2 = \left[\tfrac{1}{2}(\sigma_x-\sigma_y)\right]^2 + \tau_{xy}^2$$

where

$$R = \sqrt{\left[\tfrac{1}{2}(\sigma_x-\sigma_y)\right]^2 + \tau_{xy}^2} \qquad \text{and} \qquad \sigma_\mathrm{avg} = \tfrac{1}{2}(\sigma_x+\sigma_y)$$

This is the equation of a circle (the Mohr circle) of the form

$$(\sigma_\mathrm{n}-a)^2 + (\tau_\mathrm{n}-b)^2 = r^2$$

with radius $r=R$ centered at a point with coordinates $(a,b)=(\sigma_\mathrm{avg},0)$ in the $(\sigma_\mathrm{n},\tau_\mathrm{n})$ coordinate system.
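A minimal sketch of these relations in Python (assuming NumPy; the helper names are illustrative): every point generated by the parametric equations lies on the circle of radius $R$ centred at $(\sigma_\mathrm{avg},0)$.

```python
import numpy as np

def mohr_circle(sig_x, sig_y, tau_xy):
    """Centre (sigma_avg) and radius R of the Mohr circle for a 2D stress state."""
    sigma_avg = 0.5 * (sig_x + sig_y)
    R = np.hypot(0.5 * (sig_x - sig_y), tau_xy)
    return sigma_avg, R

def stresses_on_plane(sig_x, sig_y, tau_xy, theta):
    """Normal and shear stress on a plane rotated by theta (radians),
    from the parametric equations above."""
    sig_n = (0.5 * (sig_x + sig_y)
             + 0.5 * (sig_x - sig_y) * np.cos(2 * theta)
             + tau_xy * np.sin(2 * theta))
    tau_n = (-0.5 * (sig_x - sig_y) * np.sin(2 * theta)
             + tau_xy * np.cos(2 * theta))
    return sig_n, tau_n

avg, R = mohr_circle(-10.0, 50.0, 40.0)
s, t = stresses_on_plane(-10.0, 50.0, 40.0, np.deg2rad(25.0))
assert np.isclose((s - avg) ** 2 + t ** 2, R ** 2)   # the point lies on the circle
```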
There are two separate sets of sign conventions that need to be considered when using the Mohr circle: one sign convention for stress components in the "physical space", and another for stress components in the "Mohr-circle space". In addition, within each of the two sets of sign conventions, the engineering mechanics ( structural engineering and mechanical engineering ) literature follows a different sign convention from the geomechanics literature. There is no standard sign convention, and the choice of a particular sign convention is influenced by convenience for calculation and interpretation for the particular problem at hand. A more detailed explanation of these sign conventions is presented below.
The previous derivation for the equation of the Mohr Circle using Figure 4 follows the engineering mechanics sign convention. The engineering mechanics sign convention will be used for this article .
From the convention of the Cauchy stress tensor (Figure 3 and Figure 4), the first subscript in the stress components denotes the face on which the stress component acts, and the second subscript indicates the direction of the stress component. Thus $\tau_{xy}$ is the shear stress acting on the face with normal vector in the positive direction of the $x$-axis, and in the positive direction of the $y$-axis.
In the physical-space sign convention, positive normal stresses are outward to the plane of action (tension), and negative normal stresses are inward to the plane of action (compression) (Figure 5).
In the physical-space sign convention, positive shear stresses act on positive faces of the material element in the positive direction of an axis. Also, positive shear stresses act on negative faces of the material element in the negative direction of an axis. A positive face has its normal vector in the positive direction of an axis, and a negative face has its normal vector in the negative direction of an axis. For example, the shear stresses $\tau_{xy}$ and $\tau_{yx}$ are positive because they act on positive faces, and they act as well in the positive direction of the $y$-axis and the $x$-axis, respectively (Figure 3). Similarly, the respective opposite shear stresses $\tau_{xy}$ and $\tau_{yx}$ acting on the negative faces have a negative sign because they act in the negative direction of the $x$-axis and $y$-axis, respectively.
In the Mohr-circle-space sign convention, normal stresses have the same sign as normal stresses in the physical-space sign convention: positive normal stresses act outward to the plane of action, and negative normal stresses act inward to the plane of action.
Shear stresses, however, have a different convention in the Mohr-circle space compared to the convention in the physical space. In the Mohr-circle-space sign convention, positive shear stresses rotate the material element in the counterclockwise direction, and negative shear stresses rotate the material in the clockwise direction. This way, the shear stress component $\tau_{xy}$ is positive in the Mohr-circle space, and the shear stress component $\tau_{yx}$ is negative in the Mohr-circle space.
Two options exist for drawing the Mohr-circle space, which produce a mathematically correct Mohr circle:
Plotting positive shear stresses upward makes the angle $2\theta$ on the Mohr circle have a positive rotation clockwise, which is opposite to the physical space convention. That is why some authors [ 3 ] prefer plotting positive shear stresses downward, which makes the angle $2\theta$ on the Mohr circle have a positive rotation counterclockwise, similar to the physical space convention for shear stresses.
To overcome the "issue" of having the shear stress axis downward in the Mohr-circle space, there is an alternative sign convention where positive shear stresses are assumed to rotate the material element in the clockwise direction and negative shear stresses are assumed to rotate the material element in the counterclockwise direction (Figure 5, option 3). This way, positive shear stresses are plotted upward in the Mohr-circle space and the angle $2\theta$ has a positive rotation counterclockwise in the Mohr-circle space. This alternative sign convention produces a circle that is identical to sign convention #2 in Figure 5, because a positive shear stress $\tau_\mathrm{n}$ is also a counterclockwise shear stress, and both are plotted downward; likewise, a negative shear stress $\tau_\mathrm{n}$ is a clockwise shear stress, and both are plotted upward.
This article follows the engineering mechanics sign convention for the physical space and the alternative sign convention for the Mohr-circle space (sign convention #3 in Figure 5).
Assuming we know the stress components $\sigma_x$, $\sigma_y$, and $\tau_{xy}$ at a point $P$ in the object under study, as shown in Figure 4, the following are the steps to construct the Mohr circle for the state of stresses at $P$:
The magnitudes of the principal stresses are the abscissas of the points $C$ and $E$ (Figure 6) where the circle intersects the $\sigma_\mathrm{n}$-axis. The magnitude of the major principal stress $\sigma_1$ is always the greatest absolute value of the abscissa of either of these two points, and the magnitude of the minor principal stress $\sigma_2$ is always the lowest. As expected, the ordinates of these two points are zero, corresponding to the magnitude of the shear stress components on the principal planes. Alternatively, the values of the principal stresses can be found by

$$\sigma_1 = \sigma_\mathrm{avg} + R, \qquad \sigma_2 = \sigma_\mathrm{avg} - R$$

where the magnitude of the average normal stress $\sigma_\text{avg}$ is the abscissa of the centre $O$, given by

$$\sigma_\text{avg} = \tfrac{1}{2}(\sigma_x+\sigma_y)$$

and the length of the radius $R$ of the circle (based on the equation of a circle passing through two points) is given by

$$R = \sqrt{\left[\tfrac{1}{2}(\sigma_x-\sigma_y)\right]^2+\tau_{xy}^2}$$

The maximum and minimum shear stresses correspond to the ordinates of the highest and lowest points on the circle, respectively. These points are located at the intersection of the circle with the vertical line passing through the center of the circle, $O$. Thus, the magnitudes of the maximum and minimum shear stresses are equal to the value of the circle's radius $R$:

$$\tau_\mathrm{max,min} = \pm R$$
As mentioned before, after the two-dimensional stress analysis has been performed we know the stress components $\sigma_x$, $\sigma_y$, and $\tau_{xy}$ at a material point $P$. These stress components act on two perpendicular planes $A$ and $B$ passing through $P$, as shown in Figures 5 and 6. The Mohr circle is used to find the stress components $\sigma_\mathrm{n}$ and $\tau_\mathrm{n}$, i.e., the coordinates of any point $D$ on the circle, acting on any other plane $D$ passing through $P$ making an angle $\theta$ with the plane $B$. For this, two approaches can be used: the double angle, and the pole or origin of planes.
As shown in Figure 6, to determine the stress components $(\sigma_\mathrm{n},\tau_\mathrm{n})$ acting on a plane $D$ at an angle $\theta$ counterclockwise to the plane $B$ on which $\sigma_x$ acts, we travel an angle $2\theta$ in the same counterclockwise direction around the circle from the known stress point $B(\sigma_x,-\tau_{xy})$ to point $D(\sigma_\mathrm{n},\tau_\mathrm{n})$, i.e., an angle $2\theta$ between lines $\overline{OB}$ and $\overline{OD}$ in the Mohr circle.
The double angle approach relies on the fact that the angle $\theta$ between the normal vectors to any two physical planes passing through $P$ (Figure 4) is half the angle between the two lines joining their corresponding stress points $(\sigma_\mathrm{n},\tau_\mathrm{n})$ on the Mohr circle and the centre of the circle.
This double angle relation comes from the fact that the parametric equations for the Mohr circle are a function of $2\theta$. It can also be seen that the planes $A$ and $B$ in the material element around $P$ of Figure 5 are separated by an angle $\theta=90^\circ$, which in the Mohr circle is represented by a $180^\circ$ angle (double the angle).
The second approach involves the determination of a point on the Mohr circle called the pole or the origin of planes . Any straight line drawn from the pole will intersect the Mohr circle at a point that represents the state of stress on a plane inclined at the same orientation (parallel) in space as that line. Therefore, knowing the stress components $\sigma$ and $\tau$ on any particular plane, one can draw a line parallel to that plane through the particular coordinates $\sigma_\mathrm{n}$ and $\tau_\mathrm{n}$ on the Mohr circle and find the pole as the intersection of that line with the Mohr circle. As an example, let's assume we have a state of stress with stress components $\sigma_x$, $\sigma_y$, and $\tau_{xy}$, as shown in Figure 7. First, we can draw a line from point $B$ parallel to the plane of action of $\sigma_x$, or, if we choose otherwise, a line from point $A$ parallel to the plane of action of $\sigma_y$. The intersection of either of these two lines with the Mohr circle is the pole. Once the pole has been determined, to find the state of stress on a plane making an angle $\theta$ with the vertical, or in other words a plane having its normal vector forming an angle $\theta$ with the horizontal plane, we can draw a line from the pole parallel to that plane (see Figure 7). The normal and shear stresses on that plane are then the coordinates of the point of intersection between the line and the Mohr circle.
The orientation of the planes where the maximum and minimum principal stresses act, also known as principal planes , can be determined by measuring in the Mohr circle the angles ∠BOC and ∠BOE, respectively, and taking half of each of those angles. Thus, the angle ∠BOC between $\overline{OB}$ and $\overline{OC}$ is double the angle $\theta_\mathrm{p}$ which the major principal plane makes with plane $B$.
Angles $\theta_{\mathrm{p}1}$ and $\theta_{\mathrm{p}2}$ can also be found from the following equation:

$$\tan 2\theta_\mathrm{p} = \frac{2\tau_{xy}}{\sigma_x-\sigma_y}$$

This equation defines two values for $\theta_\mathrm{p}$ which are $90^\circ$ apart (Figure). This equation can be derived directly from the geometry of the circle, or by setting the parametric equation of the circle for $\tau_\mathrm{n}$ equal to zero (the shear stress on the principal planes is always zero).
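These closed-form results translate directly into a short routine. A sketch assuming NumPy (the names are illustrative), with `arctan2` used so that the quadrant of $2\theta_\mathrm{p}$ is resolved automatically:

```python
import numpy as np

def principal_state(sig_x, sig_y, tau_xy):
    """Principal stresses, maximum shear stress, and principal angle (degrees)
    for a 2D stress state."""
    sigma_avg = 0.5 * (sig_x + sig_y)
    R = np.hypot(0.5 * (sig_x - sig_y), tau_xy)
    sigma_1 = sigma_avg + R          # major principal stress
    sigma_2 = sigma_avg - R          # minor principal stress
    tau_max = R                      # maximum shear stress
    theta_p = 0.5 * np.arctan2(2.0 * tau_xy, sig_x - sig_y)
    return sigma_1, sigma_2, tau_max, np.rad2deg(theta_p)
```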
Assume a material element under a state of stress as shown in Figure 8 and Figure 9, with the plane of one of its sides oriented 10° with respect to the horizontal plane.
Using the Mohr circle, find:
Check the answers using the stress transformation formulas or the stress transformation law.
Solution: Following the engineering mechanics sign convention for the physical space (Figure 5), the stress components for the material element in this example are:

$$\sigma_{x'} = -10, \qquad \sigma_{y'} = 50, \qquad \tau_{x'y'} = 40$$
Following the steps for drawing the Mohr circle for this particular state of stress, we first draw a Cartesian coordinate system $(\sigma_\mathrm{n},\tau_\mathrm{n})$ with the $\tau_\mathrm{n}$-axis upward.
We then plot two points A(50,40) and B(-10,-40), representing the state of stress at planes A and B, as shown in both Figure 8 and Figure 9. These points follow the engineering mechanics sign convention for the Mohr-circle space (Figure 5), which assumes positive normal stresses outward from the material element, and positive shear stresses on each plane rotating the material element clockwise. This way, the shear stress acting on plane B is negative and the shear stress acting on plane A is positive.
The diameter of the circle is the line joining points A and B. The centre of the circle is the intersection of this line with the $\sigma_\mathrm{n}$-axis. Knowing both the location of the centre and the length of the diameter, we are able to plot the Mohr circle for this particular state of stress.
The abscissas of the points E and C (Figure 8 and Figure 9), where the circle intersects the $\sigma_\mathrm{n}$-axis, are the magnitudes of the minimum and maximum normal stresses, respectively; the ordinates of points E and C are the magnitudes of the shear stresses acting on the minor and major principal planes, respectively, which is zero for principal planes.
Even though the idea for using the Mohr circle is to graphically find different stress components by actually measuring the coordinates of different points on the circle, it is more convenient to confirm the results analytically. Thus, the radius and the abscissa of the centre of the circle are

$$R = \sqrt{\left[\tfrac{1}{2}(\sigma_{x'}-\sigma_{y'})\right]^2+\tau_{x'y'}^2} = \sqrt{30^2+40^2} = 50, \qquad \sigma_\mathrm{avg} = \tfrac{1}{2}(-10+50) = 20$$

and the principal stresses are

$$\sigma_1 = \sigma_\mathrm{avg} + R = 70, \qquad \sigma_2 = \sigma_\mathrm{avg} - R = -30$$
The ordinates of the points H and G (Figure 8 and Figure 9) are the magnitudes of the minimum and maximum shear stresses, respectively; the abscissas of points H and G are the magnitudes of the normal stresses acting on the same planes where the minimum and maximum shear stresses act, respectively.
The magnitudes of the minimum and maximum shear stresses can be found analytically by

$$\tau_\mathrm{max,min} = \pm R = \pm 50$$

and the normal stresses acting on the same planes where the minimum and maximum shear stresses act are equal to $\sigma_\mathrm{avg} = 20$.
We can choose to either use the double angle approach (Figure 8) or the Pole approach (Figure 9) to find the orientation of the principal normal stresses and principal shear stresses.
Using the double angle approach we measure the angles ∠BOC and ∠BOE in the Mohr circle (Figure 8) to find double the angles that the major and minor principal stresses make with plane B in the physical space. To obtain a more accurate value for these angles, instead of measuring them by hand, we can use the analytical expression

$$\tan 2\theta_\mathrm{p} = \frac{2\tau_{x'y'}}{\sigma_{x'}-\sigma_{y'}} = \frac{2(40)}{-10-50} = -\frac{4}{3}$$
One solution is: $2\theta_\mathrm{p} = -53.13^\circ$.
From inspection of Figure 8, this value corresponds to the angle ∠BOE. Thus, the minor principal angle is

$$\theta_{\mathrm{p}2} = -26.565^\circ$$

Then, the major principal angle is

$$\theta_{\mathrm{p}1} = \theta_{\mathrm{p}2} + 90^\circ = 63.435^\circ$$
Remember that in this particular example $\theta_{\mathrm{p}1}$ and $\theta_{\mathrm{p}2}$ are angles with respect to the plane of action of $\sigma_{x'}$ (oriented along the $x'$-axis) and not angles with respect to the plane of action of $\sigma_x$ (oriented along the $x$-axis).
Using the pole approach, we first localize the pole, or origin of planes. For this, we draw through point A on the Mohr circle a line inclined 10° to the horizontal, or, in other words, a line parallel to plane A where $\sigma_{y'}$ acts. The pole is where this line intersects the Mohr circle (Figure 9). To confirm the location of the pole, we could draw a line through point B on the Mohr circle parallel to the plane B where $\sigma_{x'}$ acts. This line would also intersect the Mohr circle at the pole (Figure 9).
From the pole, we draw lines to different points on the Mohr circle. The coordinates of the points where these lines intersect the Mohr circle indicate the stress components acting on a plane in the physical space having the same inclination as the line. For instance, the line from the pole to point C in the circle has the same inclination as the plane in the physical space where $\sigma_1$ acts. This plane makes an angle of 63.435° with plane B, both in the Mohr-circle space and in the physical space. In the same way, lines are traced from the pole to points E, D, F, G and H to find the stress components on planes with the same orientation.
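The numbers in this example can be checked with a few lines of Python (a sketch assuming NumPy; the values are those read off planes A and B above):

```python
import numpy as np

sig_x, sig_y, tau_xy = -10.0, 50.0, 40.0     # components on planes B and A

sigma_avg = 0.5 * (sig_x + sig_y)            # 20.0 (abscissa of the centre)
R = np.hypot(0.5 * (sig_x - sig_y), tau_xy)  # 50.0 (radius, also tau_max)
print(sigma_avg + R, sigma_avg - R)          # 70.0 -30.0 (principal stresses)

two_theta_p1 = np.rad2deg(np.arctan2(2 * tau_xy, sig_x - sig_y))
print(two_theta_p1 / 2)                      # 63.43... degrees (major principal angle)
```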
To construct the Mohr circle for a general three-dimensional case of stresses at a point, the values of the principal stresses $(\sigma_1,\sigma_2,\sigma_3)$ and their principal directions $(n_1,n_2,n_3)$ must first be evaluated.
Considering the principal axes as the coordinate system, instead of the general $x_1$, $x_2$, $x_3$ coordinate system, and assuming that $\sigma_1 > \sigma_2 > \sigma_3$, the normal and shear components of the stress vector $\mathbf{T}^{(\mathbf{n})}$, for a given plane with unit vector $\mathbf{n}$, satisfy the following equations:

$$\sigma_\mathrm{n} = \sigma_1 n_1^2 + \sigma_2 n_2^2 + \sigma_3 n_3^2$$
$$\tau_\mathrm{n}^2 + \sigma_\mathrm{n}^2 = \sigma_1^2 n_1^2 + \sigma_2^2 n_2^2 + \sigma_3^2 n_3^2$$

Knowing that $n_i n_i = n_1^2 + n_2^2 + n_3^2 = 1$, we can solve for $n_1^2$, $n_2^2$, $n_3^2$ using the Gauss elimination method, which yields

$$n_1^2 = \frac{(\sigma_\mathrm{n}-\sigma_2)(\sigma_\mathrm{n}-\sigma_3)+\tau_\mathrm{n}^2}{(\sigma_1-\sigma_2)(\sigma_1-\sigma_3)}, \qquad n_2^2 = \frac{(\sigma_\mathrm{n}-\sigma_3)(\sigma_\mathrm{n}-\sigma_1)+\tau_\mathrm{n}^2}{(\sigma_2-\sigma_3)(\sigma_2-\sigma_1)}, \qquad n_3^2 = \frac{(\sigma_\mathrm{n}-\sigma_1)(\sigma_\mathrm{n}-\sigma_2)+\tau_\mathrm{n}^2}{(\sigma_3-\sigma_1)(\sigma_3-\sigma_2)}$$

Since $\sigma_1 > \sigma_2 > \sigma_3$ and $(n_i)^2$ is non-negative, the numerators of these equations satisfy

$$(\sigma_\mathrm{n}-\sigma_2)(\sigma_\mathrm{n}-\sigma_3)+\tau_\mathrm{n}^2 \ge 0, \qquad (\sigma_\mathrm{n}-\sigma_3)(\sigma_\mathrm{n}-\sigma_1)+\tau_\mathrm{n}^2 \le 0, \qquad (\sigma_\mathrm{n}-\sigma_1)(\sigma_\mathrm{n}-\sigma_2)+\tau_\mathrm{n}^2 \ge 0$$

(the first and third denominators are positive and the second is negative). These expressions can be rewritten as

$$\tau_\mathrm{n}^2 + \left[\sigma_\mathrm{n}-\tfrac{1}{2}(\sigma_2+\sigma_3)\right]^2 \ge \left[\tfrac{1}{2}(\sigma_2-\sigma_3)\right]^2$$
$$\tau_\mathrm{n}^2 + \left[\sigma_\mathrm{n}-\tfrac{1}{2}(\sigma_1+\sigma_3)\right]^2 \le \left[\tfrac{1}{2}(\sigma_1-\sigma_3)\right]^2$$
$$\tau_\mathrm{n}^2 + \left[\sigma_\mathrm{n}-\tfrac{1}{2}(\sigma_1+\sigma_2)\right]^2 \ge \left[\tfrac{1}{2}(\sigma_1-\sigma_2)\right]^2$$

which are the equations of the three Mohr's circles for stress $C_1$, $C_2$, and $C_3$, with radii $R_1=\tfrac{1}{2}(\sigma_2-\sigma_3)$, $R_2=\tfrac{1}{2}(\sigma_1-\sigma_3)$, and $R_3=\tfrac{1}{2}(\sigma_1-\sigma_2)$, and their centres with coordinates $\left[\tfrac{1}{2}(\sigma_2+\sigma_3),0\right]$, $\left[\tfrac{1}{2}(\sigma_1+\sigma_3),0\right]$, $\left[\tfrac{1}{2}(\sigma_1+\sigma_2),0\right]$, respectively.
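A sketch of these formulas in Python (assuming NumPy; function names are illustrative), including a test of the admissibility conditions described in the next paragraph:

```python
def mohr_circles_3d(s1, s2, s3):
    """(centre, radius) of the three Mohr circles, assuming s1 >= s2 >= s3."""
    return {"C1": (0.5 * (s2 + s3), 0.5 * (s2 - s3)),
            "C2": (0.5 * (s1 + s3), 0.5 * (s1 - s3)),   # outermost circle
            "C3": (0.5 * (s1 + s2), 0.5 * (s1 - s2))}

def admissible(s1, s2, s3, sn, tn):
    """True if (sn, tn) lies on/outside C1 and C3 and on/inside C2."""
    c = mohr_circles_3d(s1, s2, s3)
    dist2 = lambda centre: (sn - centre) ** 2 + tn ** 2
    return (dist2(c["C1"][0]) >= c["C1"][1] ** 2
            and dist2(c["C2"][0]) <= c["C2"][1] ** 2
            and dist2(c["C3"][0]) >= c["C3"][1] ** 2)
```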
These equations for the Mohr circles show that all admissible stress points $(\sigma_\mathrm{n},\tau_\mathrm{n})$ lie on these circles or within the shaded area enclosed by them (see Figure 10). Stress points satisfying the equation for circle $C_1$ lie on, or outside, circle $C_1$. Stress points satisfying the equation for circle $C_2$ lie on, or inside, circle $C_2$. And finally, stress points satisfying the equation for circle $C_3$ lie on, or outside, circle $C_3$. | https://en.wikipedia.org/wiki/Mohr's_circle |
Mohr–Coulomb theory is a mathematical model (see yield surface ) describing the response of brittle materials, such as concrete or rubble piles, to shear stress as well as normal stress. Most of the classical engineering materials follow this rule in at least a portion of their shear failure envelope. Generally the theory applies to materials for which the compressive strength far exceeds the tensile strength . [ 1 ]
In geotechnical engineering it is used to define shear strength of soils and rocks at different effective stresses .
In structural engineering it is used to determine failure load as well as the angle of fracture of a displacement fracture in concrete and similar materials. Coulomb 's friction hypothesis is used to determine the combination of shear and normal stress that will cause a fracture of the material. Mohr's circle is used to determine which principal stresses will produce this combination of shear and normal stress, and the angle of the plane in which this will occur. According to the principle of normality the stress introduced at failure will be perpendicular to the line describing the fracture condition.
It can be shown that a material failing according to Coulomb's friction hypothesis will show the displacement introduced at failure forming an angle to the line of fracture equal to the angle of friction . This makes the strength of the material determinable by comparing the external mechanical work introduced by the displacement and the external load with the internal mechanical work introduced by the strain and stress at the line of failure. By conservation of energy the sum of these must be zero and this will make it possible to calculate the failure load of the construction.
A common improvement of this model is to combine Coulomb's friction hypothesis with Rankine's principal stress hypothesis to describe a separation fracture. [ 2 ] An alternative view derives the Mohr-Coulomb criterion as extension failure . [ 3 ]
The Mohr–Coulomb theory is named in honour of Charles-Augustin de Coulomb and Christian Otto Mohr . Coulomb's contribution was a 1776 essay entitled " Essai sur une application des règles des maximis et minimis à quelques problèmes de statique relatifs à l'architecture ". [ 2 ] [ 4 ] Mohr developed a generalised form of the theory around the end of the 19th century. [ 5 ] As the generalised form affected the interpretation of the criterion, but not the substance of it, some texts continue to refer to the criterion as simply the ' Coulomb criterion' . [ 6 ]
The Mohr–Coulomb [ 7 ] failure criterion represents the linear envelope that is obtained from a plot of the shear strength of a material versus the applied normal stress. This relation is expressed as

$$\tau = \sigma\tan(\phi) + c$$
where $\tau$ is the shear strength, $\sigma$ is the normal stress, $c$ is the intercept of the failure envelope with the $\tau$ axis, and $\tan(\phi)$ is the slope of the failure envelope. The quantity $c$ is often called the cohesion and the angle $\phi$ is called the angle of internal friction . Compression is assumed to be positive in the following discussion. If compression is assumed to be negative then $\sigma$ should be replaced with $-\sigma$.
If $\phi=0$, the Mohr–Coulomb criterion reduces to the Tresca criterion . On the other hand, if $\phi=90^\circ$ the Mohr–Coulomb model is equivalent to the Rankine model. Higher values of $\phi$ are not allowed.
From Mohr's circle we have

$$\sigma = \sigma_m - \tau_m\sin\phi, \qquad \tau = \tau_m\cos\phi$$

where

$$\tau_m = \frac{\sigma_1-\sigma_3}{2}, \qquad \sigma_m = \frac{\sigma_1+\sigma_3}{2}$$

and $\sigma_1$ is the maximum principal stress and $\sigma_3$ is the minimum principal stress.
Therefore, the Mohr–Coulomb criterion may also be expressed as

$$\tau_m = \sigma_m\sin\phi + c\cos\phi$$
This form of the Mohr–Coulomb criterion is applicable to failure on a plane that is parallel to the $\sigma_2$ direction.
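As a quick numerical check, the criterion in this form can be evaluated from the extreme principal stresses. A minimal sketch (assuming NumPy; compression taken positive, and the function name is this illustration's own):

```python
import numpy as np

def mc_failure_margin(s1, s3, c, phi_deg):
    """Mohr-Coulomb margin tau_m - (sigma_m sin(phi) + c cos(phi));
    failure is predicted when the margin reaches zero or above."""
    phi = np.deg2rad(phi_deg)
    tau_m = 0.5 * (s1 - s3)        # radius of the largest Mohr circle
    sigma_m = 0.5 * (s1 + s3)      # abscissa of its centre
    return tau_m - (sigma_m * np.sin(phi) + c * np.cos(phi))
```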
The Mohr–Coulomb criterion in three dimensions is often expressed by applying the two-dimensional form to each pair of principal stresses:

$$\pm\frac{\sigma_i-\sigma_j}{2} = \frac{\sigma_i+\sigma_j}{2}\sin\phi + c\cos\phi, \qquad (i,j) \in \{(1,2),(2,3),(3,1)\}$$
The Mohr–Coulomb failure surface is a cone with a hexagonal cross section in deviatoric stress space.
The expressions for $\tau$ and $\sigma$ can be generalized to three dimensions by developing expressions for the normal stress and the resolved shear stress on a plane of arbitrary orientation with respect to the coordinate axes (basis vectors). If the unit normal to the plane of interest is

$$\mathbf{n} = n_1\mathbf{e}_1 + n_2\mathbf{e}_2 + n_3\mathbf{e}_3$$

where $\mathbf{e}_i,~i=1,2,3$ are three orthonormal unit basis vectors, and if the principal stresses $\sigma_1,\sigma_2,\sigma_3$ are aligned with the basis vectors $\mathbf{e}_1,\mathbf{e}_2,\mathbf{e}_3$, then the expressions for $\sigma,\tau$ are

$$\sigma = n_1^2\sigma_1 + n_2^2\sigma_2 + n_3^2\sigma_3$$
$$\tau = \sqrt{n_1^2\sigma_1^2 + n_2^2\sigma_2^2 + n_3^2\sigma_3^2 - \sigma^2}$$
The Mohr–Coulomb failure criterion can then be evaluated using the usual expression $\tau = \sigma\tan(\phi) + c$ for the six planes of maximum shear stress.
To derive these expressions, again let the unit normal to the plane be $\mathbf{n} = n_1\mathbf{e}_1 + n_2\mathbf{e}_2 + n_3\mathbf{e}_3$, where $\mathbf{e}_i,~i=1,2,3$ are three orthonormal unit basis vectors aligned with the principal stresses. Then the traction vector on the plane is given by

$$\mathbf{t} = n_1\sigma_1\mathbf{e}_1 + n_2\sigma_2\mathbf{e}_2 + n_3\sigma_3\mathbf{e}_3$$

The magnitude of the traction vector is given by

$$|\mathbf{t}|^2 = n_1^2\sigma_1^2 + n_2^2\sigma_2^2 + n_3^2\sigma_3^2$$

Then the magnitude of the stress normal to the plane is given by

$$\sigma = \mathbf{t}\cdot\mathbf{n} = n_1^2\sigma_1 + n_2^2\sigma_2 + n_3^2\sigma_3$$

The magnitude of the resolved shear stress on the plane is given by $\tau = \sqrt{|\mathbf{t}|^2-\sigma^2}$. In terms of components, we have

$$\tau = \sqrt{n_1^2\sigma_1^2 + n_2^2\sigma_2^2 + n_3^2\sigma_3^2 - \left(n_1^2\sigma_1 + n_2^2\sigma_2 + n_3^2\sigma_3\right)^2}$$
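A sketch of this computation (assuming NumPy; the names are illustrative) for an arbitrary unit normal expressed in the principal basis:

```python
import numpy as np

def plane_stresses(principal, n):
    """Normal stress and resolved shear stress on a plane with normal n,
    with the principal stresses aligned with the coordinate axes."""
    s = np.asarray(principal, dtype=float)
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)                 # enforce a unit normal
    t = s * n                                 # traction vector (principal basis)
    sigma = np.dot(t, n)                      # sum of s_i * n_i**2
    tau = np.sqrt(np.dot(t, t) - sigma ** 2)  # resolved shear
    return sigma, tau
```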
The Mohr–Coulomb failure (yield) surface is often expressed in Haigh–Westergaard coordinates . For example, the function

$$\frac{\sigma_1-\sigma_3}{2} = \frac{\sigma_1+\sigma_3}{2}\sin\phi + c\cos\phi$$

can be expressed as

$$\left[\sqrt{3}\sin\left(\theta+\frac{\pi}{3}\right)-\sin\phi\cos\left(\theta+\frac{\pi}{3}\right)\right]\rho - \sqrt{2}\,\xi\sin\phi = \sqrt{6}\,c\cos\phi$$

Alternatively, in terms of the invariants $p,q,r$ we can write

$$\left[\frac{1}{\sqrt{3}\cos\phi}\sin\left(\theta+\frac{\pi}{3}\right)-\frac{1}{3}\tan\phi\cos\left(\theta+\frac{\pi}{3}\right)\right]q - p\tan\phi = c$$

where

$$\theta = \frac{1}{3}\arccos\left[\left(\frac{r}{q}\right)^3\right]$$
To derive this, write the yield function

$$\frac{\sigma_1-\sigma_3}{2} = \frac{\sigma_1+\sigma_3}{2}\sin\phi + c\cos\phi$$

in terms of the Haigh–Westergaard coordinates. The Haigh–Westergaard invariants are related to the principal stresses by

$$\sigma_1 = \frac{\xi}{\sqrt{3}} + \sqrt{\tfrac{2}{3}}\,\rho\cos\theta, \qquad \sigma_2 = \frac{\xi}{\sqrt{3}} + \sqrt{\tfrac{2}{3}}\,\rho\cos\left(\theta-\frac{2\pi}{3}\right), \qquad \sigma_3 = \frac{\xi}{\sqrt{3}} + \sqrt{\tfrac{2}{3}}\,\rho\cos\left(\theta+\frac{2\pi}{3}\right)$$

Plugging into the expression for the Mohr–Coulomb yield function gives us

$$\sqrt{\tfrac{2}{3}}\,\rho\left[\cos\theta-\cos\left(\theta+\tfrac{2\pi}{3}\right)\right] = \left\{\frac{2\xi}{\sqrt{3}} + \sqrt{\tfrac{2}{3}}\,\rho\left[\cos\theta+\cos\left(\theta+\tfrac{2\pi}{3}\right)\right]\right\}\sin\phi + 2c\cos\phi$$

Using trigonometric identities for the sum and difference of cosines and rearranging gives the expression of the Mohr–Coulomb yield function in terms of $\xi,\rho,\theta$ shown above.
We can express the yield function in terms of $p,q$ by using the relations $\xi=\sqrt{3}\,p$ and $\rho=\sqrt{\tfrac{2}{3}}\,q$ and straightforward substitution.
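A direct transcription of the $(p,q,\theta)$ form as a yield function (a sketch assuming NumPy; $f\ge 0$ is taken as yielding, and the function name is this illustration's own):

```python
import numpy as np

def mc_yield(p, q, theta, c, phi):
    """Mohr-Coulomb yield function in (p, q, theta); angles in radians."""
    a = theta + np.pi / 3.0
    return ((np.sin(a) / (np.sqrt(3.0) * np.cos(phi))
             - np.tan(phi) * np.cos(a) / 3.0) * q
            - p * np.tan(phi) - c)
```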
The Mohr–Coulomb yield surface is often used to model the plastic flow of geomaterials (and other cohesive-frictional materials). Many such materials show dilatational behavior under triaxial states of stress which the Mohr–Coulomb model does not include. Also, since the yield surface has corners, it may be inconvenient to use the original Mohr–Coulomb model to determine the direction of plastic flow (in the flow theory of plasticity ).
A common approach is to use a non-associated plastic flow potential that is smooth. An example of such a potential is the function [ citation needed ]

$$g := \sqrt{(\alpha c_\mathrm{y}\tan\psi)^2 + G^2(\phi,\theta)\,q^2} - p\tan\phi$$

where $\alpha$ is a parameter, $c_\mathrm{y}$ is the value of $c$ when the plastic strain is zero (also called the initial cohesion yield stress ), $\psi$ is the angle made by the yield surface in the Rendulic plane at high values of $p$ (this angle is also called the dilation angle ), and $G(\phi,\theta)$ is an appropriate function that is also smooth in the deviatoric stress plane.
Cohesion (alternatively called the cohesive strength ) and friction angle values for rocks and some common soils are listed in the tables below. | https://en.wikipedia.org/wiki/Mohr–Coulomb_theory |
The Mohs scale ( /moʊz/ MOHZ ) of mineral hardness is a qualitative ordinal scale , from 1 to 10, characterizing scratch resistance of minerals through the ability of harder material to scratch softer material.
The scale was introduced in 1812 by the German geologist and mineralogist Friedrich Mohs , in his book Versuch einer Elementar-Methode zur naturhistorischen Bestimmung und Erkennung der Fossilien (English: Attempt at an elementary method for the natural-historical determination and recognition of fossils); [ 1 ] [ 2 ] [ a ] it is one of several definitions of hardness in materials science , some of which are more quantitative. [ 3 ]
The method of comparing hardness by observing which minerals can scratch others is of great antiquity, having been mentioned by Theophrastus in his treatise On Stones , c. 300 BC , followed by Pliny the Elder in his Naturalis Historia , c. AD 77 . [ 4 ] [ 5 ] [ 6 ] The Mohs scale is useful for identification of minerals in the field , but is not an accurate predictor of how well materials endure in an industrial setting. [ 7 ]
The Mohs scale of mineral hardness is based on the ability of one natural sample of mineral to visibly scratch another mineral. Minerals are chemically pure solids found in nature. Rocks are mixtures of one or more minerals.
Diamond was the hardest known naturally occurring mineral when the scale was designed, and defines the top of the scale, arbitrarily set at 10. The hardness of a material is measured against the scale by finding the hardest material that the given material can scratch, or the softest material that can scratch the given material. For example, if some material is scratched by apatite but not by fluorite , its hardness on the Mohs scale would be between 4 and 5. [ 8 ]
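This bracketing procedure lends itself to a small routine. A sketch in Python (the reference list is the standard scale; the function and test data are this illustration's assumptions):

```python
# Mohs reference minerals, softest (1) to hardest (10)
MOHS = ["talc", "gypsum", "calcite", "fluorite", "apatite",
        "orthoclase", "quartz", "topaz", "corundum", "diamond"]

def bracket_hardness(scratch_results):
    """Estimate Mohs bounds from scratch tests.
    scratch_results maps a reference mineral to True if it scratches
    the sample, False if the sample resists it."""
    lower, upper = 1, 10
    for rank, mineral in enumerate(MOHS, start=1):
        if mineral not in scratch_results:
            continue
        if scratch_results[mineral]:
            upper = min(upper, rank)   # no harder than this mineral
        else:
            lower = max(lower, rank)   # at least this hard
    return lower, upper

# Scratched by apatite (5) but not by fluorite (4): hardness between 4 and 5
print(bracket_hardness({"apatite": True, "fluorite": False}))  # (4, 5)
```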
Technically, "scratching" a material for the purposes of the Mohs scale means creating non- elastic dislocations visible to the naked eye. Frequently, materials that are lower on the Mohs scale can create microscopic, non-elastic dislocations on materials that have a higher Mohs number. While these microscopic dislocations are permanent and sometimes detrimental to the harder material's structural integrity, they are not considered "scratches" for the determination of a Mohs scale number. [ 9 ]
Each of the ten hardness values in the Mohs scale is represented by a reference mineral , most of which are widespread in rocks.
The Mohs scale is an ordinal scale : its steps are not equally spaced in absolute hardness. For example, corundum (9) is twice as hard as topaz (8), but diamond (10) is four times as hard as corundum. [ citation needed ] The table below shows the comparison with the absolute hardness measured by a sclerometer , with images of the reference minerals in the rightmost column. [ 10 ] [ 11 ]
Below is a table of more materials by Mohs scale. Some of them have a hardness between two of the Mohs scale reference minerals, and some solid substances that are not minerals have been assigned a hardness on the Mohs scale. Hardness may be difficult to determine, or may be misleading or meaningless, if a material is a mixture of two or more substances. For example, some sources have assigned a Mohs hardness of 6 or 7 to granite , but it is a rock made of several minerals, each with its own Mohs hardness (e.g. topaz-rich granite contains: topaz — Mohs 8, quartz — Mohs 7, orthoclase — Mohs 6, plagioclase — Mohs 6–6.5, mica — Mohs 2–4).
Despite its lack of precision, the Mohs scale is relevant for field geologists, who use it to roughly identify minerals using scratch kits. The Mohs scale hardness of minerals can be commonly found in reference sheets.
Mohs hardness is useful in milling . It allows the assessment of which type of mill and grinding medium will best reduce a given product whose hardness is known. [ 15 ]
Electronic manufacturers use the scale for testing the resilience of flat panel display components (such as cover glass for LCDs or encapsulation for OLEDs ), as well as to evaluate the hardness of touch screens in consumer electronics. [ 16 ]
Comparison between Mohs hardness and Vickers hardness : [ 17 ] | https://en.wikipedia.org/wiki/Mohs_scale |
In organic chemistry , a moiety ( /ˈmɔɪəti/ MOY-ə-tee ) is a part of a molecule [ 1 ] [ 2 ] that is given a name because it is identified as a part of other molecules as well.
Typically, the term is used to describe the larger and characteristic parts of organic molecules, and it should not be used to describe or name smaller functional groups [ 1 ] [ 2 ] of atoms that chemically react in similar ways in most molecules that contain them. [ 3 ] Occasionally, a moiety may contain smaller moieties and functional groups. [ citation needed ]
A moiety that acts as a branch extending from the backbone of a hydrocarbon molecule is called a substituent or side chain , which typically can be removed from the molecule and substituted with others.
The term is also used in pharmacology , where an active moiety is the part of a molecule responsible for the physiological or pharmacological action of a drug .
In pharmacology , an active moiety is the part of a molecule or ion—excluding appended inactive portions—that is responsible for the physiological or pharmacological action of a drug substance . Inactive appended portions of the drug substance may include either the alcohol or acid moiety of an ester , a salt (including a salt with hydrogen or coordination bonds ), or other noncovalent derivative (such as a complex , chelate , or clathrate ). [ 4 ] [ 5 ] The parent drug may itself be an inactive prodrug and only after the active moiety is released from the parent in free form does it become active. | https://en.wikipedia.org/wiki/Moiety_(chemistry) |
Moiety conservation is the conservation of a subgroup in a chemical species , which is cyclically transferred from one molecule to another. In biochemistry, moiety conservation can have profound effects on the system's dynamics. [ 1 ]
A typical example of a conserved moiety [ 2 ] in biochemistry is the adenosine diphosphate (ADP) subgroup that remains unchanged when it is phosphorylated to create adenosine triphosphate (ATP) and then dephosphorylated back to ADP, forming a conserved cycle. Moiety-conserved cycles in nature exhibit unique network control features which can be elucidated using techniques such as metabolic control analysis . Other examples in metabolism include NAD/NADH, NADP/NADPH, and CoA/acetyl-CoA. Conserved cycles also exist in large numbers in protein signaling networks, where proteins get phosphorylated and dephosphorylated .
Most, if not all, of these cycles are time-scale-dependent. For example, although a protein in a phosphorylation cycle is conserved during the interconversion, over a longer time scale, there will be low levels of protein synthesis and degradation, which change the level of protein moiety. The same applies to cycles involving ATP, NAD, etc. Thus, although the concept of a moiety-conserved cycle in biochemistry is a useful approximation, [ 3 ] over time scales that include significant net synthesis and degradation of the moiety, the approximation is no longer valid. When invoking the conserved-moiety assumption on a particular moiety, we are, in effect, assuming the system is closed to that moiety.
Conserved cycles in a biochemical network can be identified by examination of the stoichiometry matrix , [ 4 ] [ 5 ] [ 6 ] N {\displaystyle {\boldsymbol {N}}} . The stoichiometry matrix for a simple cycle with species A and AP is given by:
N = [ 1 − 1 − 1 1 ] {\displaystyle {\boldsymbol {N}}={\begin{bmatrix}1&-1\\-1&1\end{bmatrix}}}
The rates of change of A and AP can be written using the equation:
[ d A d t d A P d t ] = [ 1 − 1 − 1 1 ] [ v 1 v 2 ] {\displaystyle {\begin{bmatrix}{\frac {dA}{dt}}\\{\frac {dAP}{dt}}\end{bmatrix}}=\left[{\begin{array}{rr}1&-1\\-1&1\end{array}}\right]\left[{\begin{array}{r}v_{1}\\v_{2}\end{array}}\right]}
Expanding the expression leads to:
d A d t = v 1 − v 2 d A P d t = v 2 − v 1 {\displaystyle {\begin{aligned}{\frac {dA}{dt}}&=v_{1}-v_{2}\\[4pt]{\frac {dAP}{dt}}&=v_{2}-v_{1}\end{aligned}}}
Note that d A / d t + d A P / d t = 0 {\displaystyle dA/dt+dAP/dt=0} . This means that A + A P = T {\displaystyle A+AP=T} , where T {\displaystyle T} is the total mass of moiety A {\displaystyle A} .
Given an arbitrary system:
N v = d x d t {\displaystyle {\boldsymbol {N}}{\boldsymbol {v}}={\frac {d{\boldsymbol {x}}}{dt}}}
elementary row operations can be applied to both sides such that the stoichiometric matrix is reduced to its echelon form , M {\displaystyle {\boldsymbol {M}}} giving:
[ M 0 ] v = E d x d t {\displaystyle {\begin{bmatrix}{\boldsymbol {M}}\\{\boldsymbol {0}}\end{bmatrix}}{\boldsymbol {v}}={\boldsymbol {E}}{\frac {d{\boldsymbol {x}}}{dt}}}
The elementary operations are captured in the E {\displaystyle {\boldsymbol {E}}} matrix. We can partition E {\displaystyle {\boldsymbol {E}}} to match the echelon matrix where the zero rows begin such that:
[ M 0 ] v = [ X Y ] d x d t {\displaystyle {\begin{bmatrix}{\boldsymbol {M}}\\{\boldsymbol {0}}\end{bmatrix}}{\boldsymbol {v}}={\begin{bmatrix}{\boldsymbol {X}}\\{\boldsymbol {Y}}\end{bmatrix}}{\frac {d{\boldsymbol {x}}}{dt}}}
By multiplying out the lower partition, we obtain:
Y d x d t = 0 {\displaystyle {\boldsymbol {Y}}{\frac {d{\boldsymbol {x}}}{dt}}=0}
The Y {\displaystyle {\boldsymbol {Y}}} matrix will contain entries corresponding to the conserved cycle participants.
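In practice, conserved cycles can be found programmatically. The following is a minimal sketch in Python (using the SymPy library; the matrix values are taken from the two-species cycle above): conserved moieties are vectors of the left null space of N, computed here as the null space of its transpose.

```python
import sympy as sp

# Stoichiometry matrix for the cycle A <-> AP
# (rows: species A, AP; columns: reactions v1, v2)
N = sp.Matrix([[ 1, -1],
               [-1,  1]])

# A conserved moiety is a row vector y with y*N = 0, i.e. an element of
# the left null space of N, obtained as the null space of N transposed.
for y in N.T.nullspace():
    print(y.T)  # Matrix([[1, 1]]) -> 1*A + 1*AP = T is conserved
```

For a larger network, the same call returns one basis vector per independent conserved cycle.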
The presence of conserved moieties can affect how computer simulation models are constructed. [ 7 ] [ 8 ] Moiety-conserved cycles will reduce the number of differential equations required to solve a system. For example, a simple cycle has only one independent variable. The other variable can be computed using the difference between the total mass and the independent variable. The set of differential equations for the two-cycle is given by:
d A d t = v 1 − v 2 d A P d t = v 2 − v 1 {\displaystyle {\begin{aligned}{\frac {dA}{dt}}&=v_{1}-v_{2}\\[4pt]{\frac {dAP}{dt}}&=v_{2}-v_{1}\end{aligned}}}
These can be reduced to one differential equation and one linear algebraic equation:
A P = T − A d A d t = v 1 − v 2 {\displaystyle {\begin{aligned}AP&=T-A\\[4pt]{\frac {dA}{dt}}&=v_{1}-v_{2}\end{aligned}}} | https://en.wikipedia.org/wiki/Moiety_conservation |
In mathematics, physics, and art, moiré patterns ( UK : / ˈ m w ɑː r eɪ / MWAH -ray , US : / m w ɑː ˈ r eɪ / mwah- RAY , [ 1 ] French: [mwaʁe] ⓘ ) or moiré fringes [ 2 ] are large-scale interference patterns that can be produced when a partially opaque ruled pattern with transparent gaps is overlaid on another similar pattern. For the moiré interference pattern to appear, the two patterns must not be completely identical, but rather displaced, rotated, or have slightly different pitch.
Moiré patterns appear in many situations. In printing, the printed pattern of dots can interfere with the image. In television and digital photography, a pattern on an object being photographed can interfere with the shape of the light sensors to generate unwanted artifacts. They are also sometimes created deliberately; in micrometers , they are used to amplify the effects of very small movements.
In physics, its manifestation is wave interference like that seen in the double-slit experiment and the beat phenomenon in acoustics .
The term originates from moire ( moiré in its French adjectival form), a type of textile , traditionally made of silk but now also made of cotton or synthetic fiber , with a rippled or "watered" appearance. Moire, or "watered textile", is made by pressing two layers of the textile when wet. The similar but imperfect spacing of the threads creates a characteristic pattern which remains after the fabric dries.
In French, the noun moire is in use from the 17th century, for "watered silk". It was a loan of the English mohair (attested 1610). In French usage, the noun gave rise to the verb moirer , "to produce a watered textile by weaving or pressing", by the 18th century. The adjective moiré formed from this verb is in use from at least 1823.
Moiré patterns are often an artifact of images produced by various digital imaging and computer graphics techniques, for example when scanning a halftone picture or ray tracing a checkered plane (the latter being a special case of aliasing , due to undersampling a fine regular pattern). [ 3 ] This can be overcome in texture mapping through the use of mipmapping and anisotropic filtering .
The drawing on the upper right shows a moiré pattern. The lines could represent fibers in moiré silk, or lines drawn on paper or on a computer screen. The nonlinear interaction of the optical patterns of lines creates a real and visible pattern of roughly parallel dark and light bands, the moiré pattern, superimposed on the lines. [ 4 ]
The moiré effect also occurs between overlapping transparent objects. [ 5 ] For example, an invisible phase mask is made of a transparent polymer with a wavy thickness profile. As light shines through two overlaid masks of similar phase patterns, a broad moiré pattern occurs on a screen some distance away. This phase moiré effect and the classical moiré effect from opaque lines are two ends of a continuous spectrum in optics, which is called the universal moiré effect. The phase moiré effect is the basis for a type of broadband interferometer in x-ray and particle wave applications. It also provides a way to reveal hidden patterns in invisible layers.
Line moiré is one type of moiré pattern; a pattern that appears when superposing two transparent layers containing correlated opaque patterns. Line moiré is the case when the superposed patterns comprise straight or curved lines. When moving the layer patterns, the moiré patterns transform or move at a faster speed. This effect is called optical moiré speedup.
More complex line moiré patterns are created if the lines are curved or not exactly parallel.
Shape moiré is one type of moiré pattern demonstrating the phenomenon of moiré magnification. [ 6 ] [ 7 ] 1D shape moiré is the particular simplified case of 2D shape moiré. One-dimensional patterns may appear when superimposing an opaque layer containing tiny horizontal transparent lines on top of a layer containing a complex shape which is periodically repeating along the vertical axis .
Moiré patterns revealing complex shapes, or sequences of symbols embedded in one of the layers (in form of periodically repeated compressed shapes) are created with shape moiré, otherwise called band moiré patterns. One of the most important properties of shape moiré is its ability to magnify tiny shapes along either one or both axes, that is, stretching. A common 2D example of moiré magnification occurs when viewing a chain-link fence through a second chain-link fence of identical design. The fine structure of the design is visible even at great distances.
Consider two patterns made of parallel and equidistant lines, e.g., vertical lines. The step of the first pattern is p , the step of the second is p + δp , with 0 < δp < p .
If the lines of the patterns are superimposed at the left of the figure, the shift between the lines increases when going to the right. After a given number of lines, the patterns are opposed: the lines of the second pattern are between the lines of the first pattern. If we look from a far distance, we have the feeling of pale zones when the lines are superimposed (there is white between the lines), and of dark zones when the lines are "opposed".
The middle of the first dark zone is when the shift is equal to p / 2 . The n th line of the second pattern is shifted by n δp compared to the n th line of the first pattern. The middle of the first dark zone thus corresponds to n ⋅ δ p = p 2 {\displaystyle n\cdot \delta p={\frac {p}{2}}} that is n = p 2 δ p . {\displaystyle n={\frac {p}{2\delta p}}.} The distance d between the middle of a pale zone and a dark zone is d = n ⋅ ( p + δ p ) = p 2 2 δ p + p 2 {\displaystyle d=n\cdot (p+\delta p)={\frac {p^{2}}{2\delta p}}+{\frac {p}{2}}} the distance between the middle of two dark zones, which is also the distance between two pale zones, is 2 d = p 2 δ p + p {\displaystyle 2d={\frac {p^{2}}{\delta p}}+p} From this formula, we can see that the larger the step p and the smaller the discrepancy δp, the farther apart the dark and pale zones; widely spaced fringes therefore indicate that the two patterns have very nearly the same pitch.
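As a quick numerical illustration (the pitch values are invented for the example), the fringe spacing follows directly from the formula:

```python
p = 1.00    # pitch of the first pattern (arbitrary units; illustrative value)
dp = 0.05   # pitch difference of the second pattern

spacing = p**2 / dp + p   # distance 2d between successive dark (or pale) zones
print(spacing)            # 21.0: a 5% pitch mismatch spreads fringes ~21 pitches apart
```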
The principle of the moiré is similar to the Vernier scale .
The essence of the moiré effect is the (mainly visual) perception of a distinctly different third pattern which is caused by inexact superimposition of two similar patterns. The mathematical representation of these patterns is not trivially obtained and can seem somewhat arbitrary. In this section we shall give a mathematical example of two parallel patterns whose superimposition forms a moiré pattern, and show one way (of many possible ways) these patterns and the moiré effect can be rendered mathematically.
The visibility of these patterns is dependent on the medium or substrate in which they appear, and these may be opaque (as for example on paper) or transparent (as for example in plastic film). For purposes of discussion we shall assume the two primary patterns are each printed in greyscale ink on a white sheet, where the opacity (e.g., shade of grey) of the "printed" part is given by a value between 0 (white) and 1 (black) inclusive, with 1 / 2 representing neutral grey. Any value less than 0 or greater than 1 using this grey scale is essentially "unprintable".
We shall also choose to represent the opacity of the pattern resulting from printing one pattern atop the other at a given point on the paper as the average (i.e. the arithmetic mean) of each pattern's opacity at that position, which is half their sum, and, as calculated, does not exceed 1. (This choice is not unique. Any other method to combine the functions that satisfies keeping the resultant function value within the bounds [0,1] will also serve; arithmetic averaging has the virtue of simplicity—with hopefully minimal damage to one's concepts of the printmaking process.)
We now consider the "printing" superimposition of two almost similar, sinusoidally varying, grey-scale patterns to show how they produce a moiré effect in first printing one pattern on the paper, and then printing the other pattern over the first, keeping their coordinate axes in register. We represent the grey intensity in each pattern by a positive opacity function of distance along a fixed direction (say, the x-coordinate) in the paper plane, in the form
f = 1 + sin ( k x ) 2 {\displaystyle f={\frac {1+\sin(kx)}{2}}}
where the presence of 1 keeps the function positive definite, and the division by 2 prevents function values greater than 1.
The quantity k represents the periodic variation (i.e., spatial frequency) of the pattern's grey intensity, measured as the number of intensity cycles per unit distance. Since the sine function is cyclic over argument changes of 2π , the distance increment Δ x per intensity cycle (the wavelength) obtains when k Δ x = 2π , or Δ x = 2π / k .
Consider now two such patterns, where one has a slightly different periodic variation from the other:
f 1 = 1 + sin ( k 1 x ) 2 f 2 = 1 + sin ( k 2 x ) 2 {\displaystyle {\begin{aligned}f_{1}&={\frac {1+\sin(k_{1}x)}{2}}\\[4pt]f_{2}&={\frac {1+\sin(k_{2}x)}{2}}\end{aligned}}}
such that k 1 ≈ k 2 .
The average of these two functions, representing the superimposed printed image, evaluates as follows (see reverse identities here : Prosthaphaeresis ):
f 3 = f 1 + f 2 2 = 1 2 + sin ( k 1 x ) + sin ( k 2 x ) 4 = 1 + sin ( A x ) cos ( B x ) 2 {\displaystyle {\begin{aligned}f_{3}&={\frac {f_{1}+f_{2}}{2}}\\[5pt]&={\frac {1}{2}}+{\frac {\sin(k_{1}x)+\sin(k_{2}x)}{4}}\\[5pt]&={\frac {1+\sin(Ax)\cos(Bx)}{2}}\end{aligned}}}
where it is easily shown that
A = k 1 + k 2 2 {\displaystyle A={\frac {k_{1}+k_{2}}{2}}}
and
B = k 1 − k 2 2 . {\displaystyle B={\frac {k_{1}-k_{2}}{2}}.}
This function average, f 3 , clearly lies in the range [0,1]. Since the periodic variation A is the average of and therefore close to k 1 and k 2 , the moiré effect is distinctively demonstrated by the sinusoidal envelope "beat" function cos( Bx ) , whose periodic variation is half the difference of the periodic variations k 1 and k 2 (and evidently much lower in frequency).
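The identity is easy to check numerically; the sketch below (illustrative wavelengths, assuming NumPy is available) confirms that the averaged patterns match the envelope form:

```python
import numpy as np

# Two sinusoidal greyscale patterns with slightly different spatial frequencies
k1 = 2 * np.pi / 1.00            # wavelength 1.00 (arbitrary units)
k2 = 2 * np.pi / 1.05            # wavelength 1.05
x = np.linspace(0, 50, 10_000)

f1 = (1 + np.sin(k1 * x)) / 2
f2 = (1 + np.sin(k2 * x)) / 2
f3 = (f1 + f2) / 2               # the superimposed "print"

# Prosthaphaeresis form: f3 = (1 + sin(Ax) * cos(Bx)) / 2
A = (k1 + k2) / 2
B = (k1 - k2) / 2
assert np.allclose(f3, (1 + np.sin(A * x) * np.cos(B * x)) / 2)
```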
Other one-dimensional moiré effects include the classic beat frequency tone which is heard when two pure notes of almost identical pitch are sounded simultaneously. This is an acoustic version of the moiré effect in the one dimension of time: the original two notes are still present—but the listener's perception is of two pitches that are the average of and half the difference of the frequencies of the two notes. Aliasing in sampling of time-varying signals also belongs to this moiré paradigm.
Consider two patterns with the same step p , but the second pattern is rotated by an angle α . Seen from afar, we can also see darker and paler lines: the pale lines correspond to the lines of nodes , that is, lines passing through the intersections of the two patterns.
If we consider a cell of the lattice formed, we can see that it is a rhombus with the four sides equal to d = p / sin α ; (we have a right triangle whose hypotenuse is d and the side opposite to the angle α is p ).
The pale lines correspond to the small diagonal of the rhombus. As the diagonals are the bisectors of the neighbouring sides, we can see that the pale line makes an angle equal to α / 2 with the perpendicular of each pattern's line.
Additionally, the spacing between two pale lines is D , half of the long diagonal. The long diagonal 2 D is the hypotenuse of a right triangle and the sides of the right angle are d (1 + cos α ) and p . The Pythagorean theorem gives: ( 2 D ) 2 = d 2 ( 1 + cos α ) 2 + p 2 {\displaystyle (2D)^{2}=d^{2}(1+\cos \alpha )^{2}+p^{2}} that is: ( 2 D ) 2 = p 2 sin 2 α ( 1 + cos α ) 2 + p 2 = p 2 ⋅ ( ( 1 + cos α ) 2 sin 2 α + 1 ) {\displaystyle {\begin{aligned}(2D)^{2}&={\frac {p^{2}}{\sin ^{2}\alpha }}(1+\cos \alpha )^{2}+p^{2}\\[5pt]&=p^{2}\cdot \left({\frac {(1+\cos \alpha )^{2}}{\sin ^{2}\alpha }}+1\right)\end{aligned}}} thus ( 2 D ) 2 = 2 p 2 ⋅ 1 + cos α sin 2 α D = p 2 sin α 2 . {\displaystyle {\begin{aligned}(2D)^{2}&=2p^{2}\cdot {\frac {1+\cos \alpha }{\sin ^{2}\alpha }}\\[5pt]D&={\frac {\frac {p}{2}}{\sin {\frac {\alpha }{2}}}}.\end{aligned}}}
When α is very small ( α < π / 6 ) the following small-angle approximations can be made: sin α ≈ α cos α ≈ 1 {\displaystyle {\begin{aligned}\sin \alpha &\approx \alpha \\\cos \alpha &\approx 1\end{aligned}}} thus D ≈ p α . {\displaystyle D\approx {\frac {p}{\alpha }}.}
We can see that the smaller α is, the farther apart the pale lines; when both patterns are parallel ( α = 0 ), the spacing between the pale lines is infinite (there is no pale line).
There are thus two ways to determine α : by the orientation of the pale lines and by their spacing α ≈ p D {\displaystyle \alpha \approx {\frac {p}{D}}} If we choose to measure the angle, the final error is proportional to the measurement error. If we choose to measure the spacing, the final error is proportional to the inverse of the spacing. Thus, for the small angles, it is best to measure the spacing.
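For instance (values invented for illustration), a minimal sketch of the spacing-based estimate:

```python
import math

p = 0.20    # pattern pitch (mm); illustrative value
D = 20.0    # measured spacing between pale moiré lines (mm)

alpha = p / D                # small-angle estimate, in radians
print(math.degrees(alpha))   # ~0.57 degrees
```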
In graphic arts and prepress , the usual technology for printing full-color images involves the superimposition of halftone screens. These are regular rectangular dot patterns—often four of them, printed in cyan, yellow, magenta, and black. Some kind of moiré pattern is inevitable, but in favorable circumstances the pattern is "tight"; that is, the spatial frequency of the moiré is so high that it is not noticeable. In the graphic arts, the term moiré means an excessively visible moiré pattern. Part of the prepress art consists of selecting screen angles and halftone frequencies which minimize moiré. The visibility of moiré is not entirely predictable. The same set of screens may produce good results with some images, but visible moiré with others.
Moiré patterns are commonly seen on television screens when a person is wearing a shirt or jacket of a particular weave or pattern, such as a houndstooth jacket. This is due to interlaced scanning in televisions and non-film cameras, referred to as interline twitter . As the person moves about, the moiré pattern is quite noticeable. Because of this, newscasters and other professionals who regularly appear on TV are instructed to avoid clothing which could cause the effect.
Photographs of a TV screen taken with a digital camera often exhibit moiré patterns. Since both the TV screen and the digital camera use a scanning technique to produce or to capture pictures with horizontal scan lines, the conflicting sets of lines cause the moiré patterns. To avoid the effect, the digital camera can be aimed at an angle of 30 degrees to the TV screen.
The moiré effect is used in shoreside beacons called "Inogon leading marks" or "Inogon lights", manufactured by Inogon Licens AB, Sweden, to designate the safest path of travel for ships heading to locks, marinas, ports, etc., or to indicate underwater hazards (such as pipelines or cables). The moiré effect creates arrows that point towards an imaginary line marking the hazard or line of safe passage; as navigators pass over the line, the arrows on the beacon appear to become vertical bands before changing back to arrows pointing in the reverse direction. [ 8 ] [ 9 ] [ 10 ] An example can be found in the UK on the eastern shore of Southampton Water , opposite Fawley oil refinery ( 50°51′21.63″N 1°19′44.77″W / 50.8560083°N 1.3291028°W / 50.8560083; -1.3291028 ). [ 11 ] Similar moiré effect beacons can be used to guide mariners to the centre point of an oncoming bridge; when the vessel is aligned with the centreline, vertical lines are visible.
Inogon lights are deployed at airports to help pilots on the ground keep to the centreline while docking on stand. [ 12 ]
In manufacturing industries, these patterns are used for studying microscopic strain in materials: by deforming a grid with respect to a reference grid and measuring the moiré pattern, the stress levels and patterns can be deduced. This technique is attractive because the scale of the moiré pattern is much larger than the deflection that causes it, making measurement easier.
The moiré effect can be used in strain measurement: the operator simply draws a pattern on the object and superimposes the reference pattern on the deformed pattern of the deformed object.
A similar effect can be obtained by superposing a holographic image of the object on the object itself: the hologram is the reference step, and the differences from the object are the deformations, which appear as pale and dark lines.
Some image scanner computer programs provide an optional filter , called a "descreen" filter, to remove moiré pattern artifacts which would otherwise be produced when scanning printed halftone images to produce digital images. [ 13 ]
Many banknotes exploit the tendency of digital scanners to produce moiré patterns by including fine circular or wavy designs that are likely to exhibit a moiré pattern when scanned and printed. [ 14 ]
In super-resolution microscopy , the moiré pattern can be used to obtain images with a resolution higher than the diffraction limit , using a technique known as structured illumination microscopy . [ 2 ]
In scanning tunneling microscopy , moiré fringes appear if surface atomic layers have a different crystal structure than the bulk crystal. This can for example be due to surface reconstruction of the crystal, or when a thin layer of a second crystal is on the surface, e.g. single-layer, [ 15 ] [ 16 ] double-layer graphene , [ 17 ] or Van der Waals heterostructure of graphene and hBN, [ 18 ] [ 19 ] or bismuth and antimony nanostructures. [ 20 ]
In transmission electron microscopy (TEM), translational moiré fringes can be seen as parallel contrast lines formed in phase-contrast TEM imaging by the interference of diffracting crystal lattice planes that are overlapping, and which might have different spacing and/or orientation. [ 21 ] Most of the moiré contrast observations reported in the literature are obtained using high-resolution phase contrast imaging in TEM. However, if probe aberration-corrected high-angle annular dark field scanning transmission electron microscopy (HAADF-STEM) imaging is used, more direct interpretation of the crystal structure in terms of atom types and positions is obtained. [ 21 ] [ 22 ]
In condensed matter physics, the moiré phenomenon is commonly discussed for two-dimensional materials . The effect occurs when there is mismatch between the lattice parameter or angle of the 2D layer and that of the underlying substrate, [ 15 ] [ 16 ] or another 2D layer, such as in 2D material heterostructures. [ 19 ] [ 20 ] The phenomenon is exploited as a means of engineering the electronic structure or optical properties of materials, [ 23 ] which some call moiré materials. The often significant changes in electronic properties when twisting two atomic layers and the prospect of electronic applications has led to the name twistronics of this field. A prominent example is in twisted bi-layer graphene , which forms a moiré pattern and at a particular magic angle exhibits superconductivity and other important electronic properties. [ 24 ]
In materials science , known examples exhibiting moiré contrast are thin films [ 25 ] or nanoparticles of MX-type (M = Ti, Nb; X = C, N) overlapping with an austenitic matrix. Both phases, MX and the matrix, have a face-centered cubic crystal structure and a cube-on-cube orientation relationship. However, they have a significant lattice misfit of about 20 to 24% (depending on the chemical composition of the alloy), which produces a moiré effect. [ 22 ] | https://en.wikipedia.org/wiki/Moiré_pattern |
Moisture is the presence of a liquid, especially water, often in trace amounts. Moisture is defined as water in the adsorbed or absorbed phase. [ 1 ] Small amounts of water may be found, for example, in the air ( humidity ), in foods, and in some commercial products. Moisture also refers to the amount of water vapor present in the air. Soil also contains moisture. [ 2 ]
Control of moisture in products can be a vital part of the production process. There is a substantial amount of moisture in what seems to be dry matter . In products ranging from cornflake cereals to washing powders , moisture can play an important role in the final quality of the product. There are two main concerns in moisture control: allowing too much moisture or too little. For example, adding some water to cornflake cereal, which is sold by weight, reduces costs and prevents it from tasting too dry, but adding too much water affects the crunchiness and freshness of the cereal, because water content promotes bacterial growth. The water content of some foods is also manipulated to reduce the number of calories .
Moisture has different effects on different products, influencing the final quality of the product. Wood pellets , for instance, are made by taking remainders of wood and grinding them to make compact pellets, which are sold as a fuel. They need to have a relatively low water content for combustion efficiency . The more moisture that is allowed in the pellet, the more smoke that will be released when the pellet is burned.
The need to measure water content of products has given rise to a new area of science, aquametry . There are many ways to measure moisture in products, such as different wave measurement (light and audio), electromagnetic fields , capacitive methods, and the more traditional weighing and drying technique. | https://en.wikipedia.org/wiki/Moisture |
Moisture expansion is the tendency of matter to change its volume in response to a change in moisture content. The macroscopic effect is similar to that of thermal expansion but the microscopic causes are very different. Moisture expansion is caused by hygroscopy .
| https://en.wikipedia.org/wiki/Moisture_expansion |
In hydrology , moisture recycling or precipitation recycling refers to the process by which a portion of the precipitated water that evapotranspired from a given area contributes to the precipitation over the same area. Moisture recycling is thus a component of the hydrologic cycle . The ratio of the locally derived precipitation ( P L ) to total precipitation ( P ) is known as the recycling ratio , ρ : [ 1 ] ρ = P L P {\displaystyle \rho ={\frac {P_{L}}{P}}.}
The recycling ratio is a diagnostic measure of the potential for interactions between land surface hydrology and regional climate . [ 2 ] [ 3 ] [ 4 ] [ 5 ] Land use changes, such as deforestation or agricultural intensification, have the potential to change the amount of precipitation that falls in a region. The recycling ratio for the entire world is one, and for a single point is zero. Estimates for the recycling ratio for the Amazon basin range from 24% to 56%, and for the Mississippi basin from 21% to 24%. [ 6 ]
The concept of moisture recycling has been integrated into the concept of the precipitationshed . A precipitationshed is the upwind ocean and land surface that contributes evaporation to a given, downwind location's precipitation. In much the same way that a watershed is defined by a topographically explicit area that provides surface runoff , the precipitationshed is a statistically defined area within which evaporation, traveling via moisture recycling, provides precipitation for a specific point.
| https://en.wikipedia.org/wiki/Moisture_recycling |
Mojibake ( Japanese : 文字化け ; IPA: [mod͡ʑibake] , 'character transformation') is the garbled or gibberish text that is the result of text being decoded using an unintended character encoding . [ 1 ] The result is a systematic replacement of symbols with completely unrelated ones, often from a different writing system .
This display may include the generic replacement character ⟨�⟩ in places where the binary representation is considered invalid. A replacement can also involve multiple consecutive symbols, as viewed in one encoding, when the same binary code constitutes one symbol in the other encoding. This is either because of differing constant length encoding (as in Asian 16-bit encodings vs European 8-bit encodings), or the use of variable length encodings (notably UTF-8 and UTF-16 ).
Failed rendering of glyphs due to either missing fonts or missing glyphs in a font is a different issue that is not to be confused with mojibake. Symptoms of this failed rendering include blocks with the code point displayed in hexadecimal or using the generic replacement character. Importantly, these replacements are valid and are the result of correct error handling by the software.
To correctly reproduce the original text that was encoded, the correspondence between the encoded data and the notion of its encoding must be preserved (i.e. the source and target encoding standards must be the same). Mojibake is the result of a mismatch between the two, which can be produced either by manipulating the data itself or by merely relabelling its encoding.
Mojibake is often seen with text data that have been tagged with a wrong encoding; the data may not even be tagged at all, but moved between computers with different default encodings. A major source of trouble is communication protocols that rely on settings on each computer rather than sending or storing metadata together with the data.
The differing default settings between computers are in part due to differing deployments of Unicode among operating system families, and partly the legacy encodings' specializations for different writing systems of human languages. Whereas Linux distributions mostly switched to UTF-8 in 2004, [ 2 ] Microsoft Windows generally uses UTF-16, and sometimes uses 8-bit code pages for text files in different languages.
For some writing systems , such as Japanese , several encodings have historically been employed, causing users to see mojibake relatively often. As an example, the word mojibake itself ("文字化け") stored as EUC-JP might be incorrectly displayed as "ハクサ�ス、ア", "ハクサ嵂ス、ア" ( MS-932 ), or "ハクサ郾ス、ア" if interpreted as Shift-JIS, or as "ʸ»ú²½¤±" in software that assumes text to be in the Windows-1252 or ISO 8859-1 encodings, usually labelled Western or Western European . This is further exacerbated if other locales are involved: the same text stored as UTF-8 appears as "譁�蟄怜喧縺�" if interpreted as Shift-JIS, as "æ–‡å—化ã‘" if interpreted as Western, or (for example) as "鏂囧瓧鍖栥亼" if interpreted as being in a GBK (Mainland China) locale.
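The effect is straightforward to reproduce with standard codecs. This minimal Python sketch decodes the same EUC-JP bytes correctly and then under several wrong encodings; errors="replace" stands in for a viewer that shows invalid bytes as replacement characters:

```python
word = "文字化け"              # "mojibake"
data = word.encode("euc_jp")   # the bytes as stored on an EUC-JP system

for encoding in ("euc_jp", "shift_jis", "cp1252", "gbk"):
    # Only the first decoding recovers the original text; the others
    # produce mojibake similar to the examples above.
    print(encoding, data.decode(encoding, errors="replace"))
```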
If the encoding is not specified, it is up to the software to decide it by other means. Depending on the type of software, the typical solution is either configuration or charset detection heuristics, both of which are prone to mis-prediction.
The encoding of text files is affected by locale setting, which depends on the user's language and brand of operating system , among other conditions. Therefore, the assumed encoding is systematically wrong for files that come from a computer with a different setting, or even from a differently localized piece of software within the same system. For Unicode, one solution is to use a byte order mark , but many parsers do not tolerate this for source code or other machine-readable text. Another solution is to store the encoding as metadata in the file system; file systems that support extended file attributes can store this as user.charset . [ 3 ] This also requires support in software that wants to take advantage of it, but does not disturb other software.
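On Linux, for example, such metadata can be written with the standard library; a minimal sketch (the file name is hypothetical, and a filesystem with user xattr support is assumed):

```python
import os

path = "notes.txt"       # hypothetical file
open(path, "a").close()  # make sure the file exists

# Record the declared encoding as filesystem metadata under the
# user.charset attribute mentioned above (Linux-only API).
os.setxattr(path, "user.charset", b"UTF-8")
print(os.getxattr(path, "user.charset"))  # b'UTF-8'
```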
While some encodings are easy to detect, such as UTF-8, there are many that are hard to distinguish (see charset detection ). For example, a web browser may not be able to distinguish between a page coded in EUC-JP and another in Shift-JIS if the encoding is not assigned explicitly using HTTP headers sent along with the documents, or using the document's meta tags that are used to substitute for missing HTTP headers if the server cannot be configured to send the proper HTTP headers; see character encodings in HTML .
Mojibake also occurs when the encoding is incorrectly specified. This often happens between encodings that are similar. For example, the Eudora email client for Windows was known to send emails labelled as ISO 8859-1 that were in reality Windows-1252 . [ 4 ] Windows-1252 contains extra printable characters in the C1 range (the most frequently seen being curved quotation marks and extra dashes ), that were not displayed properly in software complying with the ISO standard; this especially affected software running under other operating systems such as Unix .
Of the encodings still in common use, many originated from taking ASCII and appending to it; as a result, these encodings are partially compatible with each other. Examples of this include Windows-1252 and ISO 8859-1. People thus may mistake the expanded encoding set they are using for plain ASCII.
When there are layers of protocols, each trying to specify the encoding based on different information, the least certain information may be misleading to the recipient.
For example, consider a web server serving a static HTML file over HTTP. The character set may be communicated to the client in any of three ways: in the HTTP Content-Type header, in a meta tag within the HTML document itself, or in a byte order mark at the start of the file. If these sources disagree, or if the client honours the wrong one, mojibake results.
Much older hardware is typically designed to support only one character set and the character set typically cannot be altered. The character table contained within the display firmware will be localized to have characters for the country the device is to be sold in, and typically the table differs from country to country. As such, these systems will potentially display mojibake when loading text generated on a system from a different country. Likewise, many early operating systems do not support multiple encoding formats and thus will end up displaying mojibake if made to display non-standard text – early versions of Microsoft Windows and Palm OS for example, are localized on a per-country basis and will only support encoding standards relevant to the country the localized version will be sold in, and thus will display mojibake if a file containing a text in a different encoding format from the version that the OS is designed to support is opened.
Applications using UTF-8 as a default encoding may achieve a greater degree of interoperability because of its widespread use and backward compatibility with US-ASCII . UTF-8 also has the ability to be directly recognised by a simple algorithm, so that well written software should be able to avoid mixing UTF-8 up with other encodings.
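A minimal sketch of that recognition algorithm: strict UTF-8 decoding doubles as a validity test, because legacy 8-bit text almost never happens to form valid multi-byte UTF-8 sequences.

```python
def looks_like_utf8(data: bytes) -> bool:
    """Return True if the byte string is valid UTF-8."""
    try:
        data.decode("utf-8")   # strict decoding raises on invalid sequences
        return True
    except UnicodeDecodeError:
        return False

print(looks_like_utf8("kärlek".encode("utf-8")))   # True
print(looks_like_utf8("kärlek".encode("cp1252")))  # False: a lone 0xE4 byte is invalid UTF-8
```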
The difficulty of resolving an instance of mojibake varies depending on the application within which it occurs and the causes of it. Two of the most common applications in which mojibake may occur are web browsers and word processors . Modern browsers and word processors often support a wide array of character encodings. Browsers often allow a user to change their rendering engine's encoding setting on the fly, while word processors allow the user to select the appropriate encoding when opening a file. It may take some trial and error for users to find the correct encoding.
The problem gets more complicated when it occurs in an application that normally does not support a wide range of character encoding, such as in a non-Unicode computer game. In this case, the user must change the operating system's encoding settings to match that of the game. However, changing the system-wide encoding settings can also cause mojibake in pre-existing applications. In Windows XP or later, a user also has the option to use Microsoft AppLocale , an application that allows the changing of per-application locale settings. Even so, changing the operating system encoding settings is not possible on earlier operating systems such as Windows 98 ; to resolve this issue on earlier operating systems, a user would have to use third party font rendering applications.
Mojibake in English texts generally occurs in punctuation, such as em dashes (—), en dashes (–), and curly quotes (“, ”, ‘, ’), but rarely in character text, since most encodings agree with ASCII on the encoding of the English alphabet . For example, the pound sign £ will appear as Â£ if it was encoded by the sender as UTF-8 but interpreted by the recipient as one of the Western European encodings ( CP1252 or ISO 8859-1 ). If iterated using CP1252, this can lead to Â£ , Ã‚Â£ , Ãƒâ€šÃ‚Â£ , and so on.
Similarly, the right single quotation mark (’), when encoded in UTF-8 and decoded using Windows-1252, becomes â€™ , then Ã¢â‚¬â„¢ , and so on.
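Both chains can be reproduced in a few lines of Python; each round encodes the current text as UTF-8 and then misreads the bytes as Windows-1252:

```python
for start in ("£", "\u2019"):    # pound sign and right single quotation mark
    s = start
    for _ in range(3):
        s = s.encode("utf-8").decode("cp1252")   # one round of misinterpretation
        print(repr(s))
# '£' yields 'Â£', 'Ã‚Â£', 'Ãƒâ€šÃ‚Â£'; '’' yields 'â€™', 'Ã¢â‚¬â„¢', ...
```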
In older eras, some computers had vendor-specific encodings which caused mismatches even for English text. Commodore brand 8-bit computers used PETSCII encoding, particularly notable for inverting the upper and lower case compared to standard ASCII . PETSCII printers worked fine on other computers of the era, but inverted the case of all letters. IBM mainframes use the EBCDIC encoding, which does not match ASCII at all.
The alphabets of the North Germanic languages , Catalan , Romanian , Finnish , French , German , Italian , Portuguese and Spanish are all extensions of the Latin alphabet . The additional characters (each language's accented letters and, where applicable, their uppercase counterparts) are typically the ones that become corrupted, making texts only mildly unreadable with mojibake.
These are languages for which the ISO 8859-1 character set (also known as Latin 1 or Western ) has been in use. However, ISO 8859-1 has been obsoleted by two competing standards: the backward compatible Windows-1252 , and the slightly altered ISO 8859-15 . Both add the Euro sign € and the French œ, but otherwise any confusion of these three character sets does not create mojibake in these languages. Furthermore, it is always safe to interpret ISO 8859-1 as Windows-1252, and fairly safe to interpret it as ISO 8859-15, in particular with respect to the Euro sign, which replaces the rarely used currency sign (¤). However, with the advent of UTF-8 , mojibake has become more common in certain scenarios, e.g. exchange of text files between UNIX and Windows computers, due to UTF-8's incompatibility with Latin-1 and Windows-1252. Since UTF-8 can be recognised directly by a simple algorithm, well-written software should avoid mixing UTF-8 up with other encodings, so such mojibake was most common while software without UTF-8 support was still widespread. Most of these languages were supported by MS-DOS's default CP437 and other machine-default encodings (beyond plain ASCII), so problems when buying a localized operating system version were less common; the Windows and MS-DOS encodings are, however, not compatible with each other.
In Swedish, Norwegian, Danish and German, vowels are rarely repeated, and it is usually obvious when one character gets corrupted, e.g. the second letter in the Swedish word kärlek ("love") when it is encoded in UTF-8 but decoded in Western, producing "kÃ¤rlek", or für in German, which becomes "fÃ¼r". This way, even though the reader has to guess what the original letter is, almost all texts remain legible. Finnish, on the other hand, frequently uses repeating vowels in words like hääyö ("wedding night") which can make corrupted text very hard to read (e.g. hääyö appears as "hÃ¤Ã¤yÃ¶"). Icelandic has ten possibly confounding characters, and Faroese has eight, making many words almost completely unintelligible when corrupted (e.g. Icelandic þjóðlöð , "outstanding hospitality", appears as "Ã¾jÃ³Ã°lÃ¶Ã°").
In German, Buchstabensalat ("letter salad") is a common term for this phenomenon, in Spanish, deformación (literally "deformation") is used, and in Portuguese, desformatação (literally "deformatting") is used.
Some users transliterate their writing when using a computer, either by omitting the problematic diacritics, or by using digraph replacements (å → aa, ä/æ → ae, ö/ø → oe, ü → ue etc.). Thus, an author might write "ueber" instead of "über", which is standard practice in German when umlauts are not available. The latter practice seems to be better tolerated in the German language sphere than in the Nordic countries . For example, in Norwegian, digraphs are associated with archaic Danish, and may be used jokingly. However, digraphs are useful in communication with other parts of the world. As an example, the Norwegian football player Ole Gunnar Solskjær had his last name spelled "SOLSKJAER" on his uniform when he played for Manchester United .
An artifact of UTF-8 misinterpreted as ISO 8859-1 , " Ring meg nå " being rendered as "Ring meg nÃ¥", was seen in 2014 in an SMS scam targeting Norway. [ 5 ]
The same problem also occurs in Romanian, whose letters ă, â, î, ș and ț are corrupted in the same way.
Users of Central and Eastern European languages can also be affected. Because most computers were not connected to any network during the mid- to late-1980s, there were different character encodings for every language with diacritical characters (see ISO/IEC 8859 and KOI-8 ), often also varying by operating system.
In Hungarian , the phenomenon is referred to as betűszemét , meaning "letter garbage". Hungarian has been particularly susceptible as it contains the accented letters á, é, í, ó, ú, ö, ü (all present in the Latin-1 character set), plus the two characters ő and ű which are not in Latin-1. These two characters can be correctly encoded in Latin-2, Windows-1250, and Unicode. However, before Unicode became common in e-mail clients, e-mails containing Hungarian text often had the letters ő and ű corrupted, sometimes to the point of unrecognizability. It is common to respond to a corrupted e-mail with the nonsense phrase "Árvíztűrő tükörfúrógép" (literally "Flood-resistant mirror-drilling machine") which contains all accented characters used in Hungarian.
Prior to the creation of ISO 8859-2 in 1987, users of various computing platforms used their own character encodings, such as AmigaPL on Amiga, Atari Club on Atari ST, and Mazovia, IBM CP852 , and Windows CP1250 on IBM PCs. Polish companies selling early DOS computers created their own mutually-incompatible ways to encode Polish characters and simply reprogrammed the EPROMs of the video cards (typically CGA , EGA , or Hercules ) to provide hardware code pages with the needed glyphs for Polish – arbitrarily located with no reference whatsoever to where other computer sellers had placed them.
The situation began to improve when, after pressure from academic and user groups, ISO 8859-2 succeeded as the "Internet standard" with limited support of the dominant vendors' software (today largely replaced by Unicode). With the numerous problems caused by the variety of encodings, even today some users tend to refer to Polish diacritical characters as krzaczki ( [ˈkʂät͜ʂ.ki] , lit. "little shrubs").
Mojibake is colloquially called krakozyabry ( кракозя́бры [krɐkɐˈzʲæbrɪ̈] ) in Russian ; the situation was, and remains, complicated by the existence of several systems for encoding Cyrillic . [ 6 ] The Soviet Union and early Russian Federation developed KOI encodings ( Kod Obmena Informatsiey , Код Обмена Информацией , which translates to "Code for Information Exchange"). This began with Cyrillic-only 7-bit KOI7 , based on ASCII but with Latin and some other characters replaced with Cyrillic letters. Then came 8-bit KOI8 encoding that is an ASCII extension which encodes Cyrillic letters only with high-bit set octets corresponding to 7-bit codes from KOI7. It is for this reason that KOI8 text, even Russian, remains partially readable after stripping the eighth bit, which was considered a major advantage in the age of 8BITMIME -unaware email systems. For example, the words " Школа русского языка " ( shkola russkogo yazyka ), when encoded in KOI8 and passed through the high bit stripping process, end up being rendered as "[KOLA RUSSKOGO qZYKA". Eventually, KOI8 gained different flavors for Russian and Bulgarian ( KOI8-R ), Ukrainian ( KOI8-U ), Belarusian (KOI8-RU), and even Tajik (KOI8-T).
Meanwhile, in the West, Code page 866 supported Ukrainian and Belarusian , as well as Russian and Bulgarian in MS-DOS . For Microsoft Windows , Code Page 1251 added support for Serbian and other Slavic variants of Cyrillic .
Most recently, the Unicode encoding includes code points for virtually all characters in all languages, including all Cyrillic characters.
Before Unicode, it was necessary to match text encoding with a font using the same encoding system; failure to do this produced unreadable gibberish whose specific appearance varied depending on the exact combination of text and font encoding. For example, attempting to view non-Unicode Cyrillic text using a font that is limited to the Latin alphabet, or using the default ("Western") encoding, typically results in text that consists almost entirely of capitalized vowels with diacritical marks (e.g. KOI8 " Библиотека " ( biblioteka , library) becomes "âÉÂÌÉÏÔÅËÁ", while "Школа русского языка" ( shkola russkogo yazyka , Russian-language school) becomes "ûËÏÌÁ ÒÕÓÓËÏÇÏ ÑÚÙËÁ"). Using Code Page 1251 to view text in KOI8, or vice versa, results in garbled text that consists mostly of capital letters (KOI8 and Code Page 1251 share the same ASCII region, but KOI8 has uppercase letters in the region where Code Page 1251 has lowercase, and vice versa).
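The Библиотека example can be reproduced directly (a minimal sketch; latin-1 stands in for a viewer limited to the Western character set):

```python
word = "Библиотека"
koi8 = word.encode("koi8_r")

print(koi8.decode("latin-1"))   # âÉÂÌÉÏÔÅËÁ, as in the example above
print(koi8.decode("cp1251"))    # garbled Cyrillic, mostly in the wrong case
```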
During the early years of the Russian sector of the World Wide Web, both KOI8 and Code Page 1251 were common. Nearly all websites now use Unicode, but as of November 2023, an estimated 0.35% of all web pages worldwide—all languages included—are still encoded in Code Page 1251, while less than 0.003% of sites are still encoded in KOI8-R. [ 7 ] [ 8 ] Though the HTML standard includes the ability to specify the encoding for any given web page in its source, [ 9 ] this is sometimes neglected, forcing the user to switch encodings in the browser manually.
In Bulgarian, mojibake is often called majmunica ( маймуница ), meaning "monkey's [alphabet]". In Serbian , it is called đubre ( ђубре ), meaning " trash ". Unlike the former USSR, South Slavs never used something like KOI8, and Code Page 1251 was the dominant Cyrillic encoding before Unicode; therefore, these languages experienced fewer encoding incompatibility troubles than Russian. In the 1980s, Bulgarian computers used their own MIK encoding , which is superficially similar to, albeit incompatible with, CP866.
Croatian , Bosnian , Serbian (the seceding varieties of Serbo-Croatian language) and Slovenian add to the basic Latin alphabet the letters š, đ, č, ć, ž, and their capital counterparts Š, Đ, Č, Ć, Ž (only č/Č, š/Š and ž/Ž are officially used in Slovenian, although others are used when needed, mostly in foreign names). All of these letters are defined in Latin-2 and Windows-1250 , while only some (š, Š, ž, Ž, Đ) exist in the usual OS-default Windows-1252 , and are there because of some other languages.
Although mojibake can occur with any of these characters, the letters that are not included in Windows-1252 are much more prone to errors. Thus, even nowadays, "šđčćž ŠĐČĆŽ" is often displayed as "šðèæž ŠÐÈÆŽ", although ð, È, and Æ are never used in Slavic languages.
When confined to basic ASCII (most user names, for example), common replacements are: š→s, đ→dj, č→c, ć→c, ž→z (capital forms analogously, with Đ→Dj or Đ→DJ depending on word case). All of these replacements introduce ambiguities, so reconstructing the original from such a form is usually done manually if required.
The Windows-1252 encoding is important because the English versions of the Windows operating system are most widespread, not localized ones. [ citation needed ] The reasons for this include a relatively small and fragmented market, increasing the price of high quality localization, a high degree of software piracy (in turn caused by high price of software compared to income), which discourages localization efforts, and people preferring English versions of Windows and other software. [ citation needed ]
The drive to differentiate Croatian from Serbian, Bosnian from Croatian and Serbian, and now even Montenegrin from the other three creates many problems. There are many different localizations, using different standards and of different quality. There are no common translations for the vast amount of computer terminology originating in English. In the end, people use English loanwords ("kompjuter" for "computer", "kompajlirati" for "compile," etc.), and if they are unaccustomed to the translated terms, they may not understand what some option in a menu is supposed to do based on the translated phrase. Therefore, people who understand English, as well as those who are accustomed to English terminology (the majority, since English terminology is for these reasons also what is mostly taught in schools), regularly choose the original English versions of non-specialist software.
When Cyrillic script is used (for Macedonian and partially Serbian ), the problem is similar to other Cyrillic-based scripts .
Newer versions of English Windows allow the code page to be changed (older versions require special English versions with this support), but this setting can be and often was incorrectly set. For example, Windows 98 and Windows Me can be set to most non-right-to-left single-byte code pages including 1250, but only at install time.
Another type of mojibake occurs when text encoded in a single-byte encoding is erroneously parsed in a multi-byte encoding, such as one of the encodings for East Asian languages . With this kind of mojibake more than one (typically two) characters are corrupted at once. For example, if the Swedish word kärlek is encoded in Windows-1252 but decoded using GBK, it will appear as "k鋜lek", where " är " is parsed as "鋜". Compared to the above mojibake, this is harder to read, since letters unrelated to the problematic å, ä or ö are missing, and is especially problematic for short words starting with å, ä or ö (e.g. "än" becomes "鋘"). Since two letters are combined, the mojibake also seems more random (over 50 variants compared to the normal three, not counting the rarer capitals). In some rare cases, an entire text string which happens to include a pattern of particular word lengths, such as the sentence " Bush hid the facts ", may be misinterpreted.
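A minimal sketch reproducing the Swedish example: single-byte Windows-1252 data re-read as the multi-byte GBK encoding, so that byte pairs collapse into single CJK characters.

```python
for word in ("kärlek", "än"):
    data = word.encode("cp1252")                 # single-byte encoding
    print(data.decode("gbk", errors="replace"))  # k鋜lek and 鋘, as described above
```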
In Vietnamese , the phenomenon is called chữ ma ( Hán–Nôm : 𡨸魔, "ghost characters") or loạn mã (from Chinese 乱码, luànmǎ ). It can occur when a computer tries to decode text encoded in UTF-8 as Windows-1258 , TCVN3 or VNI. In Vietnam, chữ ma was commonly seen on computers that ran pre-Vista versions of Windows or cheap mobile phones.
In Japan , mojibake is especially problematic as there are many different Japanese text encodings. Alongside Unicode encodings (UTF-8 and UTF-16), there are other standard encodings, such as Shift-JIS (Windows machines) and EUC-JP (UNIX systems). Even to this day, mojibake is often encountered by both Japanese and non-Japanese people when attempting to run software written for the Japanese market.
In Chinese , the same phenomenon is called Luàn mǎ ( Pinyin , Simplified Chinese 乱码 , Traditional Chinese 亂碼 , meaning 'chaotic code'), and can occur when computerised text is encoded in one Chinese character encoding but is displayed using the wrong encoding. When this occurs, it is often possible to fix the issue by switching the character encoding without loss of data. The situation is complicated because of the existence of several Chinese character encoding systems in use, the most common ones being: Unicode , Big5 , and Guobiao (with several backward compatible versions), and the possibility of Chinese characters being encoded using Japanese encoding.
It is relatively easy to identify the original encoding when luànmǎ occurs in Guobiao encodings.
An additional problem in Chinese occurs when rare or antiquated characters, many of which are still used in personal or place names, do not exist in some encodings.
Newspapers have dealt with missing characters in various ways, including using image editing software to synthesize them by combining other radicals and characters; using a picture of the personalities (in the case of people's names), or simply substituting homophones in the hope that readers would be able to make the correct inference.
A similar effect can occur in Brahmic or Indic scripts of South Asia , used in such Indo-Aryan or Indic languages as Hindustani (Hindi-Urdu), Bengali , Punjabi , Marathi , and others, even if the character set employed is properly recognized by the application. This is because, in many Indic scripts, the rules by which individual letter symbols combine to create symbols for syllables may not be properly understood by a computer missing the appropriate software, even if the glyphs for the individual letter forms are available.
One example of this is the old Wikipedia logo , which attempts to show the character analogous to "wi" (the first syllable of "Wikipedia") on each of many puzzle pieces. The puzzle piece meant to bear the Devanagari character for "wi" instead used to display the "wa" character followed by an unpaired "i" modifier vowel, easily recognizable as mojibake generated by a computer not configured to display Indic text. [ 11 ] The logo as redesigned in May 2010 [ref] fixed these errors.
The idea of plain text requires the operating system to provide a font to display Unicode code points. This font differs from OS to OS for Sinhala, and it makes orthographically incorrect glyphs for some letters (syllables) across all operating systems. For instance, the 'reph', the short form for 'r' is a diacritic that normally goes on top of a plain letter. However, it is wrong to go on top of some letters like 'ya' or 'la' in specific contexts. For Sanskritic words or names inherited by modern languages, such as कार्य, IAST: kārya , or आर्या, IAST: āryā , it is apt to put it on top of these letters. By contrast, for similar sounds in modern languages which result from their specific rules, it is not put on top, such as the word करणाऱ्या, IAST: karaṇāryā , a stem form of the common word करणारा/री, IAST: karaṇārā/rī , in the Marathi language . [ 12 ] But it happens in most operating systems. This appears to be a fault of internal programming of the fonts. In Mac OS and iOS, the muurdhaja l (dark l) and 'u' combination and its long form both yield wrong shapes. [ citation needed ]
Some Indic and Indic-derived scripts, most notably Lao , were not officially supported by Windows XP until the release of Vista . [ 13 ] However, various sites have made free-to-download fonts.
Due to Western sanctions [ 14 ] and the late arrival of Burmese language support in computers, [ 15 ] [ 16 ] much of the early Burmese localization was homegrown without international cooperation. The prevailing means of Burmese support is via the Zawgyi font , a font that was created as a Unicode font but was in fact only partially Unicode compliant. [ 16 ] In the Zawgyi font, some codepoints for Burmese script were implemented as specified in Unicode , but others were not. [ 17 ] The Unicode Consortium refers to this as ad hoc font encodings . [ 18 ] With the advent of mobile phones, mobile vendors such as Samsung and Huawei simply replaced the Unicode compliant system fonts with Zawgyi versions. [ 15 ]
Due to these ad hoc encodings, communications between users of Zawgyi and Unicode would render as garbled text. To get around this issue, content producers would make posts in both Zawgyi and Unicode. [ 19 ] Myanmar government designated 1 October 2019 as "U-Day" to officially switch to Unicode. [ 14 ] The full transition was estimated to take two years. [ 20 ]
In certain writing systems of Africa , unencoded text is unreadable. Texts that may produce mojibake include those from the Horn of Africa such as the Ge'ez script in Ethiopia and Eritrea , used for Amharic , Tigre , and other languages, and the Somali language , which employs the Osmanya alphabet . In Southern Africa , the Mwangwego alphabet is used to write languages of Malawi and the Mandombe alphabet was created for the Democratic Republic of the Congo , but these are not generally supported. Various other writing systems native to West Africa present similar problems, such as the N'Ko alphabet , used for Manding languages in Guinea , and the Vai syllabary , used in Liberia .
Another affected language is Arabic (see below ), in which text becomes completely unreadable when the encodings do not match.
The examples in this article do not use UTF-8 as the assumed browser setting, because UTF-8 is easily recognisable: if a browser supports UTF-8, it should recognise it automatically rather than misinterpreting something else as UTF-8. | https://en.wikipedia.org/wiki/Mojibake |
Mokume-gane ( 木目金 ) is a Japanese metalworking procedure which produces a mixed-metal laminate with distinctive layered patterns; the term is also used to refer to the resulting laminate itself. The term mokume-gane translates closely to 'wood grain metal' or 'wood eye metal' and describes the way metal takes on the appearance of natural wood grain. [ 1 ] Mokume-gane fuses several layers of differently coloured precious metals together to form a sandwich of alloys called a "billet." The billet is then manipulated in such a way that a pattern resembling wood grain emerges over its surface. Numerous ways of working mokume-gane create diverse patterns. Once the metal has been rolled into a sheet or bar, several techniques are used to produce a range of effects.
Mokume-gane has been used to create many artistic objects. Though the technique was first developed for production of decorative sword fittings, the craft is today mostly used in the production of jewelry and hollowware . [ 2 ]
First developed in 17th-century Japan , mokume-gane was originally used for swords. As the customary Japanese sword stopped serving as a weapon and became largely a status symbol, a demand arose for elaborate decorative handles and sheaths. [ 3 ]
To meet this demand, Denbei Shoami (1651–1728), a master metalworker from Akita prefecture , invented the mokume-gane process. He initially called his product guri bori , as the technique in its simplest form resembled guri , a type of carved lacquerwork with alternating layers of red and black. Other historical names for it were kasumi-uchi (cloud metal) , itame-gane (wood-grain metal) , and yosefuki . [ 4 ]
The early components of mokume-gane were relatively soft metals and alloys (gold, copper, silver, shakudō , shibuichi , and kuromido ) which would form liquid phase diffusion bonds with one another without completely melting. This was useful in the traditional techniques of fusing and soldering the layers together. [ 3 ]
Over time, the practice of mokume-gane faded. The katana industry dried up in the late 19th century, when the Meiji Restoration returned ruling power to the emperor, dissolved the shogunate government, and brought the samurai class to an end. The public display of swords as a sign of samurai status was outlawed. After this, the few metalsmiths who practiced mokume-gane , along with most other sword-related artisans, largely transferred their skills to create other objects. [ 2 ]
Tiffany & Co. 's silver division under the direction of Edward C. Moore began to experiment with mokume-gane techniques around 1877, and at the Paris exposition of 1878, Tiffany's grand prize-winning display of Moore's "Japanesque" silver wares included a magnificent "Conglomerate Vase" with asymmetrical panels of mokume-gane . Moore and Tiffany's silversmiths continued to develop their popular mokume-gane techniques in preparation for the Paris exposition of 1889, where the company displayed a vast array of Japanesque silver, using ever more complex alloys of shakudō , sedo and shibuichi , along with gold and silver, to make laminates of up to twenty-four layers. Tiffany's display again won the grand prize for silver wares, and the company continued to produce its Japanesque silver with mokume-gane techniques into the 20th century. [ 5 ]
By the mid 20th century, mokume-gane had fallen into heavy obscurity. Japan's movement away from traditional craftwork, paired with the great difficulty of mastering mokume-gane , had brought its artisans to the brink of extinction; eventually only scholars and collectors of metalwork were aware of the technique. [ 3 ] It was not until the 1970s that the craft was reignited in the public eye, when Hiroko Sato Pijanowski, who learned the craft from Norio Tamagawa, [ 6 ] [ better source needed ] and her husband Eugene Pijanowski brought mokume-gane back to the United States and began teaching it to their students.
Today, jewelry, flatware, hollowware, spinning tops and other artistic objects are made using mokume-gane . [ 2 ]
Modern processes are highly controlled and include a compressive force on the billet. This has allowed the technique to include many nontraditional components such as titanium , platinum , iron , bronze , brass , nickel silver , and various colors of karat gold, including yellow, white, sage, and rose hues, as well as sterling silver. [ 3 ] At the Santa Fe Symposium, a major annual gathering of jewelers from around the world, several papers have been presented on new, more predictable, and more economical methods of producing mokume-gane materials, along with new possibilities for laminating metals, such as the use of friction-stir welding.
In liquid phase fusion, metal sheets are stacked and carefully heated; the solid billet of simple stripes can then be forged and carved to increase the pattern's complexity. Successful lamination using this process requires a highly skilled smith with a great deal of experience. Bonding in the traditional process is achieved when some or all of the alloys in the stack are heated to the point of becoming partially molten (above solidus); this liquid alloy is what fuses the layers together. Careful heat control and skillful forging are required for this process. [ 3 ]
In attempting to recreate the appearance of traditional mokume-gane , some artisans have tried brazing layers together, soldering the sheets with silver solder or some other brazing alloy. This technique joins the metals but is difficult to perfect, particularly on larger sheets, because flux inclusions can get trapped or bubbles can form. Commonly, these imperfections need to be cut out and the metal re-soldered. In addition, brazed sheets do not display the same levels of ductility and workability as diffusion-bonded material.
The modernized process of solid-state bonding typically uses a controlled atmosphere in a temperature-controlled furnace. Mechanical aids such as a hydraulic press or torque plates (bolted clamps) are also typically used to apply compressive force on the billet during lamination. These provide for the implementation of lower temperature solid-state diffusion between the interleaved layers, thus allowing the inclusion of non-traditional materials. [ 3 ]
After the layer fusion, the surface of the billet is cut with a chisel to expose lower layers, then flattened. This cutting and flattening process is repeated again and again to develop intricate patterns. [ 4 ]
To increase the contrast between the laminate layers, many mokume-gane items are colored by the application of a patina (a controlled corrosion layer) to accentuate or even totally change the colors of the metal's surface.
One example of a traditional Japanese patination for mokume-gane is the use of the niiro process, usually involving rokushō , a complex copper verdigris compound produced specifically for use as a patina. The piece to be patinated is prepared, then immersed in a boiling solution until it reaches the desired color, and each element of a compound piece may be transformed to a different color. Historically, a paste of ground daikon radish was also used to prepare the work for the patina. The paste was applied immediately before the piece was boiled in the rokushō to protect the surface against tarnish and uneven coloring. [ 4 ]
In an accidental but parallel development, Sheffield plate was developed in England. It follows a similar principle of bonded layers without the use of solder, but typically had 2–3 layers, whereas mokume-gane can have many more.
Media related to Mokume-gane at Wikimedia Commons | https://en.wikipedia.org/wiki/Mokume-gane |
Moladi is a South African company specializing in reusable plastic formwork for the construction of affordable housing projects worldwide. The process involves erecting a mold of the complete structure; this wall mold is then filled with an aerated mortar. The construction process is faster than traditional methods of construction.
The technology has won the Design for Development award of the South African Bureau of Standards Design Institute in 1997, with the institute praising Moladi as:
...an interlocking and modular formwork or shutter system for molding complex monolithic reinforced structures. The modular plastic panels are lightweight and extremely robust. The building method is especially suited for affordable low-cost, mass housing schemes.
In addition to being part of the drive by the South African government to replace shantytowns with proper houses, Moladi also exports to 26 countries, including Panama, the Democratic Republic of the Congo, and Tanzania.
The company's production plant is based in Port Elizabeth.
| https://en.wikipedia.org/wiki/Moladi |
In chemistry , molality is a measure of the amount of solute in a solution relative to a given mass of solvent. This contrasts with molarity , which is based on a given volume of solution.
A commonly used unit for molality is moles per kilogram (mol/kg). A solution of concentration 1 mol/kg is also sometimes denoted as 1 molal . The unit mol/kg requires that molar mass be expressed in kg/mol instead of the usual g/mol or kg/kmol.
The molality, b , of a solution is defined as the amount of substance (in moles ) of solute, n solute , divided by the mass (in kg ) of the solvent , m solvent : [ 1 ]
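b = \frac{n_{\text{solute}}}{m_{\text{solvent}}}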
In the case of solutions with more than one solvent, molality can be defined for the mixed solvent considered as a pure pseudo-solvent. Instead of mole solute per kilogram solvent as in the binary case, units are defined as mole solute per kilogram mixed solvent. [ 2 ]
The term molality is formed in analogy to molarity , the molar concentration of a solution. The earliest known use of the intensive property molality and of its adjectival unit, the now-deprecated molal , appears to be G. N. Lewis and M. Randall's 1923 book Thermodynamics and the Free Energies of Chemical Substances . [ 3 ] Though the two terms are subject to being confused with one another, the molality and molarity of a dilute aqueous solution are nearly the same, as one kilogram of water (solvent) occupies the volume of 1 liter at room temperature and a small amount of solute has little effect on the volume.
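A short numerical sketch of this near-equality; the solute (NaCl), its amount, and the assumed solution density of 1.00 kg/L are illustrative assumptions, not values from the text:

```python
# Molality vs. molarity for a dilute aqueous solution (illustrative).
n_solute = 0.100        # mol of NaCl dissolved (assumed)
m_solvent = 1.000       # kg of water
M_solute = 0.05844      # kg/mol, molar mass of NaCl

molality = n_solute / m_solvent                # 0.1000 mol/kg
m_solution = m_solvent + n_solute * M_solute   # 1.0058 kg of solution
volume = m_solution / 1.00                     # L, assuming ~1.00 kg/L density
molarity = n_solute / volume                   # ~0.0994 mol/L

print(f"{molality:.4f} mol/kg vs {molarity:.4f} mol/L")  # nearly equal
```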
The SI unit for molality is moles per kilogram of solvent.
A solution with a molality of 3 mol/kg is often described as "3 molal", "3 m" or "3 m ". However, following the SI system of units, the National Institute of Standards and Technology , the United States authority on measurement , considers the term "molal" and the unit symbol "m" to be obsolete, and suggests mol/kg or a related unit of the SI. [ 4 ]
The primary advantage of using molality as a measure of concentration is that molality only depends on the masses of solute and solvent, which are unaffected by variations in temperature and pressure. In contrast, solutions prepared volumetrically (e.g. molar concentration or mass concentration ) are likely to change as temperature and pressure change. In many applications, this is a significant advantage because the mass, or the amount, of a substance is often more important than its volume (e.g. in a limiting reagent problem).
Another advantage of molality is the fact that the molality of one solute in a solution is independent of the presence or absence of other solutes.
Unlike all the other compositional properties listed in the "Relation" section (below), molality depends on the choice of the substance to be called "solvent" in an arbitrary mixture. If there is only one pure liquid substance in a mixture, the choice is clear, but not all solutions are this clear-cut: in an alcohol–water solution, either one could be called the solvent; in an alloy, or solid solution , there is no clear choice and all constituents may be treated alike. In such situations, mass or mole fraction is the preferred compositional specification.
In what follows, the solvent may be given the same treatment as the other constituents of the solution, such that the molality of the solvent of an n -solute solution, say b 0 , is found to be nothing more than the reciprocal of its molar mass, M 0 (expressed in the unit kg/mol):
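b_0 = \frac{n_0}{n_0 M_0} = \frac{1}{M_0}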
For the solutes the expression of molalities is similar:
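b_i = \frac{n_i}{n_0 M_0} = \frac{x_i}{x_0 M_0}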
The expressions linking molalities to mass fractions and mass concentrations contain the molar masses of the solutes M i :
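In standard form (with w 0 and ρ 0 the mass fraction and mass concentration of the solvent), these read

b_i = \frac{w_i}{M_i\, w_0}, \qquad b_i = \frac{\rho_i}{M_i\, \rho_0}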
Similarly the equalities below are obtained from the definitions of the molalities and of the other compositional quantities.
The mole fraction of solvent can be obtained from the definition by dividing the numerator and denominator by the amount of solvent n 0 :
Then the ratios of the other mole amounts to the amount of solvent are replaced by expressions containing molalities:
giving the result
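x_0 = \frac{1}{1 + M_0 \sum_i b_i}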
The conversions to and from the mass fraction , w 1 , of the solute in a single-solute solution are

b_1 = \frac{w_1}{(1 - w_1)\, M_1}, \qquad w_1 = \frac{b_1 M_1}{1 + b_1 M_1},

where b 1 is the molality and M 1 is the molar mass of the solute.
More generally, for an n -solute/one-solvent solution, letting b i and w i be, respectively, the molality and mass fraction of the i -th solute,

b_i = \frac{w_i}{M_i\, w_0},
where M i is the molar mass of the i th solute, and w 0 is the mass fraction of the solvent, which is expressible both as a function of the molalities as well as a function of the other mass fractions:

w_0 = \frac{1}{1 + \sum_j b_j M_j} = 1 - \sum_j w_j.
Substitution gives:

w_i = \frac{b_i M_i}{1 + \sum_j b_j M_j}.
The conversions to and from the mole fraction , x 1 , of the solute in a single-solute solution are

b_1 = \frac{x_1}{M_0 (1 - x_1)}, \qquad x_1 = \frac{b_1 M_0}{1 + b_1 M_0},

where M 0 is the molar mass of the solvent.
More generally, for an n -solute/one-solvent solution, letting x i be the mole fraction of the i th solute,

b_i = \frac{x_i}{M_0\, x_0},
where x 0 is the mole fraction of the solvent, expressible both as a function of the molalities as well as a function of the other mole fractions:

x_0 = \frac{1}{1 + M_0 \sum_j b_j} = 1 - \sum_j x_j.
Substitution gives:

x_i = \frac{b_i M_0}{1 + M_0 \sum_j b_j}.
The conversions to and from the molar concentration , c 1 , for one-solute solutions are

c_1 = \frac{\rho\, b_1}{1 + b_1 M_1}, \qquad b_1 = \frac{c_1}{\rho - c_1 M_1},

where ρ is the mass density of the solution, b 1 is the molality, and M 1 is the molar mass (in kg/mol) of the solute.
For solutions with n solutes, the conversions are

c_i = b_i\, M_0\, c_0, \qquad b_i = \frac{c_i}{M_0\, c_0},
where the molar concentration of the solvent c 0 is expressible both as a function of the molalities as well as a function of the other molarities:

c_0 = \frac{\rho}{M_0 \left(1 + \sum_j b_j M_j\right)} = \frac{\rho - \sum_j c_j M_j}{M_0}.
Substitution gives:

c_i = \frac{\rho\, b_i}{1 + \sum_j b_j M_j}.
The conversions to and from the mass concentration , ρ solute , of a single-solute solution are

\rho_{\text{solute}} = \frac{\rho\, b_1 M_1}{1 + b_1 M_1}, \qquad b_1 = \frac{\rho_{\text{solute}}}{M_1 (\rho - \rho_{\text{solute}})},

where ρ is the mass density of the solution, b 1 is the molality, and M 1 is the molar mass of the solute .
For the general n -solute solution, the mass concentration of the i th solute, ρ i , is related to its molality, b i , as follows:

\rho_i = b_i\, M_i\, \rho_0,
where the mass concentration of the solvent, ρ 0 , is expressible both as a function of the molalities as well as a function of the other mass concentrations:

\rho_0 = \frac{\rho}{1 + \sum_j b_j M_j} = \rho - \sum_j \rho_j.
Substitution gives:

\rho_i = \frac{\rho\, b_i M_i}{1 + \sum_j b_j M_j}.
Alternatively, one may use just the last two equations given for the compositional property of the solvent in each of the preceding sections, together with the relationships given below, to derive the remainder of properties in that set:
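\frac{b_i}{b_j} = \frac{x_i}{x_j} = \frac{c_i}{c_j} = \frac{w_i\, M_j}{w_j\, M_i} = \frac{\rho_i\, M_j}{\rho_j\, M_i}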
where i and j are subscripts representing all the constituents, the n solutes plus the solvent.
An acid mixture consists of 0.76, 0.04, and 0.20 mass fractions of 70% HNO 3 , 49% HF, and H 2 O, where the percentages refer to mass fractions of the bottled acids carrying a balance of H 2 O. The first step is determining the mass fractions of the constituents: w(\text{HNO}_3) = 0.76 \times 0.70 = 0.532, \quad w(\text{HF}) = 0.04 \times 0.49 = 0.0196, \quad w(\text{H}_2\text{O}) = 1 - 0.532 - 0.0196 = 0.448.
The approximate molar masses in kg/mol are M(\text{HNO}_3) = 0.063, M(\text{HF}) = 0.020, and M(\text{H}_2\text{O}) = 0.018.
First derive the molality of the solvent, in mol/kg: b_{\text{H}_2\text{O}} = \frac{1}{M_{\text{H}_2\text{O}}} = \frac{1}{0.018} \approx 55.6\ \text{mol/kg},
and use that to derive all the others by use of the equal ratios b_i / b_{\text{H}_2\text{O}} = (w_i / M_i)\,/\,(w_{\text{H}_2\text{O}} / M_{\text{H}_2\text{O}}), which give, for instance, b_{\text{HNO}_3} = \frac{w_{\text{HNO}_3}}{M_{\text{HNO}_3}\, w_{\text{H}_2\text{O}}} \approx 18.8\ \text{mol/kg}.
Actually, b_{\text{H}_2\text{O}} cancels out, because it is not needed. In this case, there is a more direct equation; we use it to derive the molality of HF: b_{\text{HF}} = \frac{w_{\text{HF}}}{M_{\text{HF}}\, w_{\text{H}_2\text{O}}} \approx 2.2\ \text{mol/kg}.
The mole fractions may be derived from this result:
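The following sketch reproduces the whole calculation from the given data; the molar masses are the approximate values quoted above, and the mole fractions are obtained with the relations from the earlier sections:

```python
# Acid-mixture example: 0.76, 0.04 and 0.20 mass fractions of bottled
# 70% HNO3, 49% HF and water. Molar masses (kg/mol) as assumed above.
M = {"HNO3": 0.063, "HF": 0.020, "H2O": 0.018}

# Mass fractions of the actual constituents (balance is water):
w = {"HNO3": 0.76 * 0.70, "HF": 0.04 * 0.49}
w["H2O"] = 1.0 - w["HNO3"] - w["HF"]             # ~0.448

# Molalities, b_i = w_i / (M_i * w_H2O):
b = {s: w[s] / (M[s] * w["H2O"]) for s in ("HNO3", "HF")}
# b["HNO3"] ~ 18.8 mol/kg, b["HF"] ~ 2.2 mol/kg

# Mole fractions via x_0 = 1/(1 + M_0*sum(b_i)) and x_i = b_i*M_0*x_0:
x0 = 1.0 / (1.0 + M["H2O"] * sum(b.values()))
x = {s: b[s] * M["H2O"] * x0 for s in b}
print(b, x0, x)   # x0 ~ 0.73, x(HNO3) ~ 0.25, x(HF) ~ 0.03
```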
Osmolality is a variation of molality that takes into account only solutes that contribute to a solution's osmotic pressure . It is measured in osmoles of the solute per kilogram of water. This unit is frequently used in medical laboratory results in place of osmolarity , because it can be measured simply by depression of the freezing point of a solution, or cryoscopy (see also: osmostat and colligative properties ).
Molality appears in the expression of the apparent (molar) volume of a solute as a function of the molality b of that solute (and density of the solution and solvent):
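One common form, assuming ρ and ρ 0 denote the densities of the solution and of the pure solvent, is

{}^{\phi}V_1 = \frac{M_1}{\rho} - \frac{\rho - \rho_0}{b_1\, \rho\, \rho_0}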
For multicomponent systems the relation is slightly modified by the sum of molalities of solutes. Also a total molality and a mean apparent molar volume can be defined for the solutes together and also a mean molar mass of the solutes as if they were a single solute. In this case the first equality from above is modified with the mean molar mass M of the pseudosolute instead of the molar mass of the single solute:
The sum of products molalities - apparent molar volumes of solutes in their binary solutions equals the product between the sum of molalities of solutes and apparent molar volume in ternary or multicomponent solution. [ 5 ]
For concentrated ionic solutions the activity coefficient of the electrolyte is split into electric and statistical components.
The statistical part includes molality b, hydration index number h , the number of ions from the dissociation and the ratio r a between the apparent molar volume of the electrolyte and the molar volume of water.
The statistical part of the activity coefficient of a concentrated solution is: [ 6 ] [ 7 ] [ 8 ]
The molalities of solutes b 1 , b 2 in a ternary solution obtained by mixing two binary aqueous solutions with different solutes (say a sugar and a salt, or two different salts) differ from the initial molalities b ii of the solutes in their binary solutions:
b_1 = \frac{m_{11}}{M_1 (m_{01} + m_{02})} = \frac{n_{11}}{m_{01} + m_{02}} = \frac{b_{11}}{1 + \frac{m_{02}}{m_{01}}},
b_2 = \frac{m_{22}}{M_2 (m_{01} + m_{02})} = \frac{n_{22}}{m_{01} + m_{02}} = \frac{b_{22}}{\frac{m_{01}}{m_{02}} + 1},
b_{11} = \frac{m_{11}}{M_1 m_{01}} = \frac{n_{11}}{m_{01}},
b_{22} = \frac{m_{22}}{M_2 m_{02}} = \frac{n_{22}}{m_{02}}.
The solvent content, expressed as mass fractions w 01 and w 02 of the two solutions (of masses m s1 and m s2 ) to be mixed, is first calculated as a function of the initial molalities. Then the amount (mol) of solute from each binary solution is divided by the sum of the masses of water after mixing:
b_1 = \frac{1}{M_1} \frac{w_{11} m_{s1}}{w_{01} m_{s1} + w_{02} m_{s2}} = \frac{1}{M_1} \frac{w_{11} m_{s1}}{(1 - w_{11}) m_{s1} + (1 - w_{22}) m_{s2}} = \frac{1}{M_1} \frac{w_{11} m_{s1}}{m_{s1} + m_{s2} - w_{11} m_{s1} - w_{22} m_{s2}},
b_2 = \frac{1}{M_2} \frac{w_{22} m_{s2}}{w_{01} m_{s1} + w_{02} m_{s2}} = \frac{1}{M_2} \frac{w_{22} m_{s2}}{(1 - w_{11}) m_{s1} + (1 - w_{22}) m_{s2}} = \frac{1}{M_2} \frac{w_{22} m_{s2}}{m_{s1} + m_{s2} - w_{11} m_{s1} - w_{22} m_{s2}}.
Mass fractions of each solute in the initial solutions w 11 and w 22 are expressed as a function of the initial molalities b 11 , b 22 :
w_{11} = \frac{b_{11} M_1}{b_{11} M_1 + 1},
w_{22} = \frac{b_{22} M_2}{b_{22} M_2 + 1}.
These expressions for the mass fractions are substituted into the final molalities:
b_1 = \frac{1}{M_1} \frac{1}{\frac{1}{w_{11}} + \frac{m_{s2}}{w_{11} m_{s1}} - 1 - \frac{w_{22} m_{s2}}{w_{11} m_{s1}}},
b_2 = \frac{1}{M_2} \frac{1}{\frac{m_{s1}}{w_{22} m_{s2}} + \frac{1}{w_{22}} - \frac{w_{11} m_{s1}}{w_{22} m_{s2}} - 1}.
The results for a ternary solution can be extended to a multicomponent solution (with more than two solutes).
The molalities of the solutes in a ternary solution can also be expressed in terms of the molalities in the binary solutions and their masses:
b_1 = \frac{m_{11}}{M_1 (m_{01} + m_{02})} = \frac{n_{11}}{m_{01} + m_{02}},
b_2 = \frac{m_{22}}{M_2 (m_{01} + m_{02})} = \frac{n_{22}}{m_{01} + m_{02}}.
The binary solution molalities are:
b_{11} = \frac{m_{11}}{M_1 m_{01}} = \frac{n_{11}}{m_{01}},
b_{22} = \frac{m_{22}}{M_2 m_{02}} = \frac{n_{22}}{m_{02}}.
The masses of the solutes, determined from the molalities and the masses of water, can be substituted into the expressions for the masses of the solutions:
m_{s1} = m_{01} + m_{11} = m_{01} (1 + b_{11} M_1).
Similarly for the mass of the second solution:
m_{s2} = m_{02} + m_{22} = m_{02} (1 + b_{22} M_2).
The masses of water appearing in the denominators of the ternary molalities can then be obtained as functions of the binary molalities and the masses of the solutions:
m_{01} = \frac{m_{s1}}{1 + b_{11} M_1},
m_{02} = \frac{m_{s2}}{1 + b_{22} M_2}.
Thus the ternary molalities are:
b_1 = \frac{b_{11} m_{01}}{m_{01} + m_{02}} = \frac{b_{11}}{1 + \frac{m_{02}}{m_{01}}} = \frac{b_{11}}{1 + \frac{m_{s2}}{m_{s1}} \frac{1 + b_{11} M_1}{1 + b_{22} M_2}},
b_2 = \frac{b_{22} m_{02}}{m_{01} + m_{02}} = \frac{b_{22}}{1 + \frac{m_{01}}{m_{02}}} = \frac{b_{22}}{1 + \frac{m_{s1}}{m_{s2}} \frac{1 + b_{22} M_2}{1 + b_{11} M_1}}.
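A minimal sketch applying the last forms of these equations; the two solutions (a sucrose and an NaCl solution, with these amounts and molar masses) are hypothetical inputs:

```python
# Molalities in a ternary solution obtained by mixing two binary solutions.
def ternary_molalities(b11, M1, ms1, b22, M2, ms2):
    """b11, b22: initial molalities (mol/kg); M1, M2: molar masses (kg/mol);
    ms1, ms2: masses (kg) of the two solutions being mixed."""
    b1 = b11 / (1 + (ms2 / ms1) * (1 + b11 * M1) / (1 + b22 * M2))
    b2 = b22 / (1 + (ms1 / ms2) * (1 + b22 * M2) / (1 + b11 * M1))
    return b1, b2

# 0.5 kg of 2.0 mol/kg sucrose (M = 0.3423 kg/mol) mixed with
# 0.5 kg of 1.0 mol/kg NaCl (M = 0.05844 kg/mol):
b1, b2 = ternary_molalities(2.0, 0.3423, 0.5, 1.0, 0.05844, 0.5)
print(f"b(sucrose) = {b1:.3f} mol/kg, b(NaCl) = {b2:.3f} mol/kg")
```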
For solutions with three or more solutes the denominator is a sum of the masses of solvent in the n binary solutions which are mixed:
b_1 = \frac{m_{11}}{M_1 (m_{01} + m_{02} + m_{03} + \cdots)} = \frac{n_{11}}{m_{01} + m_{02} + \cdots} = \frac{b_{11}}{1 + \frac{m_{02}}{m_{01}} + \frac{m_{03}}{m_{01}} + \cdots},
b_2 = \frac{m_{22}}{M_2 (m_{01} + m_{02} + m_{03} + \cdots)} = \frac{n_{22}}{m_{01} + m_{02} + \cdots},
b_3 = \frac{m_{33}}{M_3 (m_{01} + m_{02} + m_{03} + \cdots)} = \frac{n_{33}}{m_{01} + m_{02} + \cdots}. | https://en.wikipedia.org/wiki/Molality |
In chemistry , the molar absorption coefficient or molar attenuation coefficient ( ε ) [ 1 ] is a measurement of how strongly a chemical species absorbs, and thereby attenuates , light at a given wavelength . It is an intrinsic property of the species. The SI unit of molar absorption coefficient is the square metre per mole ( m 2 /mol ), but in practice, quantities are usually expressed in terms of M −1 ⋅cm −1 or L⋅mol −1 ⋅cm −1 (the latter two units are both equal to 0.1 m 2 /mol ). In older literature, the cm 2 /mol is sometimes used; 1 M −1 ⋅cm −1 equals 1000 cm 2 /mol. The molar absorption coefficient is also known as the molar extinction coefficient and molar absorptivity , but the use of these alternative terms has been discouraged by the IUPAC . [ 2 ] [ 3 ]
The absorbance of a material that has only one absorbing species also depends on the pathlength and the concentration of the species, according to the Beer–Lambert law , A = \varepsilon c \ell, where ε is the molar absorption coefficient, c is the molar concentration of the species, and ℓ is the pathlength.
Different disciplines have different conventions as to whether absorbance is decadic (10-based) or Napierian (e-based), i.e., defined with respect to the transmission via common logarithm (log 10 ) or a natural logarithm (ln). The molar absorption coefficient is usually decadic. [ 1 ] [ 4 ] When ambiguity exists, it is important to indicate which one applies.
When there are N absorbing species in a solution, the overall absorbance is the sum of the absorbances for each individual species i :
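A = \ell \sum_{i=1}^{N} \varepsilon_i c_i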
The composition of a mixture of N absorbing species can be found by measuring the absorbance at N wavelengths (the values of the molar absorption coefficient for each species at these wavelengths must also be known). The wavelengths chosen are usually the wavelengths of maximum absorption (absorbance maxima) for the individual species. None of the wavelengths may be an isosbestic point for a pair of species. The set of the following simultaneous equations can be solved to find the concentrations of each absorbing species:
The molar absorption coefficient (in units of M −1 cm −1 ) is directly related to the attenuation cross section (in units of cm 2 ) via the Avogadro constant N A : [ 5 ]
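\sigma = \frac{10^{3} \ln 10}{N_{\text{A}}}\, \varepsilon \approx 3.82 \times 10^{-21}\, \varepsilon

(The numerical prefactor is a commonly quoted value, assumed here: the factor 10 3 converts litres to cm 3 and ln 10 converts the decadic coefficient to Napierian form.)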
The mass absorption coefficient is equal to the molar absorption coefficient divided by the molar mass M of the absorbing species: \varepsilon_{\text{mass}} = \varepsilon / M.
In biochemistry , the molar absorption coefficient of a protein at 280 nm depends almost exclusively on the number of aromatic residues, particularly tryptophan , and can be predicted from the sequence of amino acids . [ 6 ] Similarly, the molar absorption coefficient of nucleic acids at 260 nm can be predicted given the nucleotide sequence.
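A minimal sketch of such a sequence-based prediction; the per-residue coefficients (Trp 5500, Tyr 1490, cystine 125 M −1 ⋅cm −1 ) are widely used literature values assumed here rather than taken from this article, and the residue counts are hypothetical:

```python
# Predict a protein's molar absorption coefficient at 280 nm from counts
# of its aromatic residues and disulfide bonds (assumed literature values).
def molar_absorption_280(n_trp: int, n_tyr: int, n_cystine: int) -> float:
    """Returns epsilon(280 nm) in M^-1 cm^-1."""
    return 5500 * n_trp + 1490 * n_tyr + 125 * n_cystine

print(molar_absorption_280(n_trp=2, n_tyr=6, n_cystine=1))  # 20065
```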
If the molar absorption coefficient is known, it can be used to determine the concentration of a protein in solution. | https://en.wikipedia.org/wiki/Molar_absorption_coefficient |
Molar concentration (also called molarity , amount concentration or substance concentration ) is the number of moles of solute per liter of solution. [ 1 ] Specifically, it is a measure of the concentration of a chemical species , in particular, of a solute in a solution , in terms of amount of substance per unit volume of solution. In chemistry , the most commonly used unit for molarity is the number of moles per liter , having the unit symbol mol/L or mol/ dm 3 (1000 mol/ m 3 ) in SI units. A solution with a concentration of 1 mol/L is said to be 1 molar , commonly designated as 1 M. Molarity is often depicted with square brackets around the substance of interest; for example, the molarity of the hydrogen ion is depicted as [H + ].
Molar concentration or molarity is most commonly expressed in units of moles of solute per litre of solution . [ 2 ] For use in broader applications, it is defined as amount of substance of solute per unit volume of solution, or per unit volume available to the species, represented by lowercase c : [ 3 ]
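c = \frac{n}{V} = \frac{N}{N_{\text{A}}\, V}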
Here, n is the amount of the solute in moles, [ 4 ] N is the number of constituent particles present in volume V (in litres) of the solution, and N A is the Avogadro constant , since 2019 defined as exactly 6.022 140 76 × 10 23 mol −1 . The ratio N / V is the number density C .
In thermodynamics , the use of molar concentration is often not convenient because the volume of most solutions slightly depends on temperature due to thermal expansion . This problem is usually resolved by introducing temperature correction factors , or by using a temperature-independent measure of concentration such as molality . [ 4 ]
The reciprocal quantity represents the dilution (volume) which can appear in Ostwald's law of dilution .
If a molecule or salt dissociates in solution, the concentration that refers to the original chemical formula is sometimes called the formal concentration or formality ( F A ) or the analytical concentration ( c A ). For example, if a sodium carbonate solution ( Na 2 CO 3 ) has a formal concentration of c ( Na 2 CO 3 ) = 1 mol/L, the molar concentrations are c ( Na + ) = 2 mol/L and c ( CO 2− 3 ) = 1 mol/L, because the salt dissociates into these ions. [ 5 ]
In the International System of Units (SI), the coherent unit for molar concentration is mol / m 3 . However, most chemical literature traditionally uses mol / dm 3 , which is the same as mol / L . This traditional unit is often called a molar and denoted by the letter M, for example:
The SI prefix " mega " (symbol M) has the same symbol. However, the prefix is never used alone, so "M" unambiguously denotes molar.
Sub-multiples, such as "millimolar" (mM) and "nanomolar" (nM), consist of the unit preceded by an SI prefix :
The conversion to number concentration C i is given by C_i = N_{\text{A}}\, c_i, where N A is the Avogadro constant .
The conversion to mass concentration ρ i is given by \rho_i = c_i M_i, where M i is the molar mass of constituent i .
The conversion to mole fraction x i is given by x_i = \frac{c_i\, \overline{M}}{\rho}, where \overline{M} is the average molar mass of the solution and ρ is the density of the solution.
A simpler relation can be obtained by considering the total molar concentration, namely, the sum of molar concentrations of all the components of the mixture: c = \sum_j c_j, from which x_i = c_i / c.
The conversion to mass fraction w i is given by w_i = \frac{c_i M_i}{\rho}.
For binary mixtures, the conversion to molality b 2 is b_2 = \frac{c_2}{\rho - c_2 M_2}, where the solvent is substance 1, and the solute is substance 2.
For solutions with more than one solute, the conversion is b_i = \frac{c_i}{\rho - \sum_j c_j M_j}, where the sum runs over all solutes.
The sum of molar concentrations gives the total molar concentration, namely the density of the mixture divided by the molar mass of the mixture, or, in other words, the reciprocal of the molar volume of the mixture. In an ionic solution, ionic strength is proportional to the sum of the molar concentrations of salts.
The sum of products between these quantities equals one:
The molar concentration depends on the variation of the volume of the solution, due mainly to thermal expansion. On small intervals of temperature, the dependence is c_i = \frac{c_{i,T_0}}{1 + \alpha (T - T_0)}, where c_{i,T_0} is the molar concentration at a reference temperature T_0 and α is the thermal expansion coefficient of the mixture.
The volume of such a solution is 104.3 mL (volume is directly observable); its density is calculated to be 1.07 g/mL (111.6 g / 104.3 mL).
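A short sketch of this example; the stated figures (111.6 g of solution, 104.3 mL) are consistent with the common textbook setup of 11.6 g of NaCl dissolved in 100 g of water, which is assumed here together with a molar mass of 58.44 g/mol:

```python
# Molar concentration of NaCl from the figures given in the example.
m_nacl = 11.6                           # g of NaCl (assumed)
m_water = 100.0                         # g of water (assumed)
M_nacl = 58.44                          # g/mol (assumed)

volume = 104.3 / 1000                   # L, directly observed
density = (m_nacl + m_water) / 104.3    # ~1.07 g/mL, as stated

n_nacl = m_nacl / M_nacl                # ~0.1985 mol
c_nacl = n_nacl / volume                # ~1.90 mol/L
print(f"density = {density:.2f} g/mL, c(NaCl) = {c_nacl:.2f} mol/L")
```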
The molar concentration of NaCl in the solution is therefore about 1.9 mol/L. | https://en.wikipedia.org/wiki/Molar_concentration |
The molar conductivity of an electrolyte solution is defined as its conductivity divided by its molar concentration: [ 1 ] [ 2 ]
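\Lambda_{\text{m}} = \frac{\kappa}{c}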
where κ is the measured conductivity of the solution and c is the molar concentration of the electrolyte.
The SI unit of molar conductivity is siemens metres squared per mole (S m 2 mol −1 ). [ 2 ] However, values are often quoted in S cm 2 mol −1 . [ 4 ] In these last units, the value of Λ m may be understood as the conductance of a volume of solution between parallel plate electrodes one centimeter apart and of sufficient area so that the solution contains exactly one mole of electrolyte. [ 5 ]
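A minimal sketch of the unit bookkeeping implied here, converting a conductivity in S/cm and a concentration in mol/L into S cm 2 mol −1 ; the KCl figures are illustrative assumptions, not values from this article:

```python
# Molar conductivity from measured conductivity and concentration.
kappa = 0.0129   # conductivity, S/cm (illustrative value for ~0.10 M KCl)
c = 0.10         # molar concentration, mol/L

# 1 L = 1000 cm^3, so the concentration in mol/cm^3 is c/1000.
lambda_m = kappa / (c / 1000.0)   # S cm^2 mol^-1
print(f"Lambda_m = {lambda_m:.1f} S cm^2 mol^-1")   # ~129
```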
There are two types of electrolytes: strong and weak. Strong electrolytes usually undergo complete ionization, and therefore they have higher conductivity than weak electrolytes, which undergo only partial ionization. For strong electrolytes , such as salts , strong acids and strong bases , the molar conductivity depends only weakly on concentration. On dilution there is a regular increase in the molar conductivity of a strong electrolyte, due to the decrease in solute–solute interaction. Based on experimental data, Friedrich Kohlrausch (around the year 1900) proposed the non-linear law for strong electrolytes:
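\Lambda_{\text{m}} = \Lambda_{\text{m}}^{\circ} - K \sqrt{c}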
where \Lambda_{\text{m}}^{\circ} is the limiting molar conductivity (the molar conductivity extrapolated to zero concentration) and K is an empirical constant characteristic of the electrolyte.
This law is valid for low electrolyte concentrations only; it fits into the Debye–Hückel–Onsager equation . [ 6 ]
For weak electrolytes (i.e. incompletely dissociated electrolytes), however, the molar conductivity strongly depends on concentration: The more dilute a solution, the greater its molar conductivity, due to increased ionic dissociation . For example, acetic acid has a higher molar conductivity in dilute aqueous acetic acid than in concentrated acetic acid.
Friedrich Kohlrausch in 1875–1879 established that to a high accuracy in dilute solutions, molar conductivity can be decomposed into contributions of the individual ions. This is known as Kohlrausch's law of independent ionic migration . [ 7 ] For any electrolyte A x B y , the limiting molar conductivity is expressed as x times the limiting molar conductivity of A y + and y times the limiting molar conductivity of B x − .
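In symbols:

\Lambda_{\text{m}}^{\circ} = x\, \lambda^{\circ}_{\text{A}^{y+}} + y\, \lambda^{\circ}_{\text{B}^{x-}}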
where \lambda^{\circ}_{\text{A}^{y+}} and \lambda^{\circ}_{\text{B}^{x-}} are the limiting molar conductivities of the individual ions.
Kohlrausch's evidence for this law was that the limiting molar conductivities of two electrolytes with two different cations and a common anion differ by an amount which is independent of the nature of the anion. For example, Λ 0 (KX) − Λ 0 (NaX) = 23.4 S cm 2 mol −1 for X = Cl − , I − and 1 / 2 SO 2− 4 . This difference is ascribed to a difference in ionic conductivities between K + and Na + . Similar regularities are found for two electrolytes with a common anion and two cations. [ 8 ]
The molar ionic conductivity of each ionic species is proportional to its electrical mobility ( μ ), or drift velocity per unit electric field, according to the equation
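\lambda = z\, \mu\, F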
where z is the ionic charge, and F is the Faraday constant . [ 9 ]
The limiting molar conductivity of a weak electrolyte cannot be determined reliably by extrapolation. Instead it can be expressed as a sum of ionic contributions, which can be evaluated from the limiting molar conductivities of strong electrolytes containing the same ions. For aqueous acetic acid as an example, [ 4 ]
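\Lambda_{\text{m}}^{\circ}(\text{CH}_3\text{COOH}) = \Lambda_{\text{m}}^{\circ}(\text{CH}_3\text{COONa}) + \Lambda_{\text{m}}^{\circ}(\text{HCl}) - \Lambda_{\text{m}}^{\circ}(\text{NaCl})

(the Na + and Cl − contributions cancel).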
Values for each ion may be determined using measured ion transport numbers . For the cation: \lambda^{\circ}_{+} = t_{+}\, \Lambda^{\circ}_{\text{m}}, and for the anion: \lambda^{\circ}_{-} = t_{-}\, \Lambda^{\circ}_{\text{m}}.
Most monovalent ions in water have limiting molar ionic conductivities in the range of 40–80 S cm 2 mol −1 . For example: [ 4 ]
The order of the values for alkali metals is surprising, since it shows that the smallest cation Li + moves more slowly in a given electric field than Na + , which in turn moves more slowly than K + . This occurs because of the effect of solvation of water molecules: the smaller Li + binds most strongly to about four water molecules so that the moving cation species is effectively Li(H 2 O) + 4 . The solvation is weaker for Na + and still weaker for K + . [ 4 ] The increase in halogen ion mobility from F − to Cl − to Br − is also due to decreasing solvation.
Exceptionally high values are found for H + ( 349.8 S cm 2 mol −1 ) and OH − ( 198.6 S cm 2 mol −1 ), which are explained by the Grotthuss proton-hopping mechanism for the movement of these ions. [ 4 ] The H + also has a larger conductivity than other ions in alcohols , which have a hydroxyl group, but behaves more normally in other solvents, including liquid ammonia and nitrobenzene . [ 4 ]
For multivalent ions, it is usual to consider the conductivity divided by the equivalent ion concentration in terms of equivalents per litre, where 1 equivalent is the quantity of ions that have the same amount of electric charge as 1 mol of a monovalent ion: 1 / 2 mol Ca 2+ , 1 / 2 mol SO 2− 4 , 1 / 3 mol Al 3+ , 1 / 4 mol Fe(CN) 4− 6 , etc. This quotient can be called the equivalent conductivity , although IUPAC has recommended that use of this term be discontinued and the term molar conductivity be used for the values of conductivity divided by equivalent concentration. [ 10 ] If this convention is used, then the values are in the same range as monovalent ions, e.g. 59.5 S cm 2 mol −1 for 1 / 2 Ca 2+ and 80.0 S cm 2 mol −1 for 1 / 2 SO 2− 4 . [ 4 ]
From the ionic molar conductivities of cations and anions, effective ionic radii can be calculated using the concept of Stokes radius . The values obtained for an ionic radius in solution calculated this way can be quite different from the ionic radius for the same ion in crystals, due to the effect of hydration in solution.
Ostwald's law of dilution , which gives the dissociation constant of a weak electrolyte as a function of concentration, can be written in terms of molar conductivity. Thus, the p K a values of acids can be calculated by measuring the molar conductivity and extrapolating to zero concentration. Namely, p K a = p( K / 1 mol/L ) at the zero-concentration limit, where K is the dissociation constant from Ostwald's law. | https://en.wikipedia.org/wiki/Molar_conductivity |
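A sketch of this procedure under stated assumptions: the limiting molar conductivity of acetic acid (~390.5 S cm 2 mol −1 ) and the measured molar conductivity at 0.010 mol/L (~16.3 S cm 2 mol −1 ) are illustrative textbook-style figures, not values from this article:

```python
# Estimate K (and pKa) of a weak electrolyte via Ostwald's dilution law.
import math

lambda_0 = 390.5      # limiting molar conductivity, S cm^2 mol^-1 (assumed)
lambda_m = 16.3       # measured molar conductivity at concentration c (assumed)
c = 0.010             # mol/L

alpha = lambda_m / lambda_0          # degree of dissociation
K = c * alpha**2 / (1 - alpha)       # Ostwald's dilution law, mol/L

print(f"alpha = {alpha:.4f}, K = {K:.2e} mol/L, pKa = {-math.log10(K):.2f}")
# ~1.8e-5 mol/L, pKa ~ 4.7, as expected for acetic acid
```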
The molar heat capacity of a chemical substance is the amount of energy that must be added, in the form of heat , to one mole of the substance in order to cause an increase of one unit in its temperature . Alternatively, it is the heat capacity of a sample of the substance divided by the amount of substance of the sample; or also the specific heat capacity of the substance times its molar mass . The SI unit of molar heat capacity is joule per kelvin per mole , J⋅K −1 ⋅mol −1 .
Like the specific heat, the measured molar heat capacity of a substance, especially a gas, may be significantly higher when the sample is allowed to expand as it is heated ( at constant pressure , or isobaric ) than when it is heated in a closed vessel that prevents expansion ( at constant volume , or isochoric ). The ratio between the two, however, is the same heat capacity ratio obtained from the corresponding specific heat capacities.
This property is most relevant in chemistry , when amounts of substances are often specified in moles rather than by mass or volume. The molar heat capacity generally increases with the molar mass, often varies with temperature and pressure, and is different for each state of matter . For example, at atmospheric pressure, the (isobaric) molar heat capacity of water just above the melting point is about 76 J⋅K −1 ⋅mol −1 , but that of ice just below that point is about 37.84 J⋅K −1 ⋅mol −1 . While the substance is undergoing a phase transition , such as melting or boiling, its molar heat capacity is technically infinite , because the heat goes into changing its state rather than raising its temperature. The concept is not appropriate for substances whose precise composition is not known, or whose molar mass is not well defined, such as polymers and oligomers of indeterminate molecular size.
A closely related property of a substance is the heat capacity per mole of atoms , or atom-molar heat capacity , in which the heat capacity of the sample is divided by the number of moles of atoms instead of moles of molecules. So, for example, the atom-molar heat capacity of water is 1/3 of its molar heat capacity, namely 25.3 J⋅K −1 ⋅mol −1 .
In informal chemistry contexts, the molar heat capacity may be called just "heat capacity" or "specific heat". However, international standards now recommend that "specific heat capacity" always refer to capacity per unit of mass, to avoid possible confusion. [ 1 ] Therefore, the word "molar", not "specific", should always be used for this quantity.
The molar heat capacity of a substance, which may be denoted by c m , is the heat capacity C of a sample of the substance divided by the amount (moles) n of the substance in the sample:
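c_{\text{m}} = \frac{C}{n} = \frac{Q}{n\, \Delta T}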
where Q is the amount of heat needed to raise the temperature of the sample by Δ T . Obviously, this parameter cannot be computed when n is not known or defined.
Like the heat capacity of an object, the molar heat capacity of a substance may vary, sometimes substantially, depending on the starting temperature T of the sample and the pressure P applied to it. Therefore, it should be considered a function c m ( P , T ) of those two variables.
These parameters are usually specified when giving the molar heat capacity of a substance; for example, "H 2 O: 75.338 J⋅K −1 ⋅mol −1 (25 °C, 101.325 kPa)". [ 2 ] When not specified, published values of the molar heat capacity c m generally are valid for some standard conditions for temperature and pressure .
However, the dependency of c m ( P , T ) on starting temperature and pressure can often be ignored in practical contexts, e.g. when working in narrow ranges of those variables. In those contexts one can usually omit the qualifier ( P , T ) and approximate the molar heat capacity by a constant c m suitable for those ranges.
Since the molar heat capacity of a substance is the specific heat c times the molar mass of the substance M , its numerical value in SI units is generally smaller than that of the specific heat. Paraffin wax , for example, has a specific heat of about 2500 J⋅K −1 ⋅kg −1 but a molar heat capacity of about 600 J⋅K −1 ⋅mol −1 .
The molar heat capacity is an "intensive" property of a substance, an intrinsic characteristic that does not depend on the size or shape of the amount in consideration. (The qualifier "specific" in front of an extensive property often indicates an intensive property derived from it. [ 3 ] )
The injection of heat energy into a substance, besides raising its temperature, usually causes an increase in its volume and/or its pressure, depending on how the sample is confined. The choice made about the latter affects the measured molar heat capacity, even for the same starting pressure P and starting temperature T . Two particular choices are widely used: heating at constant volume (isochoric), which defines c V ,m , and heating at constant pressure (isobaric), which defines c P ,m .
The value of c V ,m is always less than the value of c P ,m . This difference is particularly notable in gases where values under constant pressure are typically 30% to 66.7% greater than those at constant volume. [ 4 ]
All methods for the measurement of specific heat apply to molar heat capacity as well.
The SI unit of molar heat capacity is joule per kelvin per mole (J/(K⋅mol), J/(K mol), J K −1 mol −1 , etc.). Since an increment of temperature of one degree Celsius is the same as an increment of one kelvin, that is the same as joule per degree Celsius per mole (J/(°C⋅mol)).
In chemistry, heat amounts are still often measured in calories . Confusingly, two units with that name, denoted "cal" and "Cal", have been commonly used to measure amounts of heat: the "small calorie" (or "gram-calorie", "cal"), equal to 4.184 J, and the "grand calorie" (also "kilocalorie", "kilogram-calorie", or "food calorie"; "kcal" or "Cal"), equal to 4184 J.
When heat is measured in these units, the unit of specific heat is usually
The molar heat capacity of a substance has the same dimension as the heat capacity of an object; namely, L 2 ⋅M⋅T −2 ⋅Θ −1 , or M(L/T) 2 /Θ. (Indeed, it is the heat capacity of the object that consists of an Avogadro number of molecules of the substance.) Therefore, the SI unit J⋅K −1 ⋅mol −1 is equivalent to kilogram metre squared per second squared per kelvin (kg⋅m 2 ⋅K −1 ⋅s −2 ).
The temperature of a sample of a substance reflects the average kinetic energy of its constituent particles (atoms or molecules) relative to its center of mass. Quantum mechanics predicts that, at room temperature and ordinary pressures, an isolated atom in a gas cannot store any significant amount of energy except in the form of kinetic energy. Therefore, when a certain number N of atoms of a monatomic gas receives an input Q of heat energy, in a container of fixed volume, the kinetic energy of each atom will increase by Q / N , independently of the atom's mass. This assumption is the foundation of the theory of ideal gases .
In other words, that theory predicts that the molar heat capacity at constant volume c V ,m of all monatomic gases will be the same; specifically,
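c_{V,\text{m}} = \tfrac{3}{2} R \approx 12.5\ \text{J⋅K}^{-1}\text{⋅mol}^{-1}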
where R is the ideal gas constant , about 8.31446 J⋅K −1 ⋅mol −1 (which is the product of the Boltzmann constant k B and the Avogadro constant ). And, indeed, the experimental values of c V ,m for the noble gases helium , neon , argon , krypton , and xenon (at 1 atm and 25 °C) are all 12.5 J⋅K −1 ⋅mol −1 , which is 3 / 2 R ; even though their atomic weights range from 4 to 131.
The same theory predicts that the molar heat capacity of a monatomic gas at constant pressure will be
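c_{P,\text{m}} = c_{V,\text{m}} + R = \tfrac{5}{2} R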
This prediction matches the experimental values, which, for helium through xenon, are 20.78, 20.79, 20.85, 20.95, and 21.01 J⋅K −1 ⋅mol −1 , respectively; [ 5 ] [ 6 ] very close to the theoretical 5 / 2 R = 20.78 J⋅K −1 ⋅mol −1 .
Therefore, the specific heat (per unit of mass, not per mole) of a monatomic gas will be inversely proportional to its (adimensional) relative atomic mass A . That is, approximately,
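c_V \approx \frac{12470\ \text{J⋅K}^{-1}\text{⋅kg}^{-1}}{A}

(the constant is 3 R /2 divided by 10 −3 kg/mol, the mass corresponding to A = 1).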
A polyatomic molecule (consisting of two or more atoms bound together) can store heat energy in other forms besides its kinetic energy. These forms include rotation of the molecule, and vibration of the atoms relative to its center of mass.
These extra degrees of freedom contribute to the molar heat capacity of the substance. Namely, when heat energy is injected into a gas with polyatomic molecules, only part of it will go into increasing their kinetic energy, and hence the temperature; the rest will go to into those other degrees of freedom. Thus, in order to achieve the same increase in temperature, more heat energy will have to be provided to a mol of that substance than to a mol of a monatomic gas. Substances with high atomic count per molecule, like octane , can therefore have a very large heat capacity per mole, and yet a relatively small specific heat (per unit mass). [ 7 ] [ 8 ] [ 9 ]
If the molecule could be entirely described using classical mechanics, then the theorem of equipartition of energy could be used to predict that each degree of freedom would have an average energy in the amount of 1 / 2 kT , where k is the Boltzmann constant, and T is the temperature. If the number of degrees of freedom of the molecule is f , then each molecule would be holding, on average, a total energy equal to 1 / 2 fkT . Then the molar heat capacity (at constant volume) would be
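c_{V,\text{m}} = \tfrac{1}{2} f R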
where R is the ideal gas constant. According to Mayer's relation , the molar heat capacity at constant pressure would be
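c_{P,\text{m}} = c_{V,\text{m}} + R = \left(\tfrac{f}{2} + 1\right) R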
Thus, each additional degree of freedom will contribute 1 / 2 R to the molar heat capacity of the gas (both c V ,m and c P ,m ).
In particular, each molecule of a monatomic gas has only f = 3 degrees of freedom, namely the components of its velocity vector; therefore c V ,m = 3 / 2 R and c P ,m = 5 / 2 R . [ 10 ]
For example, the molar heat capacity of nitrogen N 2 at constant volume is 20.6 J⋅K −1 ⋅mol −1 (at 15 °C, 1 atm), which is 2.49 R . [ 11 ] From the theoretical equation c V ,m = 1 / 2 fR , one concludes that each molecule has f = 5 degrees of freedom. These turn out to be three degrees of the molecule's velocity vector, plus two degrees from its rotation about an axis through the center of mass and perpendicular to the line of the two atoms. The degrees of freedom due to translations and rotations are called the rigid degrees of freedom, since they do not involve any deformation of the molecule.
Because of those two extra degrees of freedom, the molar heat capacity c V ,m of N 2 (20.6 J⋅K −1 ⋅mol −1 ) is greater than that of a hypothetical monatomic gas (12.5 J⋅K −1 ⋅mol −1 ) by a factor of 5 / 3 .
According to classical mechanics, a diatomic molecule like nitrogen should have more degrees of internal freedom, corresponding to vibration of the two atoms that stretch and compress the bond between them.
For thermodynamic purposes, each direction in which an atom can independently vibrate relative to the rest of the molecule introduces two degrees of freedom: one associated with the potential energy from distorting the bonds, and one for the kinetic energy of the atom's motion. In a diatomic molecule like N 2 , there is only one direction for the vibration, and the motions of the two atoms must be opposite but equal; so there are only two degrees of vibrational freedom. That would bring f up to 7, and c V ,m to 3.5 R .
In practice, however, the measured heat capacity of diatomic gases near room temperature corresponds to only five degrees of freedom, not seven. The reason why these vibrations are not absorbing their expected fraction of heat energy input is provided by quantum mechanics . According to that theory, the energy stored in each degree of freedom must increase or decrease only in certain amounts (quanta). Therefore, if the temperature T of the system is not high enough, the average energy that would be available for some of the theoretical degrees of freedom ( kT /2) may be less than the corresponding minimum quantum. If the temperature is low enough, that may be the case for practically all molecules. One then says that those degrees of freedom are "frozen". The molar heat capacity of the gas will then be determined only by the "active" degrees of freedom, those that, for most molecules, can receive enough energy to overcome that quantum threshold. [ 12 ]
For each degree of freedom, there is an approximate critical temperature at which it "thaws" ("unfreezes") and becomes active, thus being able to hold heat energy. For the three translational degrees of freedom of molecules in a gas, this critical temperature is extremely small, so they can be assumed to be always active. For the rotational degrees of freedom, the thawing temperature is usually a few tens of kelvins (although with a very light molecule such as hydrogen the rotational energy levels will be spaced so widely that rotational heat capacity may not completely "unfreeze" until considerably higher temperatures are reached). Vibration modes of diatomic molecules generally start to activate only well above room temperature.
In the case of nitrogen, the rotational degrees of freedom are fully active already at −173 °C (100 K, just 23 K above the boiling point). On the other hand, the vibration modes only start to become active around 350 K (77 °C). Accordingly, the molar heat capacity c P ,m is nearly constant at 29.1 J⋅K −1 ⋅mol −1 from 100 K to about 300 °C. At about that temperature, it starts to increase rapidly, then it slows down again. It is 35.5 J⋅K −1 ⋅mol −1 at 1500 °C, 36.9 at 2500 °C, and 37.5 at 3500 °C. [ 13 ] [ 14 ] The last value corresponds almost exactly to the predicted value for f = 7.
The following is a table of some constant-pressure molar heat capacities c P ,m of various diatomic gases at standard temperature (25 °C = 298 K), at 500 °C, and at 5000 °C, and the apparent number of degrees of freedom f * estimated by the formula f * = 2 c P ,m / R − 2:
(*) At 59 °C (boiling point)
The quantum harmonic oscillator approximation implies that the spacing of energy levels of vibrational modes are inversely proportional to the square root of the reduced mass of the atoms composing the diatomic molecule. This fact explains why the vibrational modes of heavier molecules like Br 2 are active at lower temperatures. The molar heat capacity of Br 2 at room temperature is consistent with f = 7 degrees of freedom, the maximum for a diatomic molecule. At high enough temperatures, all diatomic gases approach this value.
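A small sketch of the estimate used in these comparisons, f * = 2 c P ,m / R − 2; the c P ,m inputs below are illustrative round figures consistent with the discussion, not entries from the omitted table:

```python
# Apparent degrees of freedom from a constant-pressure molar heat capacity.
R = 8.31446  # ideal gas constant, J K^-1 mol^-1

def apparent_dof(cP_molar: float) -> float:
    return 2.0 * cP_molar / R - 2.0

for gas, cP in [("N2, 298 K", 29.1), ("Br2, 298 K", 37.4), ("N2, ~3800 K", 37.5)]:
    print(f"{gas}: f* = {apparent_dof(cP):.1f}")
# N2 at room temperature gives f* ~ 5 (vibrations frozen); Br2 already ~7.
```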
Quantum mechanics also explains why the specific heat of monatomic gases is well predicted by the ideal gas theory with the assumption that each molecule is a point mass that has only the f = 3 translational degrees of freedom.
According to classical mechanics, since atoms have non-zero size, they should also have three rotational degrees of freedom, or f = 6 in total. Likewise, the diatomic nitrogen molecule should have an additional rotation mode, namely about the line of the two atoms; and thus have f = 6 too. In the classical view, each of these modes should store an equal share of the heat energy.
However, according to quantum mechanics, the energy difference between the allowed (quantized) rotation states is inversely proportional to the moment of inertia about the corresponding axis of rotation. Because the moment of inertia of a single atom is exceedingly small, the activation temperature for its rotational modes is extremely high. The same applies to the moment of inertia of a diatomic molecule (or a linear polyatomic one) about the internuclear axis, which is why that mode of rotation is not active in general.
On the other hand, electrons and nuclei can exist in excited states and, in a few exceptional cases, they may be active even at room temperature, or even at cryogenic temperatures.
The set of all possible ways to infinitesimally displace the n atoms of a polyatomic gas molecule is a linear space of dimension 3 n , because each atom can be independently displaced in each of three orthogonal axis directions. However, three of these dimensions are just translation of the molecule by an infinitesimal displacement vector, and others are just rigid rotations of it by an infinitesimal angle about some axis. Still others may correspond to relative rotation of two parts of the molecule about a single bond that connects them.
The independent deformation modes (linearly independent ways to actually deform the molecule, straining its bonds) are only the remaining dimensions of this space. As in the case of diatomic molecules, each of these deformation modes counts as two vibrational degrees of freedom for energy storage purposes: one for the potential energy stored in the strained bonds, and one for the extra kinetic energy of the atoms as they vibrate about the rest configuration of the molecule.
In particular, if the molecule is linear (with all atoms on a straight line), it has only two non-trivial rotation modes, since rotation about its own axis does not displace any atom. Therefore, it has 3 n − 5 actual deformation modes. The number of energy-storing degrees of freedom is then f = 3 + 2 + 2(3 n − 5) = 6 n − 5.
For example, the linear nitrous oxide molecule N≡N=O (with n = 3) has 3 n − 5 = 4 independent infinitesimal deformation modes. Two of them can be described as stretching one of the bonds while the other retains its normal length. The other two can be identified with bending of the molecule at the central atom, in the two directions that are orthogonal to its axis. In each mode, one should assume that the atoms get displaced so that the center of mass remains stationary and there is no rotation. The molecule then has f = 6 n − 5 = 13 total energy-storing degrees of freedom (3 translational, 2 rotational, 8 vibrational). At high enough temperature, its molar heat capacity then should be c P ,m = 7.5 R = 62.36 J⋅K −1 ⋅mol −1 . For cyanogen N≡C−C≡N and acetylene H−C≡C−H ( n = 4) the same analysis yields f = 19 and predicts c P ,m = 10.5 R = 87.3 J⋅K −1 ⋅mol −1 .
A molecule with n atoms that is rigid and not linear has 3 translation modes and 3 non-trivial rotation modes, hence only 3 n − 6 deformation modes. It therefore has f = 3 + 3 + 2(3 n − 6) = 6 n − 6 energy-absorbing degrees of freedom (one less than a linear molecule with the same atom count). Water H 2 O ( n = 3) is bent in its non-strained state, therefore it is predicted to have f = 12 degrees of freedom. [ 19 ] Methane CH 4 ( n = 5) is three-dimensional, and the formula predicts f = 24.
Ethane H 3 C−CH 3 ( n = 8) has 4 degrees of rotational freedom: two about axes that are perpendicular to the central bond, and two more because each methyl group can rotate independently about that bond, with negligible resistance. Therefore, the number of independent deformation modes is 3 n − 7, which gives f = 3 + 4 + 2(3 n − 7) = 6n − 7 = 41.
The following table shows the experimental molar heat capacities at constant pressure c P ,m of the above polyatomic gases at standard temperature (25 °C = 298 K), at 500 °C, and at 5000 °C, and the apparent number of degrees of freedom f * estimated by the formula f * = 2 c P ,m / R − 2:
(*) At 3000 °C
In most solids (but not all), the molecules have a fixed mean position and orientation, and therefore the only degrees of freedom available are the vibrations of the atoms. [ 26 ] Thus the specific heat is proportional to the number of atoms (not molecules) per unit of mass, which is the Dulong–Petit law . Other contributions may come from magnetic and electronic degrees of freedom in solids, but these rarely make substantial contributions near room temperature. [ 27 ] [ 28 ] Since each atom of the solid contributes one independent vibration mode, the number of degrees of freedom in n atoms is 6 n . Therefore, the heat capacity of a sample of a solid substance is expected to be 3 RN a , or (24.94 J/K) N a , where N a is the number of moles of atoms in the sample, not molecules. Said another way, the atom-molar heat capacity of a solid substance is expected to be 3 R = 24.94 J⋅K −1 ⋅mol −1 , where a "mol" denotes an amount of the solid that contains the Avogadro number of atoms. [ 29 ]
It follows that, in molecular solids, the heat capacity per mole of molecules will usually be close to 3 nR , where n is the number of atoms per molecule.
Thus n atoms of a solid should in principle store twice as much energy as n atoms of a monatomic gas. One way to look at this result is to observe that the monatomic gas can only store energy as kinetic energy of the atoms, whereas the solid can store it also as potential energy of the bonds strained by the vibrations. The atom-molar heat capacity of a polyatomic gas approaches that of a solid as the number n of atoms per molecule increases.
As in the case of gases, some of the vibration modes will be "frozen out" at low temperatures, especially in solids with light and tightly bound atoms, causing the atom-molar heat capacity to be less than this theoretical limit. Indeed, the atom-molar (or specific) heat capacity of a solid substance tends toward zero, as the temperature approaches absolute zero.
As predicted by the above analysis, the heat capacity per mole of atoms , rather than per mole of molecules, is found to be remarkably constant for all solid substances at high temperatures. This relationship was noticed empirically in 1819, and is called the Dulong–Petit law , after its two discoverers. [ 30 ] [ 31 ] This discovery was an important argument in support of the atomic theory of matter.
Indeed, for solid metallic chemical elements at room temperature, atom-molar heat capacities range from about 2.8 R to 3.4 R . Large exceptions at the lower end involve solids composed of relatively low-mass, tightly bonded atoms, such as beryllium (2.0 R , only 66% of the theoretical value) and diamond (0.735 R , only 24%). Those conditions imply larger quantum vibrational energy spacing, thus many vibrational modes are "frozen out" at room temperature. Water ice close to the melting point, too, has an anomalously low heat capacity per atom (1.5 R , only 50% of the theoretical value).
At the higher end of possible heat capacities, the heat capacity may exceed 3 R by modest amounts, due to contributions from anharmonic vibrations in solids, and sometimes a modest contribution from conduction electrons in metals. These are not degrees of freedom treated in the Einstein or Debye theories.
Since the bulk density of a solid chemical element is strongly related to its molar mass, there exists a noticeable inverse correlation between a solid's density and its specific heat capacity on a per-mass basis.
This is due to a very approximate tendency of atoms of most elements to be about the same size, despite much wider variations in density and atomic weight. These two factors (constancy of atomic volume and constancy of mole-specific heat capacity) result in a good correlation between the volume of any given solid chemical element and its total heat capacity.
Another way of stating this, is that the volume-specific heat capacity ( volumetric heat capacity ) of solid elements is roughly a constant. The molar volume of solid elements is very roughly constant, and (even more reliably) so also is the molar heat capacity for most solid substances. These two factors determine the volumetric heat capacity, which as a bulk property may be striking in consistency. For example, the element uranium is a metal that has a density almost 36 times that of the metal lithium, but uranium's specific heat capacity on a volumetric basis (i.e. per given volume of metal) is only 18% larger than lithium's.
However, the average atomic volume in solid elements is not quite constant, so there are deviations from this principle. For instance, arsenic , which is only 14.5% less dense than antimony , has nearly 59% more specific heat capacity on a mass basis. In other words, even though an ingot of arsenic is only about 17% larger than an antimony one of the same mass, it absorbs about 59% more heat for a given temperature rise. The heat capacity ratio of the two substances closely follows the ratio of their molar volumes (the ratio of numbers of atoms in the same volume of each substance); the departure from the correlation to simple volumes, in this case, is due to the lighter arsenic atoms being significantly more closely packed than the antimony atoms, rather than being of similar size. In other words, similar-sized atoms would cause a mole of arsenic to be 63% larger than a mole of antimony, with a correspondingly lower density, allowing its volume to more closely mirror its heat capacity behavior.
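These comparisons can be checked numerically. The sketch below uses approximate handbook-style densities, molar masses, and molar heat capacities; all numeric inputs are assumptions for illustration, not figures from this article:

```python
# Sketch: volumetric and mass-basis heat capacity comparisons.
# Densities (g/cm^3), molar masses (g/mol) and molar heat capacities
# (J/(K*mol)) are approximate handbook values, assumed for illustration.

def volumetric_c(c_molar, M, rho):
    """Heat capacity per cm^3: (J per K per mole of atoms) * (mol/cm^3)."""
    return c_molar * rho / M

c_Li = volumetric_c(24.8, 6.94, 0.534)    # lithium
c_U = volumetric_c(27.7, 238.03, 19.1)    # uranium, slightly above 3R
print(f"U / Li volumetric ratio: {c_U / c_Li:.2f}")  # ~1.16 despite ~36x density

# Mass basis: under Dulong-Petit, specific heat scales as 1/M.
print(f"As / Sb ideal specific-heat ratio: {121.76 / 74.92:.2f}")  # ~1.63
# The measured ratio is closer to 1.59 because neither sits exactly at 3R.
```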
Sometimes small impurity concentrations can greatly affect the specific heat, for example in semiconducting ferromagnetic alloys. [ 32 ]
A general theory of the heat capacity of liquids has not yet been achieved, and is still an active area of research. It was long thought that phonon theory is not able to explain the heat capacity of liquids, since liquids only sustain longitudinal, but not transverse phonons, which in solids are responsible for 2/3 of the heat capacity. However, Brillouin scattering experiments with neutrons and with X-rays , confirming an intuition of Yakov Frenkel , [ 33 ] have shown that transverse phonons do exist in liquids, albeit restricted to frequencies above a threshold called the Frenkel frequency . Since most energy is contained in these high-frequency modes, a simple modification of the Debye model is sufficient to yield a good approximation to experimental heat capacities of simple liquids. [ 34 ]
Because of high crystal binding energies, the effects of vibrational mode freezing are observed in solids more often than in liquids: for example, the heat capacity of liquid water is twice that of ice near the same temperature, and is again close to the 3 R per mole of atoms of the Dulong–Petit theoretical maximum.
Amorphous materials can be considered a type of liquid at temperatures above the glass transition temperature. Below the glass transition temperature amorphous materials are in the solid (glassy) state form. The specific heat has characteristic discontinuities at the glass transition temperature which are caused by the absence in the glassy state of percolating clusters made of broken bonds (configurons) that are present only in the liquid phase. [ 35 ] Above the glass transition temperature percolating clusters formed by broken bonds enable a more floppy structure and hence a larger degree of freedom for atomic motion which results in a higher heat capacity of liquids. Below the glass transition temperature there are no extended clusters of broken bonds and the heat capacity is smaller because the solid-state (glassy) structure of amorphous material is more rigid.
The discontinuities in the heat capacity are typically used to detect the glass transition temperature where a supercooled liquid transforms to a glass.
Hydrogen-containing polar molecules like ethanol , ammonia , and water have powerful, intermolecular hydrogen bonds when in their liquid phase. These bonds provide another place where heat may be stored as potential energy of vibration, even at comparatively low temperatures. Hydrogen bonds account for the fact that liquid water stores nearly the theoretical limit of 3 R per mole of atoms, even at relatively low temperatures (i.e. near the freezing point of water). | https://en.wikipedia.org/wiki/Molar_heat_capacity |
These tables list values of molar ionization energies , measured in kJ⋅mol −1 . This is the energy per mole necessary to remove electrons from gaseous atoms or atomic ions. The first molar ionization energy applies to the neutral atoms. The second, third, etc., molar ionization energy applies to the further removal of an electron from a singly, doubly, etc., charged ion. For ionization energies measured in the unit eV, see Ionization energies of the elements (data page) . All data from rutherfordium onwards is predicted. | https://en.wikipedia.org/wiki/Molar_ionization_energies_of_the_elements |
In polymer chemistry , the molar mass distribution (or molecular weight distribution ) describes the relationship between the number of moles of each polymer species ( N i ) and the molar mass ( M i ) of that species. [ 1 ] In linear polymers, the individual polymer chains rarely have exactly the same degree of polymerization and molar mass, and there is always a distribution around an average value . The molar mass distribution of a polymer may be modified by polymer fractionation .
Different average values can be defined, depending on the statistical method applied. In practice, four averages are used, representing the weighted mean taken with the mole fraction , the weight fraction, and two other functions which can be related to measured quantities:
$$M_{\mathrm{n}} = \frac{\sum M_i N_i}{\sum N_i} \qquad M_{\mathrm{w}} = \frac{\sum M_i^2 N_i}{\sum M_i N_i} \qquad M_{\mathrm{z}} = \frac{\sum M_i^3 N_i}{\sum M_i^2 N_i} \qquad M_{\mathrm{v}} = \left[ \frac{\sum M_i^{1+a} N_i}{\sum M_i N_i} \right]^{\frac{1}{a}}$$
Here, a is the exponent in the Mark–Houwink equation that relates the intrinsic viscosity to molar mass. [ 2 ]
These different definitions have true physical meaning because different techniques in physical polymer chemistry often measure just one of them. For instance, osmometry measures the number average molar mass, and small-angle laser light scattering measures the mass average molar mass. M v is obtained from viscometry and M z by sedimentation in an analytical ultracentrifuge . The quantity a in the expression for the viscosity average molar mass varies from 0.5 to 0.8 and depends on the interaction between solvent and polymer in a dilute solution. In a typical distribution curve, the average values are related to each other as follows: $M_n < M_v < M_w < M_z$. The dispersity (also known as the polydispersity index ) of a sample is defined as M w divided by M n and gives an indication of just how narrow the distribution is. [ 2 ] [ 3 ]
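As a concrete illustration of these definitions, a minimal Python sketch follows; the three-species distribution and the Mark–Houwink exponent a = 0.7 are made-up values chosen for the example:

```python
# Sketch: the four molar-mass averages and the dispersity for a
# hypothetical discrete distribution. The species list and a = 0.7
# (a typical Mark-Houwink exponent) are illustrative assumptions.

species = [  # (molar mass M_i in g/mol, amount N_i in mol)
    (10_000, 3.0),
    (50_000, 2.0),
    (100_000, 1.0),
]

def moment(k):
    """Return the sum of M_i**k * N_i over all species."""
    return sum(M**k * N for M, N in species)

a = 0.7  # Mark-Houwink exponent (assumed; 0.5-0.8 for typical systems)

M_n = moment(1) / moment(0)                  # number average
M_w = moment(2) / moment(1)                  # mass (weight) average
M_z = moment(3) / moment(2)                  # z-average
M_v = (moment(1 + a) / moment(1)) ** (1 / a) # viscosity average

print(f"M_n = {M_n:.0f}, M_v = {M_v:.0f}, M_w = {M_w:.0f}, M_z = {M_z:.0f}")
print(f"dispersity (M_w / M_n) = {M_w / M_n:.2f}")
# The printed averages satisfy M_n < M_v < M_w < M_z, as expected.
```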
The most common technique for measuring molecular mass used in modern times is a variant of high-pressure liquid chromatography (HPLC) known by the interchangeable terms of size exclusion chromatography (SEC) and gel permeation chromatography (GPC). These techniques involve forcing a polymer solution through a matrix of cross-linked polymer particles at a pressure of up to several hundred bar . The limited accessibility of stationary phase pore volume for the polymer molecules results in shorter elution times for high-molecular-mass species. The use of low dispersity standards allows the user to correlate retention time with molecular mass, although the actual correlation is with the hydrodynamic volume . If the relationship between molar mass and the hydrodynamic volume changes (i.e., the polymer is not exactly the same shape as the standard) then the calibration for mass is in error.
The most common detectors used for size exclusion chromatography include online methods similar to the bench methods used above. By far the most common is the differential refractive index detector, which measures the change in refractive index of the solvent. This detector is concentration-sensitive and very molecular-mass-insensitive, so it is ideal for a single-detector GPC system, as it allows the generation of mass versus molecular mass curves. Less common but more accurate and reliable is a molecular-mass-sensitive detector using multi-angle laser light scattering (see static light scattering ). These detectors directly measure the molecular mass of the polymer and are most often used in conjunction with differential refractive index detectors. A further alternative is either low-angle laser light scattering, which uses a single low angle to determine the molar mass , or right-angle laser light scattering in combination with a viscometer, although this latter technique does not give an absolute measure of molar mass but one relative to the structural model used.
The molar mass distribution of a polymer sample depends on factors such as chemical kinetics and work-up procedure. Ideal step-growth polymerization gives a polymer with a dispersity of 2. Ideal living polymerization results in a dispersity of 1. When a polymer is dissolved, an insoluble high-molar-mass fraction may be filtered off, resulting in a large reduction in M w and a small reduction in M n , thus reducing dispersity.
The number average molar mass is a way of determining the molecular mass of a polymer . Polymer molecules, even ones of the same type, come in different sizes (chain lengths, for linear polymers), so the average molecular mass will depend on the method of averaging. The number average molecular mass is the ordinary arithmetic mean of the molecular masses of the individual macromolecules. It is determined by measuring the molecular mass of n polymer molecules, summing the masses, and dividing by n : $$\bar{M}_n = \frac{\sum_i N_i M_i}{\sum_i N_i}$$ The number average molecular mass of a polymer can be determined by gel permeation chromatography , viscometry (via the Mark–Houwink equation ), colligative methods such as vapor pressure osmometry , end-group determination, or proton NMR . [ 4 ]
High number-average molecular mass polymers may be obtained only with a high fractional monomer conversion in the case of step-growth polymerization , as per the Carothers' equation .
The mass average molar mass (often loosely termed weight average molar mass ) is another way of describing the molar mass of a polymer . Some properties are dependent on molecular size, so a larger molecule will have a larger contribution than a smaller molecule. The mass average molar mass is calculated by $$\bar{M}_w = \frac{\sum_i N_i M_i^2}{\sum_i N_i M_i}$$ where N i is the number of molecules of molecular mass M i .
The mass average molecular mass can be determined by static light scattering , small angle neutron scattering , X-ray scattering , and sedimentation velocity .
The ratio of the mass average to the number average is called the dispersity or the polydispersity index. [ 3 ]
The mass-average molecular mass , M w , is also related to the fractional monomer conversion , p , in step-growth polymerization (for the simplest case of linear polymers formed from two monomers in equimolar quantities) as per Carothers' equation : $$\bar{X}_w = \frac{1+p}{1-p}, \qquad \bar{M}_w = \frac{M_o\left(1+p\right)}{1-p},$$ where M o is the molecular mass of the repeating unit.
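Together with the standard Carothers relation for the number average, X̄n = 1/(1 − p), this gives a quick numerical picture; the repeat-unit mass of 100 g/mol below is an assumed value for illustration:

```python
# Sketch: number- and mass-average degrees of polymerization from the
# Carothers equations for ideal linear step-growth polymerization.
# M0 = 100 g/mol is an assumed repeat-unit mass.

M0 = 100.0  # molecular mass of the repeating unit (assumed)

for p in (0.90, 0.99, 0.999):          # fractional monomer conversion
    X_n = 1 / (1 - p)                  # number-average degree of polymerization
    X_w = (1 + p) / (1 - p)            # mass-average degree of polymerization
    M_n = M0 * X_n
    M_w = M0 * X_w
    print(f"p = {p}: M_n = {M_n:.0f}, M_w = {M_w:.0f}, "
          f"dispersity = {M_w / M_n:.3f}")  # tends to 2 as p -> 1
```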
The z-average molar mass is the third moment or third power average molar mass, which is calculated by
$$\bar{M}_z = \frac{\sum M_i^3 N_i}{\sum M_i^2 N_i}$$
The z-average molar mass can be determined with ultracentrifugation. The melt elasticity of a polymer is dependent on M z . [ 5 ] | https://en.wikipedia.org/wiki/Molar_mass_distribution |
Molar refractivity , [ 1 ] [ 2 ] $R_m$ , is a measure of the total polarizability of a mole of a substance.
For a perfect dielectric which is made of one type of molecule, the molar refractivity is proportional to the polarizability of a single molecule of the substance. For real materials, intermolecular interactions (the effect of the induced dipole moment of one molecule on the field felt by nearby molecules) give rise to a density dependence.
The molar refractivity is commonly expressed as a sum of components, where the leading order is the value for a perfect dielectric, followed by the density-dependent corrections (written here per molar volume $V_m$): $$\frac{n^2 - 1}{n^2 + 2}\, V_m = A + \frac{B}{V_m} + \frac{C}{V_m^2} + \cdots$$
The coefficients $A, B, C, \ldots$ are called the refractivity virial coefficients. Some research papers are dedicated to finding the values of the subleading coefficients of different substances. In other contexts, the material can be assumed to be approximately perfect, so that the only coefficient of interest is $A$.
The coefficients depend on the wavelength of the applied field (and on the type and composition of the material), but not on thermodynamic state variables such as temperature or pressure .
The leading order (perfect dielectric) molar refractivity is defined as $$A = \frac{4\pi}{3} N_A \alpha_{\mathrm{m}},$$
where $N_A \approx 6.022 \times 10^{23}$ is the Avogadro constant and $\alpha_{\mathrm{m}}$ is the mean polarizability of a molecule.
Substituting the molar refractivity into the Lorentz–Lorenz formula gives, for gases, $$\frac{n^2 - 1}{n^2 + 2} = \frac{A\, p}{R\, T},$$
where $n$ is the refractive index , $p$ is the pressure of the gas, $R$ is the universal gas constant , and $T$ is the (absolute) temperature; the ideal gas law was used here to convert the particle density (appearing in the Lorentz–Lorenz formula) to pressure and temperature.
For a gas, $n^2 \approx 1$ (so that $n^2 + 2 \approx 3$), and the molar refractivity can be approximated by $$A \approx \frac{R T}{3 p}\,\left(n^2 - 1\right).$$
As mentioned above, despite the relation imposed by the last expression on $A$, $T$, $p$ and $n$, the molar refractivity $A$ is a function of the substance itself and not of its conditions, and therefore does not depend on the three state variables appearing in the right-hand side of the expression. [ a ]
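A brief numerical check of this gas-phase approximation; the refractive index of dry air used below is an assumed handbook-style value, not a figure from this article:

```python
# Hedged sketch: estimating the molar refractivity A of dry air from
# its refractive index at standard conditions. n = 1.000293 (visible
# light, 0 C, 1 atm) is an assumed approximate value.

R = 8.314462618      # J/(K*mol)
T = 273.15           # K
p = 101325.0         # Pa
n = 1.000293         # refractive index of air (assumed)

A = R * T * (n**2 - 1) / (3 * p)           # m^3/mol, valid since n^2 ~ 1
print(f"A(air) ~ {A * 1e6:.2f} cm^3/mol")  # ~4.4 cm^3/mol
```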
In terms of density ρ and molecular weight M , it can be shown that: $$R_m = \frac{n^2 - 1}{n^2 + 2}\, \frac{M}{\rho}.$$ | https://en.wikipedia.org/wiki/Molar_refractivity
In chemistry and related fields, the molar volume , symbol V m [ 1 ] or $\tilde{V}$ , of a substance is the ratio of the volume ( V ) occupied by a substance to the amount of substance ( n ), usually at a given temperature and pressure . It is also equal to the molar mass ( M ) divided by the mass density ( ρ ): $$V_{\mathrm{m}} = \frac{V}{n} = \frac{M}{\rho}$$
The molar volume has the SI unit of cubic metres per mole (m 3 /mol), [ 1 ] although it is more typical to use the units cubic decimetres per mole (dm 3 /mol) for gases , and cubic centimetres per mole (cm 3 /mol) for liquids and solids .
The molar volume of a substance i is defined as its molar mass divided by its density $\rho_i^0$ : $$V_{\mathrm{m},i} = \frac{M_i}{\rho_i^0}$$ For an ideal mixture containing N components, the molar volume of the mixture is the weighted sum of the molar volumes of its individual components. For a real mixture the molar volume cannot be calculated without knowing the density: $$V_{\mathrm{m}} = \frac{\sum_{i=1}^{N} x_i M_i}{\rho_{\mathrm{mixture}}}$$ There are many liquid–liquid mixtures, for instance mixing pure ethanol and pure water , which may experience contraction or expansion upon mixing. This effect is represented by the quantity excess volume of the mixture, an example of excess property .
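A minimal sketch of the real-mixture formula; the equimolar ethanol–water composition and the measured density are illustrative assumptions, not data from this article:

```python
# Hedged sketch of the real-mixture molar volume. Composition and the
# measured density below are assumed, approximate numbers.

def mixture_molar_volume(x, M, rho_mixture):
    """V_m = sum(x_i * M_i) / rho_mixture.
    x: mole fractions, M: molar masses (g/mol), rho: g/cm^3 -> cm^3/mol."""
    return sum(xi * Mi for xi, Mi in zip(x, M)) / rho_mixture

x = [0.5, 0.5]        # mole fractions: ethanol, water (assumed)
M = [46.07, 18.02]    # molar masses, g/mol
rho = 0.86            # measured density of the mixture, g/cm^3 (assumed)

print(f"V_m = {mixture_molar_volume(x, M, rho):.1f} cm^3/mol")  # ~37.3
# The ideal-mixture value (~38.2 cm^3/mol from the pure molar volumes)
# is larger: the excess volume of ethanol-water mixing is negative.
```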
Molar volume is related to specific volume by the product with molar mass . This follows from the above, since the specific volume is the reciprocal of the density of a substance: $$V_{\mathrm{m},i} = \frac{M_i}{\rho_i^0} = M_i v_i$$
For ideal gases , the molar volume is given by the ideal gas equation ; this is a good approximation for many common gases at standard temperature and pressure .
The ideal gas equation can be rearranged to give an expression for the molar volume of an ideal gas: $$V_{\mathrm{m}} = \frac{V}{n} = \frac{RT}{P}$$ Hence, for a given temperature and pressure, the molar volume is the same for all ideal gases and is based on the gas constant : R = 8.314 462 618 153 24 m 3 ⋅Pa⋅K −1 ⋅mol −1 , or about 8.205 736 608 095 96 × 10 −5 m 3 ⋅atm⋅K −1 ⋅mol −1 .
The molar volume of an ideal gas at 100 kPa (1 bar ) is 22.711 dm 3 /mol at 0 °C and 24.790 dm 3 /mol at 25 °C.
The molar volume of an ideal gas at 1 atmosphere of pressure is 22.414 dm 3 /mol at 0 °C and 24.465 dm 3 /mol at 25 °C.
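These four values follow directly from V m = RT/P; a short sketch reproducing them:

```python
# Sketch: standard ideal-gas molar volumes from V_m = R*T/P.

R = 8.314462618  # J/(K*mol)

conditions = [("0 C, 100 kPa", 273.15, 100_000),
              ("25 C, 100 kPa", 298.15, 100_000),
              ("0 C, 1 atm", 273.15, 101_325),
              ("25 C, 1 atm", 298.15, 101_325)]

for label, T, P in conditions:
    Vm = R * T / P                           # m^3/mol
    print(f"{label}: {1000 * Vm:.3f} dm^3/mol")
# -> 22.711, 24.790, 22.414, 24.465 dm^3/mol respectively
```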
For crystalline solids , the molar volume can be measured by X-ray crystallography .
The unit cell volume ( V cell ) may be calculated from the unit cell parameters, whose determination is the first step in an X-ray crystallography experiment (the calculation is performed automatically by the structure determination software). This is related to the molar volume by $$V_{\mathrm{m}} = \frac{N_{\mathrm{A}} V_{\mathrm{cell}}}{Z}$$ where N A is the Avogadro constant and Z is the number of formula units in the unit cell. The result is normally reported as the "crystallographic density".
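A hedged sketch of this relation applied to silicon; the lattice parameter and Z below are approximate textbook-style assumptions, not figures quoted in this article:

```python
# Hedged sketch: molar volume of silicon from its cubic unit cell.
# a (lattice parameter) and Z are assumed approximate values for
# diamond-cubic silicon.

N_A = 6.02214076e23   # mol^-1
a = 5.431e-10         # m, silicon lattice parameter (approximate)
Z = 8                 # atoms per diamond-cubic unit cell

V_cell = a ** 3                        # unit cell volume, m^3
V_m = N_A * V_cell / Z                 # molar volume, m^3/mol
print(f"V_m(Si) ~ {V_m:.4e} m^3/mol")  # ~1.206e-5, cf. the CODATA value below
```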
Ultra-pure silicon is routinely made for the electronics industry , and the measurement of the molar volume of silicon, both by X-ray crystallography and by the ratio of molar mass to mass density, has attracted much attention since the pioneering work at NIST in 1974. [ 2 ] The interest stems from the fact that accurate measurements of the unit cell volume, atomic weight and mass density of a pure crystalline solid provide a direct determination of the Avogadro constant. [ 3 ]
The CODATA recommended value for the molar volume of silicon is 1.205 883 199 (60) × 10 −5 m 3 ⋅mol −1 , with a relative standard uncertainty of 4.9 × 10 −8 . [ 4 ] | https://en.wikipedia.org/wiki/Molar_volume |
Mold health issues refer to the harmful health effects of molds ("moulds" in British English) and their mycotoxins .
Molds are ubiquitous in the biosphere, and mold spores are a common component of household and workplace dust. The vast majority of molds are not hazardous to humans, and reaction to molds can vary between individuals, with relatively minor allergic reactions being the most common. [ 1 ] The United States Centers for Disease Control and Prevention (CDC) reported in its June 2006 report, 'Mold Prevention Strategies and Possible Health Effects in the Aftermath of Hurricanes and Major Floods,' that "excessive exposure to mold-contaminated materials can cause adverse health effects in susceptible persons regardless of the type of mold or the extent of contamination." [ 2 ] When mold spores are present in abnormally high quantities, they can present especially hazardous health risks to humans after prolonged exposure, including allergic reactions or poisoning by mycotoxins , [ 3 ] or causing fungal infection ( mycosis ). [ 4 ]
People who are atopic (sensitive), already have allergies , asthma , or compromised immune systems [ 5 ] and occupy damp or moldy buildings [ 6 ] are at an increased risk of health problems such as inflammatory responses to mold spores, metabolites such as mycotoxins, and other components. [ 7 ] Other problems are respiratory and/or immune system responses including respiratory symptoms, respiratory infections , exacerbation of asthma, and rarely hypersensitivity pneumonitis , allergic alveolitis , chronic rhinosinusitis and allergic fungal sinusitis . A person's reaction to mold depends on their sensitivity and other health conditions, the amount of mold present, length of exposure, and the type of mold or mold products.
The five most common genera of indoor molds are Cladosporium , Penicillium , Aspergillus , Alternaria , and Trichoderma .
Damp environments that allow mold to grow can also allow the proliferation of bacteria and release volatile organic compounds .
Symptoms of mold exposure can include: [ 8 ]
Adverse respiratory health effects are associated with occupancy in buildings with moisture and mold damage. [ 9 ] Infants in homes with mold have a much greater risk of developing asthma and allergic rhinitis . [ 10 ] [ 11 ] Infants may develop respiratory symptoms due to exposure to a specific type of fungal mold, called Penicillium . Signs that an infant may have mold-related respiratory problems include (but are not limited to) a persistent cough and wheeze. Increased exposure increases the probability of developing respiratory symptoms during their first year of life. [ 12 ] As many as 21% of asthma cases may result from exposure to mold. [ 6 ]
Mold exposures have a variety of health effects depending on the person. Some people are more sensitive to mold than others. Exposure to mold can cause several health issues such as throat irritation, nasal stuffiness, eye irritation, cough, and wheezing, as well as skin irritation in some cases. Exposure to mold may also cause heightened sensitivity depending on the time and nature of exposure. People at higher risk for mold allergies are people with chronic lung illnesses and weak immune systems, who often experience more severe reactions when exposed to mold. [ 13 ]
There is sufficient evidence that damp indoor environments are correlated with upper respiratory tract symptoms such as coughing and wheezing in people with asthma. [ 14 ]
Among children and adolescents, the most common health effect post-flooding was lower respiratory tract symptoms, though there was a lack of association with measurements of total fungi. [ 15 ] Another study found that these respiratory symptoms were positively associated with exposure to water-damaged homes; exposure included being inside without participating in clean-up. [ 15 ] Despite lower respiratory effects among all children, there was a significant difference in health outcomes between children with pre-existing conditions and children without. [ 15 ] Children with pre-existing conditions were at greater risk, which can likely be attributed to the greater disruption of care in the face of flooding and natural disaster. [ 15 ] [ 16 ]
Although mold is the primary focus post flooding for residents, the effects of dampness [ 17 ] alone must also be considered. According to the Institute of Medicine, there is a significant association between dampness in the home and wheeze, cough, and upper respiratory symptoms. [ 18 ] A later analysis determined that 30% to 50% of asthma-related health outcomes are associated with not only mold, but also dampness in buildings. [ 18 ]
While there is a proven correlation between mold exposure and the development of upper and lower respiratory syndromes, there are still fewer incidences of negative health effects than one might expect. [ 19 ] Barbeau and colleagues suggested that studies do not show a greater impact from mold exposure for several reasons: 1) the types of health effects are not severe and are therefore not caught; 2) people whose homes have flooded find alternative housing to prevent exposure; 3) self-selection, as the healthier people participated in mold clean-up and were less likely to get sick; 4) exposures were time-limited as a result of remediation efforts; and 5) the lack of access to health care post-flooding may result in fewer illnesses being discovered and reported for their association with mold. [ 19 ] There are also certain notable scientific limitations in studying the exposure effects of dampness and molds on individuals because there are currently no known biomarkers that can prove that a person was exclusively exposed to molds. [ 20 ] Thus, it is currently impossible to prove a causal link between mold exposure and specific symptoms. [ 20 ] [ 21 ]
Health problems associated with high levels of airborne mold spores include allergic reactions , asthma episodes, irritations of the eye, nose and throat, sinus congestion, and other respiratory problems. [ 22 ] Several studies and reviews have suggested that childhood exposure to dampness and mold might contribute to the development of asthma. [ 23 ] [ 24 ] [ 25 ] [ 26 ] For example, residents of homes with mold are at an elevated risk for both respiratory infections and bronchitis. [ 27 ] When mold spores are inhaled by an immunocompromised individual, some mold spores may begin to grow on living tissue, [ 28 ] attaching to cells along the respiratory tract and causing further problems. [ 29 ] [ 30 ] Generally, when this occurs, the illness is an epiphenomenon and not the primary pathology. Also, mold may produce mycotoxins, either before or after exposure to humans, potentially causing toxicity.
A serious health threat from mold exposure for immunocompromised individuals is systemic fungal infection (systemic mycosis ). Immunocompromised individuals exposed to high levels of mold, or individuals with chronic exposure may become infected. [ 31 ] [ 32 ] Sinuses and digestive tract infections are most common; lung and skin infections are also possible. Mycotoxins may or may not be produced by the invading mold.
Dermatophytes are the parasitic fungi that cause skin infections such as athlete's foot and tinea cruris . Most dermatophyte fungi take the form of mold, as opposed to a yeast, with an appearance (when cultured) that is similar to other molds.
Opportunistic infection by molds [ 33 ] such as Talaromyces marneffei and Aspergillus fumigatus is a common cause of illness and death among immunocompromised people, including people with AIDS or asthma . [ 34 ] [ 35 ]
The most common form of hypersensitivity is caused by the direct exposure to inhaled mold spores that can be dead or alive or hyphal fragments which can lead to allergic asthma or allergic rhinitis . [ 36 ] The most common effects are rhinorrhea (runny nose), watery eyes, coughing and asthma attacks. Another form of hypersensitivity is hypersensitivity pneumonitis . Exposure can occur at home, at work or in other settings. [ 36 ] [ 37 ] It is predicted that about 5% of people have some airway symptoms due to allergic reactions to molds in their lifetimes. [ 38 ]
Hypersensitivity may also be a reaction toward an established fungal infection in allergic bronchopulmonary aspergillosis .
Molds excrete toxic compounds called mycotoxins , secondary metabolites produced by fungi under certain environmental conditions. These environmental conditions affect the production of mycotoxins at the transcription level. Temperature, water activity, and pH strongly influence mycotoxin biosynthesis by increasing the level of transcription within the fungal spore. It has also been found that low levels of fungicides can boost mycotoxin synthesis. [ 39 ] [ 40 ] Mycotoxins can be harmful or lethal to humans and animals when exposure is high enough. [ 41 ] [ 42 ]
Extreme exposure to very high levels of mycotoxins can lead to neurological problems and, in some cases, death; fortunately, such exposures rarely, if ever, occur in normal exposure scenarios, even in residences with serious mold problems. [ 43 ] Prolonged exposure, such as daily workplace exposure, can be particularly harmful. [ 44 ]
It is thought that all molds may produce mycotoxins, [ 45 ] and thus all molds may be potentially toxic if large enough quantities are ingested, or the human becomes exposed to extreme quantities of mold. Mycotoxins are not produced all the time, but only under specific growing conditions. Mycotoxins are harmful or lethal to humans and animals. [ 46 ] [ 47 ]
Mycotoxins can be found on the mold spore and mold fragments, and therefore they can also be found on the substrate upon which the mold grows. Routes of entry for these insults can include ingestion, dermal exposure, and inhalation.
Aflatoxin is an example of a mycotoxin. It is a cancer-causing poison produced by certain fungi in or on foods and feeds, especially in field corn and peanuts. [ 48 ]
The primary sources of mold exposure are from the indoor air in buildings with substantial mold growth and the ingestion of food with mold growths.
While mold and related microbial agents can be found both inside and outside, specific factors can lead to significantly higher levels of these microbes, creating a potential health hazard. Several notable factors are water damage in buildings, the use of building materials which provide a suitable substrate and source of food to amplify mold growth, relative humidity, and energy-efficient building designs, which can prevent proper circulation of outside air and create a unique ecology in the built environment. [ 49 ] [ 50 ] [ 51 ] [ 52 ] A common issue with mold hazards in the household can be the placement of furniture, resulting in a lack of ventilation of the nearby wall. The simplest method of avoiding mold in a home so affected is to move the furniture in question.
More than half of adult workers in moldy/humid buildings suffer from nasal or sinus symptoms due to mold exposure. [ 11 ]
Prevention of mold exposure and its ensuing health issues begins with the prevention of mold growth in the first place by avoiding a mold-supporting environment. Extensive flooding and water damage can support extensive mold growth. Following hurricanes, homes with greater flood damage, especially those with more than 3 feet (0.91 m) of indoor flooding, demonstrated far higher levels of mold growth compared with homes with little or no flooding. [ 53 ]
It is useful to perform an assessment of the location and extent of the mold hazard in a structure. Various practices of remediation can be followed to mitigate mold issues in buildings, the most important of which is to reduce moisture levels. [ 54 ] Removal of affected materials after the source of moisture has been reduced and/or eliminated may be necessary, as some materials cannot be remediated. [ 55 ] Thus, the concept of mold growth, assessment, and remediation is essential in preventing health issues arising due to the presence of dampness and mold.
Molds may excrete liquids or low-volatility gases, but the concentrations are so low that frequently they cannot be detected even with sensitive analytical sampling techniques. Sometimes, these by-products are detectable by odor, in which case they are referred to as "ergonomic odors", meaning the odors are noticeable but do not indicate toxicologically significant exposures.
Molds that are often found on meat and poultry include members of the genera Alternaria , Aspergillus , Botrytis , Cladosporium , Fusarium , Geotrichum , Mortierella , Mucor , Neurospora , Paecilomyces , Penicillium , and Rhizopus . [ 56 ] Grain crops in particular incur considerable losses both in field and storage due to pathogens, post-harvest spoilage, and insect damage. A number of common microfungi are important agents of post-harvest spoilage, notably members of the genera Aspergillus , Fusarium , and Penicillium . [ 56 ] A number of these produce mycotoxins (soluble, non-volatile toxins produced by a range of microfungi that demonstrate specific and potent toxic properties on human and animal cells [ 57 ] ) that can render foods unfit for consumption. When ingested, inhaled, or absorbed through skin, mycotoxins may cause or contribute to a range of effects from reduced appetite and general malaise to acute illness or death in rare cases. [ 58 ] [ 59 ] [ 60 ] Mycotoxins may also contribute to cancer. Dietary exposure to the mycotoxin aflatoxin B1, commonly produced by growth of the fungus Aspergillus flavus on improperly stored ground nuts in many areas of the developing world, is known to independently (and synergistically with Hepatitis B virus) induce liver cancer. [ 61 ] Mycotoxin-contaminated grain and other food products have a significant impact on human and animal health globally. According to the World Health Organization, roughly 25% of the world's food may be contaminated by mycotoxins. [ 58 ]
Mold exposure from food is generally prevented by consuming only food that has no mold growths on it. [ 48 ] Mold growth on food can be prevented in the first place through the same cycle of mold growth control, assessment, and remediation that prevents air exposure. It is especially useful to clean the inside of the refrigerator and to ensure dishcloths, towels, sponges, and mops are clean. [ 48 ]
Ruminants are considered to have increased resistance to some mycotoxins, presumably due to the superior mycotoxin-degrading capabilities of their gut microbiota. [ 58 ] The passage of mycotoxins through the food chain may also have important consequences on human health. [ 62 ] For example, in China in December 2011, high levels of carcinogen aflatoxin M1 in Mengniu brand milk were found to be associated with the consumption of mold-contaminated feed by dairy cattle. [ 63 ]
Bacteria, fungi, allergens, and particle-bound semi-volatile organic compounds (SVOCs) can all be found in bedding and pillows with possible consequences for human health given the high amount of exposure each day. [ 64 ] Over 47 species of fungi have been identified in pillows, although the typical range of species found in a single pillow varied between four and sixteen. [ 65 ] Compared to feather pillows, synthetic pillows typically display a slightly greater variety of fungal species and significantly higher levels of β‐(1,3)‐glucan, which can cause inflammatory responses. [ 66 ] [ 67 ] The authors concluded that these and related results suggest feather bedding might be a more appropriate choice for asthmatics than synthetics. Some newer bedding products incorporate silver nanoparticles due to their antibacterial, [ 68 ] [ 69 ] [ 70 ] antifungal, [ 71 ] and antiviral [ 72 ] properties; however, the long-term safety of this additional exposure to these nanoparticles is relatively unknown, and a conservative approach to the use of these products is recommended. [ 73 ]
Flooding in houses creates a unique opportunity for mold growth, which may contribute to adverse health effects in people exposed to the mold, especially children and adolescents. In a study on the health effects of mold exposure after hurricanes Katrina and Rita , the predominant types of mold were Aspergillus , Penicillium , and Cladosporium , with indoor spore counts ranging from 6,142 to 735,123 spores m −3 . [ 19 ] Molds isolated following flooding were different from molds previously reported for non-water-damaged homes in the area. [ 19 ] Further research found that homes with greater than three feet of indoor flooding demonstrated significantly higher levels of mold than those with little or no flooding. [ 19 ]
Recommended strategies to prevent mold exposure include avoiding mold contamination; the use of personal protective equipment (PPE), including skin, eye, and respiratory protection; and environmental controls such as ventilation and suppression of dust. [ 74 ] When mold cannot be prevented, the CDC recommends a clean-up protocol, beginning with emergency action to stop water intrusion. [ 74 ] Second, they recommend determining the extent of water damage and mold contamination. Third, they recommend planning remediation activities such as establishing containment and protection for workers and occupants; eliminating water or moisture sources if possible; decontaminating or removing damaged materials and drying any wet materials; evaluating whether the space has been successfully remediated; and reassembling the space to control sources of moisture. [ 74 ]
In 1698, the physician Sir John Floyer published the first edition of A Treatise of the Asthma , the first English textbook on the malady. In it, he describes how dampness and mold could trigger an asthmatic attack, specifically, "damp houses and fenny [boggy] countries". He also writes of an asthmatic "who fell into a violent fit by going into a Wine-Cellar", presumably due to the "fumes" in the air. [ 75 ] [ 76 ]
In the 1930s, mold was identified as the cause behind the mysterious deaths of farm animals in Russia and other countries. Stachybotrys chartarum was found growing on the wet grain used for animal feed. Illness and death also occurred in humans when starving peasants ate large quantities of rotten food grains and cereals heavily overgrown with the Stachybotrys mold. [ 77 ]
In the 1970s, building construction techniques changed in response to changing economic realities, including the energy crisis . As a result, homes, and buildings became more airtight. Also, cheaper materials such as drywall came into common use. The newer building materials reduced the drying potential of the structures, making moisture problems more prevalent. This combination of increased moisture and suitable substrates contributed to increased mold growth inside buildings. [ 78 ]
Today, the US Food and Drug Administration and the agriculture industry closely monitor mold and mycotoxin levels in grains and foodstuffs to keep the contamination of animal feed and human food supplies below specific levels. In 2005, Diamond Pet Foods, a US pet food manufacturer, experienced a significant rise in the number of corn shipments containing elevated levels of aflatoxin . This mold toxin eventually made it into the pet food supply, and dozens of dogs and cats died before the company was forced to recall affected products. [ 79 ] [ 80 ]
In November 2022, a UK coroner recorded that a two-year-old child, Awaab Ishak from Rochdale , England, died in 2020 of "acute airway oedema with severe granulomatous tracheobronchitis due to environmental mould exposure" in his home. [ 81 ] [ 82 ] While not specified in the coroner's report or outputs from official proceedings, the death was widely reported as due to specifically 'toxic' or 'toxic black' mold. [ 83 ] The finding led to a 2023 change in UK law, known as Awaab's Law , which will require social housing providers to remedy reported damp and mould within certain time limits. [ 84 ] | https://en.wikipedia.org/wiki/Mold_health_issues |
Moldable wood is a strong and flexible cellulose-based material. Moldable wood can be folded into different shapes without breaking or snapping. The patented synthesis is based on the deconstruction and softening of the wood's lignin, then re-swelling the material in a rapid "water-shock" process that produces a wrinkled cell wall structure. The result of this unique structure is a flexible wood material that can be molded or folded, with the final shape locked in place by simple air-drying. This discovery broadens the potential applications of wood as a sustainable structural material. This research, which was a collaborative effort between the University of Maryland , Yale University , Ohio State University , USDA Forest Service , University of Bristol , University of North Texas , ETH Zurich , and the Center for Materials Innovation, [ 1 ] was published on the cover of Science in October 2021. [ 2 ]
| https://en.wikipedia.org/wiki/Moldable_wood
Molden is a general molecular and electronic structure processing program.
The Molden program has been tested on a range of platforms, namely Linux, Windows NT, Windows 95, Windows 2000, Windows XP, Mac OS X, Silicon Graphics IRIX, and Sun SunOS and Solaris.
Ambfor, the main force field module of Molden, is an external program that can be initialized from Molden. Ambfor supports the Amber protein force field and GAFF (the General Amber Force Field). Ambfor is used automatically when a protein is studied with Molden. The GAFF force field is used only for small molecules. Both Amber and GAFF are based on atomic charges. The differences are largely in computational cost, with GAFF being very expensive.
Molden can read several file formats with crystal information.
Molden: a pre- and post-processing program for molecular and electronic structures. [ 1 ]
| https://en.wikipedia.org/wiki/Molden
A mole is a massive structure, usually of stone , used as a pier , breakwater , or a causeway separating two bodies of water. A mole may have a wooden structure built on top of it that resembles a wooden pier. The defining feature of a mole, however, is that water cannot freely flow underneath it, unlike a true pier. The oldest known mole is at Wadi al-Jarf , an ancient Egyptian harbor complex on the Red Sea, constructed c. 2500 BCE .
The word comes from Middle French mole , ultimately from Latin mōlēs , meaning a large mass, especially of rock; it has the same root as molecule and mole , the chemical unit of measurement. [ 1 ]
Notable in antiquity was the Heptastadion , a giant mole [ 2 ] built in the 3rd century BC in the city of Alexandria , Egypt [ 3 ] to join the city to Pharos Island where the Pharos lighthouse stood. [ 4 ] The causeway formed a barrier separating Alexandria's oceanfront into two distinct harbours, [ 5 ] an arrangement which had the advantage of protecting the harbours from the force of the strong westerly coastal current. [ 6 ] The Heptastadion is also believed to have served as an aqueduct while Pharos was inhabited, [ 7 ] and geophysical research indicates that it was part of the road network of the ancient city. [ 8 ] Silting over the years [ 2 ] resulted in the former dyke disappearing under several metres of accumulated silt and soil [ 9 ] upon which the Ottomans built a town from 1517 onwards. [ 8 ] Part of the modern city of Alexandria is now built on the site.
Stone quaysides are sometimes called moles. A well-known example is the Molo in Venice . It is the site of the Doge's Palace and two pillars which form a gateway to the sea. [ 10 ] It has been depicted numerous times by artists such as Canaletto .
The Kingdom of England acquired the north African city of Tangier as English Tangier in 1661 as part of King Charles II's marriage settlement with the Portuguese princess Catherine of Braganza , who became Queen of England and Scotland.
A mole (a large breakwater) was then designed to improve the harbour and was planned to be 1,436 feet (438 m) long. The cost was about £340,000, and the improved harbour was to be 600 yd (550 m) long, 30 ft (9 m) deep at low tide, and capable of keeping out the roughest of seas. [ 11 ] Work began on the mole in August 1663 [ 12 ] and continued for some years under a succession of Governors.
With an improved harbour the town could have played the same role that Gibraltar later played in British naval strategy. [ 13 ]
However, Parliament expressed concern about the cost of maintaining the Tangier garrison, and by 1680 King Charles II had threatened to give up Tangier unless the supplies were voted for its sea defences. A crippling blockade by the Jaysh al-Rifi finally forced the English to withdraw from Tangier in 1683. The King gave secret orders to abandon the city, level the fortifications, destroy the harbour, and evacuate the troops. Samuel Pepys was present at the evacuation and wrote an account of it. [ 14 ]
In the San Francisco Bay Area in California , there were several moles, combined causeways and wooden piers or trestles extending from the eastern shore and utilized by various railroads, such as the Key System , Southern Pacific Railroad (two), and Western Pacific Railroad : the Alameda Mole , the Oakland Mole , and the Western Pacific Mole. By extending the tracks the railroads could get beyond the shallow mud flats and reach the deeper waters of the Bay that could be navigated by the Bay Ferries . A train fell off the Alameda Mole through an open drawbridge in 1890 killing several people. [ 15 ] None of the four Bay Area moles survive today, although the causeway portions of each were incorporated into the filling in of large tracts of marshland for harbor and industrial development.
A large mole was completed in 1947 at the San Francisco Naval Shipyard in the Bayview-Hunters Point neighborhood of San Francisco to accommodate the large Hunters Point gantry crane. The mole required 3,000,000 cubic yards (2,300,000 m 3 ) of fill. [ 16 ]
In Swakopmund , on the coast of Namibia, a mole was built in 1899. Designed by the engineer F. W. Oftloff, it was intended to develop the city's harbour. However, the Benguela Current continually deposited sand onto the mole until it became a promontory. The adjacent area has since become a popular leisure beach, known as the Mole Beach. [ 17 ]
The two concrete moles protecting the outer harbour at Dunkirk played a significant part in the evacuation of British and French troops during World War II in May to June 1940. The harbour had been made unusable by German bombing and it was clear that troops were not going to be taken directly off the beaches fast enough. Naval captain William Tennant had been placed ashore to take charge of the navy shore parties and organise the evacuation. Tennant had what proved to be the highly successful idea of using the East Mole to take off troops. The moles had never been designed to dock ships, but despite this, the majority of troops rescued from Dunkirk were taken off in this way. [ 18 ] James Campbell Clouston , pier master on the east mole, organised and regulated the flow of men on that site. [ 19 ]
The Churchill Barriers are a series of four causeways in the Orkney Islands with a total length of 1.5 miles (2.4 km). They link the Orkney Mainland in the north to the island of South Ronaldsay via Burray and the two smaller islands of Lamb Holm and Glimps Holm .
The barriers were built in the 1940s as naval defences to protect the anchorage at Scapa Flow . They were commissioned following the sinking of HMS Royal Oak in 1939 by German U-boat U-47 which had penetrated the existing defences of sunken blockships and anti-submarine nets . The barriers now serve as road links, carrying the A961 road from Kirkwall to Burwick . | https://en.wikipedia.org/wiki/Mole_(architecture) |
The mole (symbol mol ) is a unit of measurement , the base unit in the International System of Units (SI) for amount of substance , an SI base quantity proportional to the number of elementary entities of a substance. One mole is an aggregate of exactly 6.022 140 76 × 10 23 elementary entities (approximately 602 sextillion or 602 billion times a trillion), which can be atoms , molecules , ions , ion pairs, or other particles . The number of particles in a mole is the Avogadro number (symbol N 0 ) and the numerical value of the Avogadro constant (symbol N A ) expressed in mol −1 . [ 1 ] The SI value of the mole was chosen on the basis of the historical definition of the mole as the amount of substance that corresponds to the number of atoms in 12 grams of 12 C , [ 1 ] which made the mass of a mole of a compound expressed in grams, numerically equal to the average molecular mass or formula mass of the compound expressed in daltons . With the 2019 revision of the SI , the numerical equivalence is now approximate.
Conceptually, the mole is no different from the concept of a dozen or any other convenient grouping used to discuss collections of identical objects. Because atoms are so small, the number of entities in the grouping must be huge to be useful for laboratory-scale work.
The mole is widely used in chemistry as a convenient way to express amounts of reactants and amounts of products of chemical reactions . For example, the chemical equation 2 H 2 + O 2 → 2 H 2 O can be interpreted to mean that for each 2 mol molecular hydrogen (H 2 ) and 1 mol molecular oxygen (O 2 ) that react, 2 mol of water (H 2 O) form. The concentration of a solution is commonly expressed by its molar concentration , defined as the amount of dissolved substance per unit volume of solution, for which the unit typically used is mole per litre (mol/L).
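A small sketch of the bookkeeping this enables; the amount of hydrogen and the solution volume below are assumed example numbers:

```python
# Sketch of the mole bookkeeping for 2 H2 + O2 -> 2 H2O. The 4.0 mol of
# hydrogen and the 0.500 L solution volume are assumed example numbers,
# not data from this article.

n_H2 = 4.0            # mol of molecular hydrogen available
n_O2 = n_H2 / 2       # stoichiometry: 2 mol H2 react with 1 mol O2
n_H2O = n_H2          # 2 mol H2 yield 2 mol H2O

print(f"{n_O2:.1f} mol O2 consumed, {n_H2O:.1f} mol H2O formed")

# Molar concentration if the product were dissolved in 0.500 L:
volume_L = 0.500
print(f"c = {n_H2O / volume_L:.1f} mol/L")
```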
The number of entities (symbol N ) in a one-mole sample equals the Avogadro number (symbol N 0 ), a dimensionless quantity . [ 1 ] Historically, N 0 approximates the number of nucleons ( protons or neutrons ) in one gram of ordinary matter .
The Avogadro constant (symbol N A = N 0 /mol ) has a numerical value given by the Avogadro number, with the unit reciprocal mole (mol −1 ). [ 2 ] The ratio n = N / N A is a measure of the amount of substance (with the unit mole). [ 2 ] [ 3 ] [ 4 ]
Depending on the nature of the substance, an elementary entity may be an atom, a molecule, an ion, an ion pair, or a subatomic particle such as a proton . For example, 10 moles of water (a chemical compound ) and 10 moles of mercury (a chemical element ) contain equal numbers of particles of each substance, with one atom of mercury for each molecule of water, despite the two quantities having different volumes and different masses.
The mole corresponds to a given count of entities. [ 5 ] Usually, the entities counted are chemically identical and individually distinct. For example, a solution may contain a certain number of dissolved molecules that are more or less independent of each other. However, the constituent entities in a solid are fixed and bound in a lattice arrangement, yet they may be separable without losing their chemical identity. Thus, the solid is composed of a certain number of moles of such entities. In yet other cases, such as diamond , where the entire crystal is essentially a single molecule, the mole is still used to express the number of atoms bound together, rather than a count of molecules. Thus, common chemical conventions apply to the definition of the constituent entities of a substance; in other cases, exact definitions may be specified. The mass of a substance is equal to its relative atomic (or molecular) mass multiplied by the molar mass constant , which is almost exactly 1 g/mol.
Like chemists, chemical engineers use the unit mole extensively, but different unit multiples may be more suitable for industrial use. For example, the SI unit for volume is the cubic metre, a much larger unit than the commonly used litre in the chemical laboratory. When amount of substance is also expressed in kmol (1000 mol) in industrial-scale processes, the numerical value of molarity remains the same, as $\frac{\text{kmol}}{\text{m}^3} = \frac{1000\text{ mol}}{1000\text{ L}} = \frac{\text{mol}}{\text{L}}$ . Chemical engineers once used the kilogram-mole (notation kg-mol ), which is defined as the number of entities in 12 kg of 12 C, and often referred to the mole as the gram-mole (notation g-mol ), then defined as the number of entities in 12 g of 12 C, when dealing with laboratory data. [ 6 ]
Late 20th-century chemical engineering practice came to use the kilomole (kmol), which was numerically identical to the kilogram-mole (until the 2019 revision of the SI , which redefined the mole by fixing the value of the Avogadro constant, making it very nearly equivalent to but no longer exactly equal to the gram-mole), but whose name and symbol adopt the SI convention for standard multiples of metric units – thus, kmol means 1000 mol. This is equivalent to the use of kg instead of g. The use of kmol is not only for "magnitude convenience" but also makes the equations used for modelling chemical engineering systems coherent . For example, the conversion of a flowrate of kg/s to kmol/s only requires dividing by the molar mass in g/mol (as $\frac{\text{kg}}{\text{kmol}} = \frac{1000\text{ g}}{1000\text{ mol}} = \frac{\text{g}}{\text{mol}}$ ) without multiplying by 1000, unless the basic SI unit of mol/s were to be used, in which case the molar mass would have to be converted to kg/mol.
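A one-line conversion in practice; the sketch below, with an assumed 2.0 kg/s water stream, shows the unit bookkeeping:

```python
# Sketch: a mass flowrate in kg/s divides by the molar mass in g/mol
# (numerically equal to kg/kmol) to give kmol/s. The 2.0 kg/s water
# stream is an assumed example.

M_water = 18.015   # g/mol, i.e. kg/kmol
flow_kg_s = 2.0    # kg/s (assumed)

flow_kmol_s = flow_kg_s / M_water         # kg/s / (kg/kmol) = kmol/s
flow_mol_s = 1000 * flow_kg_s / M_water   # the basic-SI route needs the 1000
print(f"{flow_kmol_s:.4f} kmol/s = {flow_mol_s:.1f} mol/s")
```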
For convenience in avoiding conversions in the imperial (or US customary units ), some engineers adopted the pound-mole (notation lb-mol or lbmol ), which is defined as the number of entities in 12 lb of 12 C. One lb-mol is equal to 453.592 37 g‑mol , [ 6 ] which is the same numerical value as the number of grams in an international avoirdupois pound .
Greenhouse and growth chamber lighting for plants is sometimes expressed in micromoles per square metre per second, where 1 mol photons ≈ 6.02 × 10 23 photons. [ 7 ] The obsolete unit einstein is variously defined as the energy in one mole of photons and also as simply one mole of photons.
The only SI derived unit with a special name derived from the mole is the katal , defined as one mole per second of catalytic activity . Like other SI units, the mole can also be modified by adding a metric prefix that multiplies it by a power of 10 :
One femtomole is exactly 602,214,076 molecules; attomole and smaller quantities cannot be exactly realized. The yoctomole, equal to around 0.6 of an individual molecule, did make appearances in scientific journals in the year the yocto- prefix was officially implemented. [ 8 ]
The history of the mole is intertwined with that of units of molecular mass , and the Avogadro constant .
The first table of standard atomic weight was published by John Dalton (1766–1844) in 1805, based on a system in which the relative atomic mass of hydrogen was defined as 1. These relative atomic masses were based on the stoichiometric proportions of chemical reaction and compounds, a fact that greatly aided their acceptance: It was not necessary for a chemist to subscribe to atomic theory (an unproven hypothesis at the time) to make practical use of the tables. This would lead to some confusion between atomic masses (promoted by proponents of atomic theory) and equivalent weights (promoted by its opponents and which sometimes differed from relative atomic masses by an integer factor), which would last throughout much of the nineteenth century.
Jöns Jacob Berzelius (1779–1848) was instrumental in the determination of relative atomic masses to ever-increasing accuracy. He was also the first chemist to use oxygen as the standard to which other masses were referred. Oxygen is a useful standard, as, unlike hydrogen, it forms compounds with most other elements, especially metals . However, he chose to fix the atomic mass of oxygen as 100, which did not catch on.
Charles Frédéric Gerhardt (1816–56), Henri Victor Regnault (1810–78) and Stanislao Cannizzaro (1826–1910) expanded on Berzelius' works, resolving many of the problems of unknown stoichiometry of compounds, and the use of atomic masses attracted a large consensus by the time of the Karlsruhe Congress (1860). The convention had reverted to defining the atomic mass of hydrogen as 1, although at the level of precision of measurements at that time – relative uncertainties of around 1% – this was numerically equivalent to the later standard of oxygen = 16. However, the chemical convenience of having oxygen as the primary atomic mass standard became ever more evident with advances in analytical chemistry and the need for ever more accurate atomic mass determinations.
The name mole is an 1897 translation of the German unit Mol , coined by the chemist Wilhelm Ostwald in 1894 from the German word Molekül ( molecule ). [ 9 ] [ 10 ] [ 11 ] The related concept of equivalent mass had been in use at least a century earlier. [ 12 ]
In chemistry, it has been known since Proust's law of definite proportions (1794) that knowledge of the mass of each of the components in a chemical system is not sufficient to define the system. Amount of substance can be described as mass divided by Proust's "definite proportions", and contains information that is missing from the measurement of mass alone. As demonstrated by Dalton's law of partial pressures (1803), a measurement of mass is not even necessary to measure the amount of substance (although in practice it is usual). There are many physical relationships between amount of substance and other physical quantities, the most notable one being the ideal gas law (where the relationship was first demonstrated in 1857). The term "mole" was first used in a textbook describing these colligative properties . [ 13 ]
Developments in mass spectrometry led to the adoption of oxygen-16 as the standard substance, in lieu of natural oxygen. [ 14 ]
The oxygen-16 definition was replaced with one based on carbon-12 during the 1960s. The International Bureau of Weights and Measures defined the mole as "the amount of substance of a system which contains as many elementary entities as there are atoms in 0.012 kilograms of carbon-12." Thus, by that definition, one mole of pure 12 C had a mass of exactly 12 g . [ 15 ] [ 5 ] The four different definitions were equivalent to within 1%.
Because a dalton , a unit commonly used to measure atomic mass , is exactly 1/12 of the mass of a carbon-12 atom, this definition of the mole entailed that the mass of one mole of a compound or element in grams was numerically equal to the average mass of one molecule or atom of the substance in daltons, and that the number of daltons in a gram was equal to the number of elementary entities in a mole. Because the mass of a nucleon (i.e. a proton or neutron ) is approximately 1 dalton and the nucleons in an atom's nucleus make up the overwhelming majority of its mass, this definition also entailed that the mass of one mole of a substance was roughly equivalent to the number of nucleons in one atom or molecule of that substance.
Since the definition of the gram was not mathematically tied to that of the dalton, the number of molecules per mole $N_A$ (the Avogadro constant) had to be determined experimentally. The experimental value adopted by CODATA in 2010 is $N_A = 6.022\,141\,29(27)\times 10^{23}\ \text{mol}^{-1}$. [ 16 ] In 2011 the measurement was refined to $6.022\,140\,78(18)\times 10^{23}\ \text{mol}^{-1}$. [ 17 ]
The mole was made the seventh SI base unit in 1971 by the 14th CGPM. [ 18 ]
Before the 2019 revision of the SI , the mole was defined as the amount of substance of a system that contains as many elementary entities as there are atoms in 12 grams of carbon-12 (the most common isotope of carbon ). [ 19 ] The term gram-molecule was formerly used to mean one mole of molecules, and gram-atom for one mole of atoms. [ 15 ] For example, 1 mole of MgBr 2 is 1 gram-molecule of MgBr 2 but 3 gram-atoms of MgBr 2 . [ 20 ] [ 21 ]
In 2011, the 24th meeting of the General Conference on Weights and Measures (CGPM) agreed to a plan for a possible revision of the SI base unit definitions at an undetermined date.
On 16 November 2018, after a meeting of scientists from more than 60 countries at the CGPM in Versailles, France, all SI base units were defined in terms of physical constants. This meant that each SI unit, including the mole, would no longer be defined in terms of any physical object but rather in terms of physical constants that are, by their nature, exact. [ 3 ]
The changes officially came into effect on 20 May 2019. Since then, "one mole" of a substance has been defined as containing "exactly $6.022\,140\,76\times 10^{23}$ elementary entities" of that substance. [ 22 ] [ 23 ]
Since its adoption into the International System of Units in 1971, the concept of the mole as a unit on a par with the metre or the second has attracted numerous criticisms.
October 23, denoted 10/23 in the US, is recognized by some as Mole Day . [ 29 ] It is an informal holiday in honor of the unit among chemists. The date is derived from the Avogadro number, which is approximately $6.022\times 10^{23}$. It starts at 6:02 a.m. and ends at 6:02 p.m. Alternatively, some chemists celebrate June 2 (06/02), June 22 (6/22), or 6 February (06.02), a reference to the 6.02 or 6.022 part of the constant. [ 30 ] [ 31 ] [ 32 ] | https://en.wikipedia.org/wiki/Mole_(unit)
In chemistry , a mole map (also called a mole road map or stoichiometric map ) is a graphical representation of the relationships between the mole , molar mass , number of particles (atoms, molecules, ions), volume for gases at standard temperature and pressure (STP), and coefficients from balanced chemical equations. [ 1 ] Mole maps are widely used in teaching basic principles of stoichiometry and unit conversion in undergraduate -level and high school chemistry courses. [ 2 ] [ 3 ]
A mole map typically illustrates the core relationships between these quantities: mass, amount of substance, number of particles, and gas volume at STP.
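As a sketch of the conversions such a map encodes (the function names are mine, and the 22.4 L/mol molar volume is the classic teaching value for an ideal gas at STP):

```python
AVOGADRO = 6.02214076e23   # particles per mole
MOLAR_VOLUME_STP_L = 22.4  # litres per mole of ideal gas at STP (teaching value)

def grams_to_moles(grams, molar_mass_g_per_mol):   # mass -> amount
    return grams / molar_mass_g_per_mol

def moles_to_particles(moles):                     # amount -> number of entities
    return moles * AVOGADRO

def moles_to_gas_volume_stp(moles):                # amount -> gas volume at STP
    return moles * MOLAR_VOLUME_STP_L

# Example: 4.4 g of CO2 (molar mass ~44.01 g/mol)
n = grams_to_moles(4.4, 44.01)
print(n, moles_to_particles(n), moles_to_gas_volume_stp(n))
```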
The map often features arrows or pathways guiding the user between these units based on known information and desired quantities. [ 4 ] | https://en.wikipedia.org/wiki/Mole_map_(chemistry) |
Molecubes are a collection of modular robots created by Hod Lipson and Victor Zykov from Cornell University . A molecube is made of two rotatable halves: one contains the microprocessor that provides the intelligence behind the unit, and the other a motor for rotating the joint. A group of the cubes can be connected into a variety of shapes.
A robot constructed entirely of molecubes would be able to repair itself using extra cubes, and to create a copy of itself using the same number of cubes. [ 1 ] Physical self-reproduction of both a three- and a four-module robot was demonstrated. [ 2 ] [ 3 ] Subsequent open-source development, with support from Microsoft Research and Festo [ 4 ] reduced size and weight of the molecubes. [ 5 ] Additional molecube types were produced including: hinges, grippers, batteries, wheels, cameras and more. [ 6 ] | https://en.wikipedia.org/wiki/Molecubes |
A molecular-weight size marker , also referred to as a protein ladder , DNA ladder , or RNA ladder , is a set of standards that are used to identify the approximate size of a molecule run on a gel during electrophoresis , using the principle that molecular weight is inversely proportional to migration rate through a gel matrix. Therefore, when used in gel electrophoresis , markers effectively provide a logarithmic scale by which to estimate the size of the other fragments (providing the fragment sizes of the marker are known).
Protein, DNA, and RNA markers with pre-determined fragment sizes and concentrations are commercially available. These can be run in either agarose or polyacrylamide gels . The markers are loaded in lanes adjacent to sample lanes before the commencement of the run.
Although the concept of molecular-weight markers has been retained, development techniques have varied over the years. Newly developed molecular-weight markers are distributed in kits specific to the marker's type.
An early problem in the development of markers was achieving high resolution throughout the entire length of the marker. [ 1 ] Depending on the running conditions of gel electrophoresis, fragments may have been compressed, disrupting clarity. To address this issue, a kit for Southern Blot analysis was developed in 1990, providing the first marker to combine target DNA and probe DNA. This technique took advantage of logarithmic spacing, and could be used to identify target bands ranging over a length of 20,000 nucleotides . [ 2 ]
There are two common methods in which to construct a DNA molecular-weight size marker. [ 3 ] One such method employs the technique of partial ligation . [ 3 ] DNA ligation is the process by which linear DNA pieces are connected to each other via covalent bonds ; more specifically, these bonds are phosphodiester bonds . [ 4 ] Here, a 100bp duplex DNA piece is partially ligated. The consequence of this is that dimers of 200bp, trimers of 300bp, tetramers of 400bp, pentamers of 500bp, etc. will form. Additionally, a portion of the 100bp dsDNA will remain. As a result, a DNA "ladder" composed of DNA pieces of known molecular mass is created on the gel. [ 3 ]
The second method employs the use of restriction enzymes and a recognized DNA sequence. [ 3 ] The DNA is digested by a particular restriction enzyme, resulting in DNA pieces of varying molecular masses. One of the advantages of this method is that more marker can readily be created simply by digesting more of the known DNA. [ 3 ] On the other hand, the sizes of the DNA pieces are determined by the sites where the restriction enzyme cuts, which makes it more difficult to control the size of the fragments in the marker. [ 5 ]
More recently, another method for constructing DNA molecular-weight size markers is being employed by laboratories. This strategy involves the use of the polymerase chain reaction (PCR) . [ 5 ] This is achieved in one of two ways: 1) a single DNA target is amplified simultaneously with several primer sets, or 2) different DNA targets are amplified independently via particular primers. [ 5 ]
As with experimental samples, the conditions of the gel can affect the molecular-weight size marker that runs alongside them. Factors such as buffer , charge/ voltage , and gel concentration can affect the mobility and/or appearance of the marker/ladder/standard. These elements need to be taken into consideration when selecting a marker and when analyzing the final results on a gel.
Previously, protein markers had been developed using a variety of whole proteins. The development of a kit including a molecular-weight size marker based on protein fragments began in 1993. This protein marker, composed of 49 different amino acid sequences, included multidomain proteins , and allowed for the analysis of proteins cleaved at different sites. [ 9 ]
Current improvements in protein marker technique involve the use of auto-development; the first auto-developing protein marker with regularly spaced molecular weights was introduced in 2012. [ 10 ]
Similar to DNA markers, these markers are typically composed of purified proteins whose molecular masses are already known. [ 3 ]
Molecular-weight size markers fall into two categories: molecular weight markers and molecular ladder markers. [ 14 ] Markers are either stained or unstained, and depending on the circumstance, one may be more appropriate than the other. Molecular-weight size markers can also be biochemically modified; [ 15 ] conjugation with biotin is the most common. Molecular-weight size markers are most commonly used in SDS-polyacrylamide gel electrophoresis and western blotting .
With all the different types and uses of molecular-weight size markers, it is important to choose the appropriate protein standard. Besides the most common use, calculating the molecular weight of the samples, other uses include providing visual evidence of protein migration and transfer efficiency; markers are sometimes even used as a positive control. [ 16 ]
As with DNA electrophoresis, conditions such as buffers, charge/voltage, and concentration should be taken into account when selecting a protein marker.
RNA ladders composed of RNA molecular-weight size markers were initially developed by using the synthetic circle method [ 21 ] to produce different-sized markers. This technique was improved upon by inventor Eric T. Kool to use circular DNA vectors as a method for producing RNA molecular-weight size markers. Also referred to as the rolling circle method, the improvement of this technique stems from its efficiency in synthesizing RNA oligonucleotides . From the circular DNA template, single-stranded RNA varying in length from 4 to 1500 nucleotides can be produced without the need for primers and by recycling nucleotide triphosphates . DNA can also be synthesized from the circular template, adding to this technique's versatility. In comparison to runoff transcription , the synthetic circle method produces RNA oligonucleotides without the runoff. In comparison to PCR , the synthetic circle method produces RNA oligonucleotides without the need for a polymerase or a thermal cycler . This method is also cost-efficient in its ability to synthesize large amounts of product at a lower error rate than machine synthesizers. [ 21 ]
The RNA markers consist of RNA transcripts of various incrementing lengths. For example, the Lonza 0.5-9 kb marker [ 22 ] has bands marking 0.5, 1, 1.5, 2, 2.5, 3, 4, 5, 6, and 9 kilobases. Markers are dissolved in a storage buffer, such as EDTA , and can have a shelf life of up to 2 years when stored at -80 °C. To use the marker, such as for northern blot analysis, it is first thawed and then stained so that it is detectable during gel electrophoresis. One of the most common dyes used for markers is ethidium bromide .
The range of a particular marker refers to the span of fragment sizes it can resolve. A "high" range refers to relatively large fragments (measured in kb) while a "low" range refers to markers that distinguish between small fragments (measured in bp). Some markers can even be described as "ultra-low range", [ 16 ] but even more precise is the microRNA marker, which can resolve RNA fragments only a few dozen nucleotides long, such as the 17-25 nt microRNA marker. [ 23 ]
At equivalent molecular weights, RNA will migrate faster than DNA. However, for both RNA and DNA the relationship between migration distance and the logarithm of molecular weight is linear with a negative slope; [ 24 ] that is, lighter samples migrate a greater distance. This relationship is a consideration when choosing RNA or DNA markers as a standard.
When running RNA markers and RNA samples on a gel, it is important to prevent nuclease contamination, as RNA is very sensitive to ribonuclease (RNase) degradation through catalysis . [ 25 ] [ 26 ] Thus, all materials to be used in the procedure must be taken into consideration. Any glassware that is to come into contact with RNA should be pretreated with diethylpyrocarbonate (DEPC) and plastic materials should be disposable. [ 25 ]
One of the most common uses for molecular-weight size markers is in gel electrophoresis. The purpose of gel electrophoresis is to separate proteins by physical or chemical properties , which include charge, molecular size, and pH . When separating based on size, the ideal method is SDS-PAGE ( polyacrylamide gel electrophoresis in the presence of sodium dodecyl sulfate), and molecular-weight size markers are the appropriate standards to use.
Gels can vary in size. The number of samples to be run will determine the appropriate gel size. All gels are divided into lanes that run parallel through the gel. Each lane will contain a specific sample. Typically, molecular-weight size standards are placed in an outer lane. If a gel has a particularly high number of lanes, then multiple ladders may be placed across the gel for higher clarity.
Proteins and standards are pipetted onto the gel in the appropriate lanes. Sodium dodecyl sulfate (SDS) interacts with proteins, denaturing them and giving them a negative charge. Since all proteins then have the same charge-to-mass ratio, protein mobility through the gel is based solely on molecular weight. Once the electric field is turned on, protein migration begins. Upon completion, a detection method such as western blotting can be used, which will reveal the presence of bands; each band represents a specific protein. Since the distance of travel is based solely on molecular weight, the molecular weight of each protein can be determined by comparing the distance migrated by an unknown protein to that of the standards of known molecular weight. [ 27 ]
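A minimal sketch of that standard-curve step, assuming made-up ladder data and the log-linear relationship between molecular weight and migration distance described earlier:

```python
import numpy as np

# Hypothetical ladder: migration distances (cm) and known masses (kDa).
ladder_distance_cm = np.array([1.0, 2.1, 3.3, 4.6, 6.0])
ladder_mass_kda    = np.array([250.0, 130.0, 70.0, 35.0, 15.0])

# Fit log10(mass) as a linear function of migration distance.
slope, intercept = np.polyfit(ladder_distance_cm, np.log10(ladder_mass_kda), 1)

def estimate_mass_kda(distance_cm):
    """Estimate the molecular weight of an unknown band from its migration."""
    return 10 ** (slope * distance_cm + intercept)

# A band that ran 3.9 cm, between the 70 and 35 kDa markers:
print(f"~{estimate_mass_kda(3.9):.0f} kDa")
```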
Many kinds of molecular-weight size markers exist, and each possess unique characteristics, lending to their involvement in a number of biological techniques. Selection of a molecular-weight size marker depends upon the marker type (DNA, RNA, or protein) and the length range it offers (e.g. 1kb). Before selecting a molecular-weight size marker, it is important to become familiar with these characteristics and properties. In a particular instance one type may be more appropriate than another. Although specific markers can vary between protocols for a given technique, this section will outline general markers and their roles.
The first type of molecular marker developed and run on gel electrophoresis were allozymes . These markers are used for the detection of protein variation. The word "allozyme" (also known as "alloenzyme") comes from " allelic variants of enzymes ." [ 28 ] When run on a gel, proteins are separated by size and charge. Although allozymes may seem dated when compared to the other markers available, they are still used today, mainly due to their low cost. One major downside is that, since only a limited number are available, specificity is an issue. [ 28 ]
Although allozymes can detect variations in DNA, they do so indirectly and not very accurately. DNA-based markers, which were developed in the 1960s, [ 28 ] are much more effective at distinguishing between DNA variants. Today these are the most commonly used markers. DNA-based markers work by surveying nucleotides, which can serve a variety of functions, such as detecting differences in nucleotides or even quantifying the number of mutations . [ 28 ]
The success of DNA-based markers led to the development of PCR. PCR ( polymerase chain reaction ) is a DNA amplification technique that can be applied to various types of fragments. Prior to this development, DNA had to be cloned or isolated before it could be amplified. Shortly after the discovery of PCR came the idea of using PCR-based markers for gel electrophoresis. These types of markers are based on PCR primers and are categorized as DNA sequence polymorphism . [ 28 ]
Although, technically speaking, DNA sequence polymorphism has been studied since the use of RFLP in the 1960s, the analysis has changed significantly over the years. Modern DNA sequence polymorphism analysis uses older techniques like RFLP, but on a larger scale. Sequencing is much faster and more efficient, and the analysis is automated, as it uses a technique known as shotgun sequencing. This high-throughput method is commonly used in population genetics. [ 28 ]
Carbohydrate markers are employed in a technique known as polysaccharide analysis by carbohydrate gel electrophoresis (PACE), which is a quantitative separation technique. [ 36 ] It allows for the analysis of enzyme hydrolysis products. [ 36 ] It has been used in applications such as characterizing enzymes involved in hemicellulose degradation, determining the structure of hemicellulose polysaccharides, and analyzing the products of enzymatic cleavage of cellulose. [ 36 ]
PACE depends on derivatization, which is the conversion of a chemical compound into a derivative . [ 36 ] [ 37 ] Here monosaccharides , oligosaccharides , and polysaccharides are the compounds of interest. They are labeled at their reducing ends with a fluorescent label (i.e. a fluorophore ). [ 36 ] This derivatization with a fluorophore permits both separation on a gel under the desired circumstances and fluorescence imaging of the gel. In this case, a polyacrylamide gel is used. [ 36 ]
As with DNA, RNA, and protein electrophoresis, markers are run alongside the samples of interest in carbohydrate gel electrophoresis. [ 36 ] The markers consist of oligosaccharides of known molecular weight. Like the samples of interest, the marker is also derivatized with a fluorophore (usually with 8-aminonaphthalene-1,3,6-trisulfonic acid (ANTS) or 2-aminoacridone ). [ 36 ] | https://en.wikipedia.org/wiki/Molecular-weight_size_marker
MolecularLab is an Italian science website specializing in biotechnology and molecular biology , with news, forums , and events. With over 4 million page views in May 2009, it is the most visited Italian science webzine .
MolecularLab has several objectives.
The site was founded in 2001 by Riccardo Fallini with the publication of notes from a degree course. In 2003 the Molecularlab.it site was created and new features were gradually added: daily news was added to the didactic material, followed by a community . It was the first daily news organization specializing in Italian science. [ 1 ]
One of the features of this site has been the intensive use of systems for distributing news: custom feeds for staying updated on certain categories or topics, [ 2 ] a news ticker and widgets for syndication to other sites, an ICS file for syncing with the events reported by MolecularLab, and other tools such as browser plugins for searching the site. Since January 2006 [ 3 ] MolecularLab has been in partnership with World Community Grid to promote the use of computers in biomedical research.
During the first years, comments on news items were open and unmoderated, making it possible for users to start discussions of interest. From 2007, comments were moderated and limited to registered users.
Later a multimedia section, a science directory, a section for beginners, and a glossary were added. The last section includes a quiz.
The publication, still owned by Riccardo Fallini, works with the European information system CORDIS and the consumer association Aduc, receives press releases from the major Italian research institutions, and, through a group of researchers, runs the first scientific blog network in Italy. [ 4 ] | https://en.wikipedia.org/wiki/MolecularLab
Molecular Biology of the Cell is a cellular and molecular biology textbook published by W.W. Norton & Co and currently authored by Bruce Alberts , Rebecca Heald , David Morgan, Martin Raff , Keith Roberts, and Peter Walter . The book was first published in 1983 by Garland Science and is now in its seventh edition. The molecular biologist James Watson contributed to the first three editions.
Molecular Biology of the Cell is widely used in introductory courses at the university level, being considered a reference in many libraries and laboratories around the world. It describes the current understanding of cell biology and includes basic biochemistry , experimental methods for investigating cells, the properties common to most eukaryotic cells, the expression and transmission of genetic information, the internal organization of cells, and the behavior of cells in multicellular organisms. [ 1 ] Molecular Biology of the Cell has been described as "the most influential cell biology textbook of its time". [ 2 ] The sixth edition is dedicated to the memory of co-author Julian Lewis , who died in early 2014.
The book was the first to position cell biology as a central discipline for biology and medicine , and immediately became a landmark textbook. [ 3 ] It was written in intense collaborative sessions in which the authors lived together over periods of time, [ 3 ] organized by editor Miranda Robertson , then-Biology Editor of Nature . [ 4 ] | https://en.wikipedia.org/wiki/Molecular_Biology_of_the_Cell_(book) |
In chemistry , molecular Borromean rings are an example of a mechanically-interlocked molecular architecture in which three macrocycles are interlocked in such a way that breaking any macrocycle allows the others to dissociate . They are the smallest examples of Borromean rings . The synthesis of molecular Borromean rings was reported in 2004 by the group of J. Fraser Stoddart . The so-called Borromeate is made up of three interpenetrated macrocycles formed through templated self assembly as complexes of zinc . [ 1 ]
The synthesis of the macrocyclic systems involves the self-assembly of two organic building blocks: 2,6-diformylpyridine (an aromatic compound with two aldehyde groups positioned ortho to the nitrogen atom of the pyridine ring) and a symmetric diamine containing a meta -substituted 2,2'-bipyridine group. Zinc acetate is added as the template for the reaction, resulting in one zinc cation in each of the six pentacoordinate complexation sites. Trifluoroacetic acid (TFA) is added to catalyse the imine bond-forming reactions. [ 1 ] The preparation of the tri-ring Borromeate involves a total of 18 precursor molecules and is only possible because the building blocks self-assemble through 12 aromatic pi-pi interactions and 30 zinc-to-nitrogen dative bonds . Because of these interactions, the Borromeate is thermodynamically the most stable of the potentially many reaction products. Since all the reactions taking place are equilibria , the Borromeate is the predominant reaction product. [ 1 ]
Reduction with sodium borohydride in ethanol affords the neutral Borromeand . [ 2 ] With the zinc removed, the three macrocycles are no longer chemically bonded but remain "mechanically entangled in such a way that if only one of the rings is removed the other two can part company." [ 3 ] The Borromeand is thus a true Borromean system as cleavage of just one imine bond (to an amine and an acetal ) in this structure breaks the mechanical bond between the three constituent macrocycles, releasing the other two individual rings. [ 1 ] [ 2 ] A borromeand differs from a [3]catenane in that none of its three macrocycles is concatenated with another; if one bond in a [3]catenane is broken and a cycle removed, a [2]catenane can remain. [ 4 ]
Organic synthesis of this seemingly complex compound is in reality fairly simple; for this reason, the Stoddart group has suggested it as a gram-scale laboratory activity for undergraduate organic chemistry courses. [ 5 ] | https://en.wikipedia.org/wiki/Molecular_Borromean_rings |
Molecular Discovery Ltd is a software company working in the area of drug discovery .
Founded in 1984 by Peter Goodford, the company's aim was to provide the GRID [ 1 ] software to scientists working in the field of drug design; GRID enabled one of the first examples of rational drug design [ 2 ] with the discovery of Zanamivir in 1989. In combination with statistical methods such as GOLPE, GRID's method of modeling molecular interactions (known as a "forcefield") can also be used to perform 3D- QSAR .
In the last decade, the GRID forcefield has been applied to other areas of drug discovery, including virtual screening , scaffold-hopping, ADME and pharmacokinetic modelling, optimisation of metabolic stability and metabolite prediction, as well as pKa and tautomer modelling.
Molecular Discovery manages a Cytochrome P450 Consortium aimed at generating a large set of homogeneous experimental data for human metabolism, allowing the development of predictive in silico models. [ 3 ]
| https://en.wikipedia.org/wiki/Molecular_Discovery
Molecular Diversity is a quarterly peer-reviewed scientific journal published by Springer Science+Business Media covering research on molecular diversity and combinatorial chemistry in basic and applied research and drug discovery . The journal publishes both short and full-length papers, perspectives, news, and reviews. Coverage addresses the generation of molecular diversity, application of diversity for screening against alternative targets of all types, and the analysis of results and their applications. The journal was established in 1995 and the editors-in-chief are Hong-yu Li and Kunal Roy.
The journal ceased publication at the end of 2000, but was revived in 2003 when it absorbed Molecular Diversity Preservation International 's Journal of Molecular Diversity ( ISSN 1424-7917 ). Shu-Kun Lin served as editor-in-chief and edited volumes 6-11 (2003-2007). [ 1 ] He was succeeded by Guillermo A. Morales, who served as editor-in-chief until June 2018.
The journal is abstracted and indexed in several bibliographic databases.
According to the Journal Citation Reports , the journal has a 2022 impact factor of 3.8. [ 2 ] | https://en.wikipedia.org/wiki/Molecular_Diversity |
The Molecular Frontiers Foundation ( MFF ) was founded under the auspices of the Royal Swedish Academy of Sciences in 2007 by Bengt Nordén , a professor of physical chemistry at Chalmers University of Technology in Sweden and the former chair of the Nobel Committee for Chemistry. Part of the mission of MFF, according to Nordén, is to counter the "increasingly bad image that chemistry has in society" and the "decreasing interest in science by the young generation". [ 1 ] Founding members of the Molecular Frontiers Foundation include Magdalena Eriksson, Lorie Karnath (founder of the Molecular Frontiers Journal), and Shuguang Zhang (MIT).
The MFF counts eleven Nobel Laureates amongst its 29-member Scientific Advisory Board. [ 2 ]
It holds a number of international symposia around the world.
May 23–24, 2017, Stockholm, Sweden. Title: Tailored Biology: Fundamental and Medicinal Insights. Co-chairs: Bengt Nordén and Lorie Karnath.
May 9–10, 2019, Stockholm, Sweden. Title: Planet Earth: A Scientific Journey. Co-chairs: Bengt Nordén and Lorie Karnath.
March 6–7, 2023, Berkeley, CA. Title: On the Nature of Water. [ 3 ] Co-chairs: Omar Yaghi, Lorie Karnath, Bengt Nordén, Douglas Clark, Peidong Yang.
Through its science-discussion website "MoleClues", the foundation awards the yearly "Molecular Frontiers Inquiry Prize" also known as the "kid Nobel" to equal numbers of girls and boys from around the world for asking the most penetrating scientific question. [ 4 ] The entries are collected online and judged by the MFF Scientific Advisory Board during the annual Spring MFF Youth Forum in the Royal Swedish Academy of Sciences in Stockholm , Sweden.
In April 2023, Lorie Karnath was elected as president of the organization by the Molecular Frontiers Foundation board. The term is for a three-year period. | https://en.wikipedia.org/wiki/Molecular_Frontiers_Foundation
In atomic, molecular, and optical physics and quantum chemistry , the molecular Hamiltonian is the Hamiltonian operator representing the energy of the electrons and nuclei in a molecule . This operator and the associated Schrödinger equation play a central role in computational chemistry and physics for computing properties of molecules and aggregates of molecules, such as thermal conductivity , specific heat , electrical conductivity , optical , and magnetic properties , and reactivity .
The elementary parts of a molecule are the nuclei, characterized by their atomic numbers , Z , and the electrons, which have negative elementary charge , − e . Together they give a total molecular charge of Ze + q , where q = − eN , with N equal to the number of electrons. Electrons and nuclei are, to a very good approximation, point charges and point masses. The molecular Hamiltonian is a sum of several terms: its major terms are the kinetic energies of the electrons and the Coulomb (electrostatic) interactions between the two kinds of charged particles. The Hamiltonian that contains only the kinetic energies of electrons and nuclei, and the Coulomb interactions between them, is known as the Coulomb Hamiltonian . From it are missing a number of small terms, most of which are due to electronic and nuclear spin .
Although it is generally assumed that the solution of the time-independent Schrödinger equation associated with the Coulomb Hamiltonian will predict most properties of the molecule, including its shape (three-dimensional structure), calculations based on the full Coulomb Hamiltonian are very rare. The main reason is that its Schrödinger equation is very difficult to solve. Applications are restricted to small systems like the hydrogen molecule.
Almost all calculations of molecular wavefunctions are based on the separation of the Coulomb Hamiltonian first devised by Born and Oppenheimer . The nuclear kinetic energy terms are omitted from the Coulomb Hamiltonian and one considers the remaining Hamiltonian as a Hamiltonian of electrons only. The stationary nuclei enter the problem only as generators of an electric potential in which the electrons move in a quantum mechanical way. Within this framework the molecular Hamiltonian has been simplified to the so-called clamped nucleus Hamiltonian , also called electronic Hamiltonian , that acts only on functions of the electronic coordinates.
Once the Schrödinger equation of the clamped nucleus Hamiltonian has been solved for a sufficient number of constellations of the nuclei, an appropriate eigenvalue (usually the lowest) can be seen as a function of the nuclear coordinates, which leads to a potential energy surface . In practical calculations the surface is usually fitted in terms of some analytic functions. In the second step of the Born–Oppenheimer approximation the part of the full Coulomb Hamiltonian that depends on the electrons is replaced by the potential energy surface. This converts the total molecular Hamiltonian into another Hamiltonian that acts only on the nuclear coordinates. In the case of a breakdown of the Born–Oppenheimer approximation —which occurs when energies of different electronic states are close—the neighboring potential energy surfaces are needed, see this article for more details on this.
The nuclear motion Schrödinger equation can be solved in a space-fixed (laboratory) frame , but then the translational and rotational (external) energies are not accounted for. Only the (internal) atomic vibrations enter the problem. Further, for molecules larger than triatomic ones, it is quite common to introduce the harmonic approximation , which approximates the potential energy surface as a quadratic function of the atomic displacements. This gives the harmonic nuclear motion Hamiltonian . Making the harmonic approximation, we can convert the Hamiltonian into a sum of uncoupled one-dimensional harmonic oscillator Hamiltonians. The one-dimensional harmonic oscillator is one of the few systems that allows an exact solution of the Schrödinger equation.
Alternatively, the nuclear motion (rovibrational) Schrödinger equation can be solved in a special frame (an Eckart frame ) that rotates and translates with the molecule. Formulated with respect to this body-fixed frame the Hamiltonian accounts for rotation , translation and vibration of the nuclei. Since Watson introduced in 1968 an important simplification to this Hamiltonian, it is often referred to as Watson's nuclear motion Hamiltonian , but it is also known as the Eckart Hamiltonian .
The algebraic form of many observables (i.e., Hermitian operators representing observable quantities) is obtained by the following quantization rules : each position vector is replaced by the corresponding multiplication operator, and each momentum vector $\mathbf{p}$ by the differential operator $-i\hbar\nabla$.
Classically the electrons and nuclei in a molecule have kinetic energy of the form $p^2/(2m)$ and interact via Coulomb interactions , which are inversely proportional to the distance $r_{ij}$ between particles $i$ and $j$:
$$r_{ij} \equiv |\mathbf{r}_i-\mathbf{r}_j| = \sqrt{(\mathbf{r}_i-\mathbf{r}_j)\cdot(\mathbf{r}_i-\mathbf{r}_j)} = \sqrt{(x_i-x_j)^2+(y_i-y_j)^2+(z_i-z_j)^2}.$$
In this expression r i stands for the coordinate vector of any particle (electron or nucleus), but from here on we will reserve capital R to represent the nuclear coordinate, and lower case r for the electrons of the system. The coordinates can be taken to be expressed with respect to any Cartesian frame centered anywhere in space, because distance, being an inner product, is invariant under rotation of the frame and, being the norm of a difference vector, distance is invariant under translation of the frame as well.
By quantizing the classical energy in Hamilton form one obtains a molecular Hamilton operator that is often referred to as the Coulomb Hamiltonian . This Hamiltonian is a sum of five terms: the kinetic energy operators of the nuclei, the kinetic energy operators of the electrons, the Coulomb attraction between electrons and nuclei, the electron–electron repulsion, and the nucleus–nucleus repulsion.
Here $M_i$ is the mass of nucleus $i$, $Z_i$ is the atomic number of nucleus $i$, and $m_e$ is the mass of the electron. The Laplace operator of particle $i$ is
$$\nabla_{\mathbf{r}_i}^2 \equiv \nabla_{\mathbf{r}_i}\cdot\nabla_{\mathbf{r}_i} = \frac{\partial^2}{\partial x_i^2}+\frac{\partial^2}{\partial y_i^2}+\frac{\partial^2}{\partial z_i^2}.$$
Since the kinetic energy operator is an inner product, it is invariant under rotation of the Cartesian frame with respect to which $x_i$, $y_i$, and $z_i$ are expressed.
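As an illustration of the Coulomb terms only (not of solving the Schrödinger equation), a short sketch that evaluates the classical pairwise Coulomb potential for point charges; the use of atomic units (e = 1, 1/(4πε0) = 1) and the toy geometry are my assumptions:

```python
import numpy as np

def coulomb_potential_energy(positions, charges):
    """Sum of q_i * q_j / r_ij over all distinct pairs (atomic units)."""
    energy = 0.0
    n = len(charges)
    for i in range(n):
        for j in range(i + 1, n):
            r_ij = np.linalg.norm(positions[i] - positions[j])
            energy += charges[i] * charges[j] / r_ij
    return energy

# Toy H2+-like system: two protons (+1) and one electron (-1), positions in bohr.
pos = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [1.0, 0.5, 0.0]])
q = np.array([1.0, 1.0, -1.0])
print(coulomb_potential_energy(pos, q))
```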
In the 1920s much spectroscopic evidence made it clear that the Coulomb Hamiltonian is missing certain terms. Especially for molecules containing heavier atoms, these terms, although much smaller than kinetic and Coulomb energies, are non-negligible. These spectroscopic observations led to the introduction of a new degree of freedom for electrons and nuclei, namely spin . This empirical concept was given a theoretical basis by Paul Dirac when he introduced a relativistically correct ( Lorentz covariant ) form of the one-particle Schrödinger equation. The Dirac equation predicts that spin and spatial motion of a particle interact via spin–orbit coupling . By analogy, spin-other-orbit coupling was introduced. The fact that particle spin has some of the characteristics of a magnetic dipole led to spin–spin coupling . Further terms without a classical counterpart are the Fermi-contact term (interaction of electronic density on a finite-size nucleus with the nucleus), and nuclear quadrupole coupling (interaction of a nuclear quadrupole with the gradient of an electric field due to the electrons). Finally, a parity-violating term predicted by the Standard Model must be mentioned. Although it is an extremely small interaction, it has attracted a fair amount of attention in the scientific literature because it gives different energies for the enantiomers in chiral molecules .
The remaining part of this article will ignore spin terms and consider the solution of the eigenvalue (time-independent Schrödinger) equation of the Coulomb Hamiltonian.
The Coulomb Hamiltonian has a continuous spectrum due to the center of mass (COM) motion of the molecule in homogeneous space. In classical mechanics it is easy to separate off the COM motion of a system of point masses. Classically the motion of the COM is uncoupled from the other motions. The COM moves uniformly (i.e., with constant velocity) through space as if it were a point particle with mass equal to the sum M tot of the masses of all the particles.
In quantum mechanics a free particle has as state function a plane wave, which is a non-square-integrable function of well-defined momentum. The kinetic energy $\mathbf{p}^2/(2M_\text{tot})$ of this particle can take any positive value. The position of the COM is uniformly probable everywhere, in agreement with the Heisenberg uncertainty principle .
By introducing the coordinate vector X of the center of mass as three of the degrees of freedom of the system and eliminating the coordinate vector of one (arbitrary) particle, so that the number of degrees of freedom stays the same, one obtains by a linear transformation a new set of coordinates t i . These coordinates are linear combinations of the old coordinates of all particles (nuclei and electrons). By applying the chain rule one can show that
$$H = -\frac{\hbar^2}{2M_\text{tot}}\nabla_{\mathbf{X}}^2 + H' \quad\text{with}\quad H' = -\frac{\hbar^2}{2}\sum_{i=1}^{N_\text{tot}-1}\frac{1}{m_i}\nabla_i^2 + \frac{\hbar^2}{2M_\text{tot}}\sum_{i,j=1}^{N_\text{tot}-1}\nabla_i\cdot\nabla_j + V(\mathbf{t}).$$
The first term of $H$ is the kinetic energy of the COM motion, which can be treated separately since $H'$ does not depend on $\mathbf{X}$. As just stated, its eigenstates are plane waves. The potential $V(\mathbf{t})$ consists of the Coulomb terms expressed in the new coordinates. The first term of $H'$ has the usual appearance of a kinetic energy operator. The second term is known as the mass polarization term. The translationally invariant Hamiltonian $H'$ can be shown to be self-adjoint and to be bounded from below; that is, its lowest eigenvalue is real and finite. Although $H'$ is necessarily invariant under permutations of identical particles (since $H$ and the COM kinetic energy are invariant), its invariance is not manifest.
Not many actual molecular applications of $H'$ exist; see, however, the seminal work [ 1 ] on the hydrogen molecule for an early application. In the great majority of computations of molecular wavefunctions the electronic problem is solved with the clamped nucleus Hamiltonian arising in the first step of the Born–Oppenheimer approximation .
See Ref. [ 2 ] for a thorough discussion of the mathematical properties of the Coulomb Hamiltonian. That paper also discusses whether one can arrive a priori at the concept of a molecule (as a stable system of electrons and nuclei with a well-defined geometry) from the properties of the Coulomb Hamiltonian alone.
The clamped nucleus Hamiltonian, which is also often called the electronic Hamiltonian, [ 3 ] [ 4 ] describes the energy of the electrons in the electrostatic field of the nuclei, where the nuclei are assumed to be stationary with respect to an inertial frame.
The form of the electronic Hamiltonian is
$$\hat H_\mathrm{el} = \hat T_e + \hat U_{en} + \hat U_{ee} + \hat U_{nn}.$$
The coordinates of electrons and nuclei are expressed with respect to a frame that moves with the nuclei, so that the nuclei are at rest with respect to this frame. The frame stays parallel to a space-fixed frame. It is an inertial frame because the nuclei are assumed not to be accelerated by external forces or torques. The origin of the frame is arbitrary; it is usually positioned on a central nucleus or in the nuclear center of mass. Sometimes it is stated that the nuclei are "at rest in a space-fixed frame". This statement implies that the nuclei are viewed as classical particles, because a quantum mechanical particle cannot be at rest: being at rest would mean having simultaneously zero momentum and a well-defined position, which contradicts Heisenberg's uncertainty principle.
Since the nuclear positions are constants, the electronic kinetic energy operator is invariant under translation over any nuclear vector. The Coulomb potential, depending on difference vectors, is invariant as well. In the description of atomic orbitals and the computation of integrals over atomic orbitals this invariance is used by equipping all atoms in the molecule with their own localized frames parallel to the space-fixed frame.
As explained in the article on the Born–Oppenheimer approximation , a sufficient number of solutions of the Schrödinger equation of $H_\text{el}$ leads to a potential energy surface (PES) $V(\mathbf{R}_1,\mathbf{R}_2,\ldots,\mathbf{R}_N)$. It is assumed that the functional dependence of $V$ on its coordinates is such that
$$V(\mathbf{R}_1,\mathbf{R}_2,\ldots,\mathbf{R}_N) = V(\mathbf{R}'_1,\mathbf{R}'_2,\ldots,\mathbf{R}'_N)$$
for
$$\mathbf{R}'_i = \mathbf{R}_i + \mathbf{t}\;\;\text{(translation) and}\;\;\mathbf{R}'_i = \mathbf{R}_i + \frac{\Delta\phi}{|\mathbf{s}|}\,(\mathbf{s}\times\mathbf{R}_i)\;\;\text{(infinitesimal rotation)},$$
where $\mathbf{t}$ and $\mathbf{s}$ are arbitrary vectors and $\Delta\phi$ is an infinitesimal angle, $\Delta\phi \gg \Delta\phi^2$. This invariance condition on the PES is automatically fulfilled when the PES is expressed in terms of differences of, and angles between, the $\mathbf{R}_i$, which is usually the case.
In the remaining part of this article we assume that the molecule is semi-rigid . In the second step of the BO approximation the nuclear kinetic energy $T_\text{n}$ is reintroduced and the Schrödinger equation with Hamiltonian
$$\hat H_\mathrm{nuc} = -\frac{\hbar^2}{2}\sum_{i=1}^{N}\sum_{\alpha=1}^{3}\frac{1}{M_i}\frac{\partial^2}{\partial R_{i\alpha}^2} + V(\mathbf{R}_1,\ldots,\mathbf{R}_N)$$
is considered. One would like to recognize in its solution: the motion of the nuclear center of mass (3 degrees of freedom), the overall rotation of the molecule (3 degrees of freedom), and the nuclear vibrations. In general, this is not possible with the given nuclear kinetic energy, because it does not separate explicitly the 6 external degrees of freedom (overall translation and rotation) from the $3N-6$ internal degrees of freedom. In fact, the kinetic energy operator here is defined with respect to a space-fixed (SF) frame. If we were to move the origin of the SF frame to the nuclear center of mass, then, by application of the chain rule , nuclear mass polarization terms would appear. It is customary to ignore these terms altogether and we will follow this custom.
In order to achieve a separation we must distinguish internal and external coordinates, to which end Eckart introduced conditions to be satisfied by the coordinates. We will show how these conditions arise in a natural way from a harmonic analysis in mass-weighted Cartesian coordinates.
In order to simplify the expression for the kinetic energy we introduce mass-weighted displacement coordinates $\boldsymbol{\rho}_i \equiv \sqrt{M_i}\,(\mathbf{R}_i-\mathbf{R}_i^0)$. Since
$$\frac{\partial}{\partial \rho_{i\alpha}} = \frac{1}{\sqrt{M_i}}\frac{\partial}{\partial R_{i\alpha}},$$
the kinetic energy operator becomes
$$T = -\frac{\hbar^2}{2}\sum_{i=1}^{N}\sum_{\alpha=1}^{3}\frac{\partial^2}{\partial \rho_{i\alpha}^2}.$$
If we make a Taylor expansion of $V$ around the equilibrium geometry,
$$V = V_0 + \sum_{i=1}^{N}\sum_{\alpha=1}^{3}\left(\frac{\partial V}{\partial \rho_{i\alpha}}\right)_0 \rho_{i\alpha} + \frac{1}{2}\sum_{i,j=1}^{N}\sum_{\alpha,\beta=1}^{3}\left(\frac{\partial^2 V}{\partial \rho_{i\alpha}\,\partial \rho_{j\beta}}\right)_0 \rho_{i\alpha}\rho_{j\beta} + \cdots,$$
and truncate after three terms (the so-called harmonic approximation), we can describe $V$ with only the third term. The term $V_0$ can be absorbed in the energy (it gives a new zero of energy). The second term is vanishing because of the equilibrium condition. The remaining term contains the Hessian matrix $\mathbf{F}$ of $V$, which is symmetric and may be diagonalized with an orthogonal $3N\times 3N$ matrix with constant elements:
$$\mathbf{Q}\,\mathbf{F}\,\mathbf{Q}^{\mathrm{T}} = \boldsymbol{\Phi}\quad\text{with}\quad \boldsymbol{\Phi} = \operatorname{diag}(f_1,\ldots,f_{3N-6},0,\ldots,0).$$
It can be shown from the invariance of $V$ under rotation and translation that six of the eigenvectors of $\mathbf{F}$ (the last six rows of $\mathbf{Q}$) have eigenvalue zero (they are zero-frequency modes). They span the external space . The first $3N-6$ rows of $\mathbf{Q}$ are, for molecules in their ground state, eigenvectors with non-zero eigenvalue; they are the internal coordinates and form an orthonormal basis for a $(3N-6)$-dimensional subspace of the nuclear configuration space $\mathbf{R}^{3N}$, the internal space . The zero-frequency eigenvectors are orthogonal to the eigenvectors of non-zero frequency. It can be shown that these orthogonalities are in fact the Eckart conditions . The kinetic energy expressed in the internal coordinates is the internal (vibrational) kinetic energy.
With the introduction of normal coordinates
$$q_t \equiv \sum_{i=1}^{N}\sum_{\alpha=1}^{3} Q_{t,i\alpha}\,\rho_{i\alpha},$$
the vibrational (internal) part of the Hamiltonian for the nuclear motion becomes, in the harmonic approximation,
$$\hat H_\text{nuc} \approx \frac{1}{2}\sum_{t=1}^{3N-6}\left[-\hbar^2\frac{\partial^2}{\partial q_t^2} + f_t\,q_t^2\right].$$
The corresponding Schrödinger equation is easily solved: it factorizes into $3N-6$ equations for one-dimensional harmonic oscillators . The main effort in this approximate solution of the nuclear motion Schrödinger equation is the computation of the Hessian $\mathbf{F}$ of $V$ and its diagonalization.
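That recipe (mass-weight the Hessian, diagonalize, read off the force constants $f_t$) is a few lines of linear algebra; a sketch with NumPy, assuming a Cartesian Hessian is already available from some electronic-structure calculation:

```python
import numpy as np

def normal_modes(hessian, masses):
    """Harmonic analysis of a 3N x 3N Cartesian Hessian.

    hessian: second derivatives of V at the equilibrium geometry.
    masses:  length-N array of nuclear masses.
    Returns the eigenvalues (force constants in mass-weighted coordinates)
    and the rows of Q (mass-weighted normal-mode vectors).
    """
    m3 = np.repeat(masses, 3)            # one mass per Cartesian coordinate
    inv_sqrt_m = 1.0 / np.sqrt(m3)
    F = hessian * np.outer(inv_sqrt_m, inv_sqrt_m)   # mass-weighted Hessian
    f, Q_columns = np.linalg.eigh(F)     # eigenvalues in ascending order
    # The six (near-)zero eigenvalues are the translational/rotational
    # zero-frequency modes; the remaining 3N-6 are the internal modes f_t.
    return f, Q_columns.T
```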
This approximation to the nuclear motion problem, described in 3 N mass-weighted Cartesian coordinates, became standard in quantum chemistry , since the days (1980s-1990s) that algorithms for accurate computations of the Hessian F became available. Apart from the harmonic approximation, it has as a further deficiency that the external (rotational and translational) motions of the molecule are not accounted for. They are accounted for in a rovibrational Hamiltonian that sometimes is called Watson's Hamiltonian .
In order to obtain a Hamiltonian for external (translation and rotation) motions coupled to the internal (vibrational) motions, it is common to return at this point to classical mechanics and to formulate the classical kinetic energy corresponding to these motions of the nuclei. Classically it is easy to separate the translational (center of mass) motion from the other motions. However, the separation of the rotational from the vibrational motion is more difficult and is not completely possible. This ro-vibrational separation was first achieved by Eckart [ 5 ] in 1935 by imposing what are now known as the Eckart conditions . Since the problem is described in a frame (an "Eckart" frame) that rotates with the molecule, and hence is a non-inertial frame , energies associated with the fictitious forces (the centrifugal and Coriolis forces ) appear in the kinetic energy.
In general, the classical kinetic energy $T$ defines the metric tensor $\mathbf{g} = (g_{ij})$ associated with the curvilinear coordinates $\mathbf{s} = (s_i)$ through
$$2T = \sum_{ij} g_{ij}\,\dot s_i\,\dot s_j.$$
The quantization step is the transformation of this classical kinetic energy into a quantum mechanical operator. It is common to follow Podolsky [ 6 ] by writing down the Laplace–Beltrami operator in the same (generalized, curvilinear) coordinates $\mathbf{s}$ as used for the classical form. The equation for this operator requires the inverse of the metric tensor $\mathbf{g}$ and its determinant. Multiplication of the Laplace–Beltrami operator by $-\hbar^2$ gives the required quantum mechanical kinetic energy operator. When we apply this recipe to Cartesian coordinates, which have unit metric, the same kinetic energy is obtained as by application of the quantization rules .
The nuclear motion Hamiltonian was obtained by Wilson and Howard in 1936, [ 7 ] who followed this procedure, and further refined by Darling and Dennison in 1940. [ 8 ] It remained the standard until 1968, when Watson [ 9 ] was able to simplify it drastically by commuting the determinant of the metric tensor through the derivatives. We will give the ro-vibrational Hamiltonian obtained by Watson, which is often referred to as the Watson Hamiltonian . Before we do this, we must mention that a derivation of this Hamiltonian is also possible by starting from the Laplace operator in Cartesian form, applying coordinate transformations, and using the chain rule . [ 10 ] The Watson Hamiltonian, describing all motions of the $N$ nuclei, is
$$\hat H = -\frac{\hbar^2}{2M_\mathrm{tot}}\sum_{\alpha=1}^{3}\frac{\partial^2}{\partial X_\alpha^2} + \frac{1}{2}\sum_{\alpha,\beta=1}^{3}\mu_{\alpha\beta}\,({\mathcal P}_\alpha-\Pi_\alpha)({\mathcal P}_\beta-\Pi_\beta) + U - \frac{\hbar^2}{2}\sum_{s=1}^{3N-6}\frac{\partial^2}{\partial q_s^2} + V.$$
The first term is the center of mass term, with
$$\mathbf{X} \equiv \frac{1}{M_\mathrm{tot}}\sum_{i=1}^{N} M_i\mathbf{R}_i \quad\text{with}\quad M_\mathrm{tot} \equiv \sum_{i=1}^{N} M_i.$$
The second term is the rotational term, akin to the kinetic energy of the rigid rotor . Here ${\mathcal P}_\alpha$ is the $\alpha$ component of the body-fixed rigid rotor angular momentum operator ,
see this article for its expression in terms of Euler angles . The operator $\Pi_\alpha$ is a component of an operator known as the vibrational angular momentum operator (although it does not satisfy angular momentum commutation relations),
$$\Pi_\alpha = -i\hbar\sum_{s,t=1}^{3N-6}\zeta^{\alpha}_{st}\;q_s\frac{\partial}{\partial q_t}$$
with the Coriolis coupling constant
$$\zeta^{\alpha}_{st} = \sum_{i=1}^{N}\sum_{\beta,\gamma=1}^{3}\epsilon_{\alpha\beta\gamma}\,Q_{s,i\beta}\,Q_{t,i\gamma}, \qquad \alpha = 1,2,3.$$
Here $\epsilon_{\alpha\beta\gamma}$ is the Levi-Civita symbol . The terms quadratic in the ${\mathcal P}_\alpha$ are centrifugal terms; those bilinear in ${\mathcal P}_\alpha$ and $\Pi_\beta$ are Coriolis terms. The quantities $Q_{s,i\gamma}$ are the components of the normal coordinates introduced above. Alternatively, normal coordinates may be obtained by application of Wilson's GF method . The $3\times 3$ symmetric matrix $\boldsymbol{\mu}$ is called the effective reciprocal inertia tensor . If all $q_s$ were zero (rigid molecule), the Eckart frame would coincide with a principal axes frame (see rigid rotor ), $\boldsymbol{\mu}$ would be diagonal with the equilibrium reciprocal moments of inertia on the diagonal, and only the kinetic energies of translation and rigid rotation would survive.
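Given the normal-mode coefficients $Q_{s,i\beta}$, the Coriolis constants follow mechanically from the definition above; a sketch assuming $Q$ is stored as a NumPy array of shape (3N-6, N, 3):

```python
import numpy as np

def coriolis_zeta(Q):
    """zeta[alpha, s, t] = sum over i, beta, gamma of
    eps(alpha, beta, gamma) * Q[s, i, beta] * Q[t, i, gamma],
    with Q of shape (n_modes, n_atoms, 3)."""
    eps = np.zeros((3, 3, 3))            # Levi-Civita symbol
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
    # Contract over atoms i and Cartesian indices beta, gamma.
    return np.einsum('abg,sib,tig->ast', eps, Q, Q)
```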
The potential-like term $U$ is the Watson term ,
$$U = -\frac{1}{8}\sum_{\alpha=1}^{3}\mu_{\alpha\alpha},$$
proportional to the trace of the effective reciprocal inertia tensor.
The fourth term in the Watson Hamiltonian is the kinetic energy associated with the vibrations of the atoms (nuclei) expressed in normal coordinates $q_s$, which, as stated above, are given in terms of the nuclear displacements $\rho_{i\alpha}$ by
$$q_s = \sum_{i=1}^{N}\sum_{\alpha=1}^{3} Q_{s,i\alpha}\,\rho_{i\alpha} \quad\text{for}\quad s = 1,\ldots,3N-6.$$
Finally, $V$ is the unexpanded potential energy, by definition depending on internal coordinates only. In the harmonic approximation it takes the form
$$V \approx \frac{1}{2}\sum_{s=1}^{3N-6} f_s\,q_s^2.$$ | https://en.wikipedia.org/wiki/Molecular_Hamiltonian
Molecular Informatics is a peer-reviewed scientific journal published by Wiley VCH . It covers research in cheminformatics , quantitative structure–activity relationships , and combinatorial chemistry . It was established in 1981 as Quantitative Structure-Activity Relationships and renamed to QSAR & Combinatorial Science in 2003, before obtaining its present name in 2010. According to the Journal Citation Reports , the journal has a 2012 impact factor of 2.338. [ 1 ]
| https://en.wikipedia.org/wiki/Molecular_Informatics
Molecular Koch's postulates are a set of experimental criteria that must be satisfied to show that a gene found in a pathogenic microorganism encodes a product that contributes to the disease caused by the pathogen. Genes that satisfy molecular Koch's postulates are often referred to as virulence factors . The postulates were formulated by the microbiologist Stanley Falkow in 1988 and are based on Koch's postulates . [ 1 ]
As per Falkow's original descriptions, the three postulates are: [ 1 ] (1) the phenotype or property under investigation should be associated with pathogenic members of a genus or pathogenic strains of a species; (2) specific inactivation of the gene(s) associated with the suspected virulence trait should lead to a measurable loss in pathogenicity or virulence; and (3) reversion or allelic replacement of the mutated gene should lead to restoration of pathogenicity.
To apply the molecular Koch's postulates to human diseases, researchers must identify which microbial genes are potentially responsible for symptoms of pathogenicity, often by sequencing the full genome to compare which nucleotides are homologous to the protein-coding genes of other species. Alternatively, scientists can identify which mRNA transcripts are at elevated levels in the diseased organs of infected hosts. Additionally, the tester must identify and verify methods for inactivating and reactivating the gene being studied. [ 2 ]
In 1996, Fredricks and Relman proposed seven molecular guidelines for establishing microbial disease causation: [ 3 ] | https://en.wikipedia.org/wiki/Molecular_Koch's_postulates |
The Molecular Materials Research Group ( MMRG ) is a multidisciplinary research group composed of several Ph.D. members as well as the expertise of other researchers in the field of Computational, Organic and Analytical Chemistry. [ 1 ]
Located at Madeira University in Madeira , its main scientific activity is devoted to the preparation and characterization of potentially useful molecular materials with enhanced electronic and biomedical properties. The development of new materials based in dendrimers for gene delivery and for non-linear optical applications is one of their primary goals.
| https://en.wikipedia.org/wiki/Molecular_Materials_Research_Group
The Molecular Modelling Toolkit ( MMTK ) is an open-source software package written in Python , which performs common tasks in molecular modelling . [ 1 ]
The Molecular Modeling Toolkit is a library that implements common molecular simulation techniques, with an emphasis on biomolecular simulations. It uses modern software engineering techniques (object-oriented design, a high-level language) in order to overcome limitations associated with the large monolithic simulation programs that are commonly used for biomolecules. Its principal advantages are (1) easy extension and combination with other libraries due to modular library design, (2) a single high-level general-purpose programming language (Python) is used for library implementation as well as for application scripts, (3) use of documented and machine-independent formats for all data files, and (4) interfaces to other simulation and visualization programs.
As of 28 April 2011 [update] , MMTK consists of about 18,000 lines of Python code, 12,000 lines of hand-written C code, and some machine-generated C code.
| https://en.wikipedia.org/wiki/Molecular_Modelling_Toolkit
Molecular Operating Environment (MOE) is a drug discovery software platform that integrates visualization, modeling and simulations, as well as methodology development, in one package. MOE scientific applications are used by biologists, medicinal chemists and computational chemists in pharmaceutical, biotechnology and academic research. MOE runs on Windows, Linux, Unix, and macOS. Main application areas in MOE include structure-based design, [ 1 ] fragment-based design , [ 2 ] ligand-based design, pharmacophore discovery, medicinal chemistry applications, biologics applications, structural biology and bioinformatics, protein and antibody modeling, molecular modeling and simulations, virtual screening, cheminformatics & QSAR . The Scientific Vector Language ( SVL ) is the built-in command, scripting and application development language of MOE.
The Molecular Operating Environment was developed by the Chemical Computing Group under the supervision of President/CEO Paul Labute. [ 3 ] Founded in 1994 [ 4 ] and based in Montreal, Quebec, Canada, this private company is dedicated to developing computational software for scientific research. The Chemical Computing Group maintains a team of mathematicians, scientists, and software engineers who continually refine and update MOE to advance the fields of theoretical/computational chemistry and biology, molecular modeling, and computer-driven molecular design. [ 5 ] Researchers specializing in pharmaceutics (drug discovery), computational chemistry, biotechnology, bioinformatics, cheminformatics, and molecular dynamics, simulations, and modeling are the main clients of the Chemical Computing Group.
MOE is a versatile software platform with main applications in 3D molecular visualization; structure-based protein–ligand design; antibody and biologics design; structure-based protein engineering; SAR and SPR visualization; ligand-based design; protein and DNA / RNA modeling; virtual screening ; 3D pharmacophore screening; fragment-based discovery; structural bioinformatics ; molecular mechanics and dynamics; peptide modeling; structural biology ; and cheminformatics and QSAR. [ 5 ]
Molecular modeling and simulation is a process often used in computational chemistry, but it has wide application for researchers in a variety of fields. This theoretical approach allows scientists to study the properties of molecules in detail, and the resulting data can provide insight into how these molecules may behave in biological and/or chemical systems. [ 6 ] This information is vital to the design of new materials and chemicals.
Molecular docking is a computational technique used primarily to analyze the binding affinity between a ligand and a receptor . Proteins are often studied using this technique, because data from molecular docking allow scientists to predict whether a ligand will bind to a specific molecule and, if so, how strongly. [ 7 ] Molecular docking can be used to predict the binding mode of known and/or novel ligands, and as a binding-affinity prediction instrument. [ 8 ] Binding affinity is measured by the change in energy: the more negative the energy, the more stable the complex and the tighter the ligand binds to the receptor. [ 9 ] Data from molecular docking can be used to design new compounds that are more or less efficient at binding to a specific molecule. Molecular docking is used extensively throughout drug discovery for these reasons. [ 10 ]
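As a rough illustration of how such energies relate to affinity, a predicted binding free energy can be converted to an estimated dissociation constant through ΔG° = RT ln Kd. A minimal Python sketch (the ligand names and scores below are hypothetical, not from the source):

```python
import math

R = 0.0019872  # gas constant, kcal/(mol*K)
T = 298.15     # temperature, K

def predicted_kd(binding_energy_kcal):
    """Convert a predicted binding free energy (kcal/mol; more
    negative = more favorable) into an estimated dissociation
    constant via dG = RT * ln(Kd)."""
    return math.exp(binding_energy_kcal / (R * T))

# hypothetical docking scores for three candidate ligands
scores = {"ligand_A": -9.8, "ligand_B": -7.2, "ligand_C": -11.1}
for name, dg in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{name}: dG = {dg:5.1f} kcal/mol, Kd ~ {predicted_kd(dg):.1e} M")
```

Ranking by score in this way is exactly how docking results are typically triaged before closer inspection of the binding poses.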
Preparing for molecular docking studies can involve many steps. When docking proteins, structures are obtained from the Protein Data Bank (PDB), an online, open-access resource containing the classification, structure/folding, organism, sequence length, mutations, genome, sequence, and other data relating to proteins. [ 11 ] The structure of a protein can be determined precisely through a process known as X-ray crystallography . This process involves directing a concentrated beam of X-rays at a crystal. [ 12 ] When X-rays strike a crystal, the crystal diffracts them in specific directions. [ 13 ] These directions allow scientists to map and determine the detailed structure of proteins, which is then recorded and uploaded to the PDB. [ 14 ]
The protein structure file is downloaded from the PDB and opened in a molecular docking software package. Many programs can facilitate molecular docking, such as AutoDock, DOCK, FlexX, HYDRO, LIGPLOT, SPROUT, STALK, [ 15 ] and Molegro Virtual Docker. [ 16 ] Alternatively, some protein structures have not been determined experimentally through X-ray crystallography and therefore are not found in the PDB. To produce a protein model that can be used for docking, scientists can use the amino acid sequence of a protein together with the UniProt database to find protein structures in the PDB that have similar amino acid sequences. [ 17 ] The amino acid sequence of the protein being modeled is then used in combination with the PDB structure of highest percent similarity (the template protein) to create the target protein used in docking. Although this method does not produce an exact model of the target protein, it allows scientists to produce the closest possible structure in order to conduct computational studies and gain some insight into the behavior of the protein. After the necessary molecules for docking are constructed, they are imported into a computational docking package such as MOE. In such a program, proteins can be visualized, and specific parts of the molecule can be isolated in order to obtain more precise data for a region of interest. A cavity, or region where the molecular docking will take place, is set around the binding site, the region of the receptor protein to which the ligand attaches. After the cavity is specified, the docking settings are configured and the program is run to determine the binding energy of the complex.
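The retrieval step is easy to script. A minimal sketch using only the Python standard library and the public RCSB download URL (the accession "1HVR", an HIV-1 protease entry, is just an illustrative choice):

```python
import urllib.request

def fetch_pdb(pdb_id, path=None):
    """Download a structure file from the RCSB Protein Data Bank
    and return the local file path."""
    url = f"https://files.rcsb.org/download/{pdb_id.upper()}.pdb"
    path = path or f"{pdb_id.upper()}.pdb"
    urllib.request.urlretrieve(url, path)
    return path

# e.g. fetch_pdb("1HVR") saves 1HVR.pdb for import into a docking program
```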
Molecular dynamics simulation is a computational technique that predicts the movement of every atom in a molecule over time. [ 18 ] Molecular dynamics can evaluate the movement of water, ions, small molecules and macromolecules, or even complex systems, which is extremely useful for reproducing the behavior of chemical and biological environments. [ 19 ] This theoretical approach allows scientists to gain further insight into how molecules may behave with respect to each other, specifically whether a molecule will leave or remain in a binding pocket. If a molecule remains in a binding pocket, this often indicates that the molecule forms a stable, energetically favorable complex with the receptor. [ 20 ] On the other hand, if the molecule leaves the binding pocket, this indicates that the complex is not stable. This information is then used to design new compounds with characteristics that may confer a greater or lesser affinity for a receptor.
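At the core of any MD engine is a time-stepping integrator. A minimal sketch of the widely used velocity Verlet scheme, run here on a toy one-particle harmonic well standing in for a real force field (an assumption for illustration, not part of the source):

```python
import numpy as np

def velocity_verlet(x, v, force, mass, dt, n_steps):
    """Integrate Newton's equations with the velocity Verlet scheme.
    x, v: arrays of positions and velocities; force(x) returns the
    force array. Returns the position trajectory."""
    f = force(x)
    traj = [x.copy()]
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * (f / mass) * dt**2   # position update
        f_new = force(x)
        v = v + 0.5 * (f + f_new) / mass * dt        # velocity update
        f = f_new
        traj.append(x.copy())
    return np.array(traj)

# toy system: a particle in a harmonic well, F = -k x
k, m = 1.0, 1.0
traj = velocity_verlet(np.array([1.0]), np.array([0.0]),
                       lambda x: -k * x, m, dt=0.01, n_steps=1000)
```

Production codes add thermostats, barostats, and constraint algorithms on top of this basic loop, but the integration step is essentially the same.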
Drug discovery is a process that involves the use of computational, experimental, and clinical studies to design new therapeutics. [ 21 ] The process is lengthy and costly, yet it remains the most successful route to date for developing treatments and medicines for a variety of diseases. The growing role of computation in drug discovery can be attributed to new technologies that enable computational/theoretical studies, whose data often provide the foundation and rationale for developing new drugs. [ 22 ] Without promising theoretical data, candidate compounds may never be synthesized and tested experimentally. Molecular modeling, molecular docking, and MD simulations are some of the many computational studies that take place during drug discovery, allowing scientists to study the structure and properties of organic and inorganic molecules in depth. By studying these properties, scientists can predict the affinity of molecules in biological and chemical systems and thereby determine how a therapeutic may react with the different chemicals, receptors, and other conditions found in humans or other animals. For example, molecular dynamics is often used throughout drug discovery to identify structural cavities that are important for determining binding affinity. [ 19 ] These data are then compiled and analyzed to determine whether certain therapeutics should be synthesized and tested clinically, or whether further optimization is required to design more effective medicines. [ 23 ]
Computational chemistry can also be applied to the development of safer pesticides and herbicides . Recently, the increasing use of pesticides and herbicides has raised much controversy due to environmental and public-health concerns. Although these chemicals are designed to kill target pests, their effects can often harm other organisms, humans included. [ 24 ] Some types of pesticides and herbicides, such as organophosphates and carbamates , can affect the nervous system in humans, while others have been found to be carcinogenic , to irritate the skin or eyes, or even to affect the hormone or endocrine system. [ 25 ] Furthermore, neonicotinoids are another class of pesticide that recently gained popularity due to their effectiveness at targeting aphids and other pests that hinder agricultural production. [ 26 ] Although there are few human-health concerns associated with neonicotinoids (another reason for their popularity), their increasing use has been linked to Colony Collapse Disorder (CCD), the rapid disappearance of adult bees. [ 27 ] Due to this pattern, the European Union has banned the outdoor use of three neonicotinoid pesticides in an attempt to mitigate CCD. [ 28 ] There are thus multiple issues surrounding the use of these pesticides and herbicides, and the call for safer, more efficient alternatives is being answered with the help of computational/theoretical methods.
Computational and theoretical methods in chemistry and biology continue to push the frontier. Recently, DeepMind , a company specializing in the development of artificial intelligence (AI), created an AI system named AlphaFold . [ 29 ] AlphaFold is the most advanced system to date for accurately predicting a protein's 3D structure from its amino acid sequence. [ 30 ] The protein folding problem first emerged around the 1960s, and ever since, scientists have struggled to devise methods that precisely predict the way a protein will fold based solely on its amino acid sequence. [ 31 ] With recent advances in technology, however, AlphaFold has made a breakthrough on this long-standing problem. By utilizing a database of over 350,000 structures, AlphaFold can determine the shape of a protein in a few minutes with atomic accuracy. [ 32 ] The ability to predict the structure of millions of unknown proteins can help to combat disease, find more effective medicines, and unlock other unknowns that govern life. This technological breakthrough will shape future research and have profound effects on the scientific community. | https://en.wikipedia.org/wiki/Molecular_Operating_Environment
Molecular Phylogenetics and Evolution is a peer-reviewed scientific journal of evolutionary biology and phylogenetics . The journal is edited by E.A. Zimmer .
The journal is indexed in:
| https://en.wikipedia.org/wiki/Molecular_Phylogenetics_and_Evolution
Molecular Physics is a peer-reviewed scientific journal covering research on the interface between chemistry and physics , in particular chemical physics and physical chemistry . It covers both theoretical and experimental molecular science, including electronic structure , molecular dynamics , spectroscopy , reaction kinetics , statistical mechanics , condensed matter and surface science . The journal was established in 1958 and is published by Taylor & Francis . According to the Journal Citation Reports , the journal has a 2021 impact factor of 1.937. [ 1 ]
The current editor-in-chief is Professor George Jackson ( Imperial College London ). A reprint of the first editorial and a full list of editors since its establishment can be found in the issue celebrating 50 years of the journal. [ 2 ] | https://en.wikipedia.org/wiki/Molecular_Physics_(journal) |
The Molecular Query Language ( MQL ) was designed to allow more complex, problem-specific search methods in chemoinformatics . In contrast to the widely used SMARTS queries, MQL provides for the specification of spatial and physicochemical properties of atoms and bonds. Additionally, it can easily be extended to handle non-atom-based graphs, also known as "reduced feature" graphs.
The query language is based on an extended Backus–Naur form (EBNF) using JavaCC . | https://en.wikipedia.org/wiki/Molecular_Query_Language |
Molecular Systems Biology is a peer-reviewed open-access scientific journal covering systems biology at the molecular level (examples include: genomics , proteomics , metabolomics , microbial systems, the integration of cell signaling and regulatory networks ), synthetic biology , and systems medicine. [ 1 ] It was established in 2005 and published by the Nature Publishing Group on behalf of the European Molecular Biology Organization . As of December 2013, it is published by EMBO Press . [ 2 ]
| https://en.wikipedia.org/wiki/Molecular_Systems_Biology
Molecular anatomy is the subspecialty of microscopic anatomy concerned with the identification and description of molecular structures of cells , tissues , and organs in an organism. [ 1 ] [ 2 ]
| https://en.wikipedia.org/wiki/Molecular_anatomy
A molecular assembler , as defined by K. Eric Drexler , is a "proposed device able to guide chemical reactions by positioning reactive molecules with atomic precision". A molecular assembler is a kind of molecular machine . Some biological molecules such as ribosomes fit this definition. This is because they receive instructions from messenger RNA and then assemble specific sequences of amino acids to construct protein molecules. However, the term "molecular assembler" usually refers to theoretical human-made devices.
Beginning in 2007, the British Engineering and Physical Sciences Research Council has funded development of ribosome -like molecular assemblers. Clearly, molecular assemblers are possible in this limited sense. A technology roadmap project, led by the Battelle Memorial Institute and hosted by several U.S. National Laboratories has explored a range of atomically precise fabrication technologies, including both early-generation and longer-term prospects for programmable molecular assembly; the report was released in December, 2007. [ 1 ] In 2008, the Engineering and Physical Sciences Research Council provided funding of £1.5 million over six years (£1,942,235.57, $2,693,808.00 in 2021 [ 2 ] ) for research working towards mechanized mechanosynthesis , in partnership with the Institute for Molecular Manufacturing, amongst others. [ 3 ]
Likewise, the term "molecular assembler" has been used in science fiction and popular culture to refer to a wide range of fantastic atom-manipulating nanomachines. Much of the controversy regarding "molecular assemblers" results from the confusion in the use of the name for both technical concepts and popular fantasies. In 1992, Drexler introduced the related but better-understood term "molecular manufacturing", which he defined as the programmed " chemical synthesis of complex structures by mechanically positioning reactive molecules, not by manipulating individual atoms". [ 4 ]
This article mostly discusses "molecular assemblers" in the popular sense. These include hypothetical machines that manipulate individual atoms and machines with organism-like self-replicating abilities, mobility, ability to consume food, and so forth. These are quite different from devices that merely (as defined above) "guide chemical reactions by positioning reactive molecules with atomic precision".
Because synthetic molecular assemblers have never been constructed and because of the confusion regarding the meaning of the term, there has been much controversy as to whether "molecular assemblers" are possible or simply science fiction. Confusion and controversy also stem from their classification as nanotechnology , an active area of laboratory research that has already been applied to the production of real products; until recently, however, there had been no research efforts into the actual construction of "molecular assemblers".
Nonetheless, a 2013 paper by David Leigh 's group, published in the journal Science , details a new method of synthesizing a peptide in a sequence-specific manner by using an artificial molecular machine that is guided by a molecular strand. [ 5 ] This functions in the same way as a ribosome building proteins by assembling amino acids according to a messenger RNA blueprint. The structure of the machine is based on a rotaxane , which is a molecular ring sliding along a molecular axle. The ring carries a thiolate group, which removes amino acids in sequence from the axle, transferring them to a peptide assembly site. In 2018, the same group published a more advanced version of this concept in which the molecular ring shuttles along a polymeric track to assemble an oligopeptide that can fold into an α-helix that can perform the enantioselective epoxidation of a chalcone derivative (in a way reminiscent to the ribosome assembling an enzyme ). [ 6 ] In another paper published in Science in March 2015, chemists at the University of Illinois report a platform that automates the synthesis of 14 classes of small molecules , with thousands of compatible building blocks. [ 7 ]
In 2017, David Leigh 's group reported a molecular robot that could be programmed to construct any one of four different stereoisomers of a molecular product by using a nanomechanical robotic arm to move a molecular substrate between different reactive sites of an artificial molecular machine. [ 8 ] An accompanying News and Views article, titled 'A molecular assembler', outlined the operation of the molecular robot as effectively a prototypical molecular assembler. [ 9 ]
A nanofactory is a proposed system in which nanomachines (resembling molecular assemblers, or industrial robot arms) would combine reactive molecules via mechanosynthesis to build larger atomically precise parts. These, in turn, would be assembled by positioning mechanisms of assorted sizes to build macroscopic (visible) but still atomically-precise products.
A typical nanofactory would fit in a desktop box, in the vision of K. Eric Drexler published in Nanosystems: Molecular Machinery, Manufacturing and Computation (1992), a notable work of " exploratory engineering ". During the 1990s, others have extended the nanofactory concept, including an analysis of nanofactory convergent assembly by Ralph Merkle , a systems design of a replicating nanofactory architecture by J. Storrs Hall , Forrest Bishop's "Universal Assembler", the patented exponential assembly process by Zyvex , and a top-level systems design for a 'primitive nanofactory' by Chris Phoenix (director of research at the Center for Responsible Nanotechnology). All of these nanofactory designs (and more) are summarized in Chapter 4 of Kinematic Self-Replicating Machines (2004) by Robert Freitas and Ralph Merkle. The Nanofactory Collaboration, [ 10 ] founded by Freitas and Merkle in 2000, is a focused, ongoing effort involving 23 researchers from 10 organizations and 4 countries that is developing a practical research agenda [ 11 ] specifically aimed at positionally-controlled diamond mechanosynthesis and diamondoid nanofactory development.
In 2005, an animated short film of the nanofactory concept was produced by John Burch, in collaboration with Drexler. Such visions have been the subject of much debate, on several intellectual levels. No one has discovered an insurmountable problem with the underlying theories and no one has proved that the theories can be translated into practice. However, the debate continues, with some of it being summarized in the molecular nanotechnology article.
If nanofactories could be built, severe disruption to the world economy would be one of many possible negative impacts, though it could be argued that this disruption would have little negative effect, if everyone had such nanofactories. Great benefits also would be anticipated. Various works of science fiction have explored these and similar concepts. The potential for such devices was part of the mandate of a major UK study led by mechanical engineering professor Dame Ann Dowling .
"Molecular assemblers" have been confused with self-replicating machines. To produce a practical quantity of a desired product, the nanoscale size of a typical science fiction universal molecular assembler requires an extremely large number of such devices. However, a single such theoretical molecular assembler might be programmed to self-replicate , constructing many copies of itself. This would allow an exponential rate of production. Then, after sufficient quantities of the molecular assemblers were available, they would then be re-programmed for production of the desired product. However, if self-replication of molecular assemblers were not restrained then it might lead to competition with naturally occurring organisms. This has been called ecophagy or the grey goo problem. [ 12 ]
One method of building molecular assemblers is to mimic evolutionary processes employed by biological systems. Biological evolution proceeds by random variation combined with culling of the less-successful variants and reproduction of the more-successful variants. Production of complex molecular assemblers might be evolved from simpler systems since "A complex system that works is invariably found to have evolved from a simple system that worked. . . . A complex system designed from scratch never works and can not be patched up to make it work. You have to start over, beginning with a system that works." [ 13 ] However, most published safety guidelines include "recommendations against developing ... replicator designs which permit surviving mutation or undergoing evolution". [ 14 ]
Most assembler designs keep the "source code" external to the physical assembler. At each step of a manufacturing process, that step is read from an ordinary computer file and "broadcast" to all the assemblers. If any assembler gets out of range of that computer, or when the link between that computer and the assemblers is broken, or when that computer is unplugged, the assemblers stop replicating. Such a "broadcast architecture" is one of the safety features recommended by the "Foresight Guidelines on Molecular Nanotechnology", and a map of the 137-dimensional replicator design space [ 15 ] recently published by Freitas and Merkle provides numerous practical methods by which replicators can be safely controlled by good design.
One of the most outspoken critics of some concepts of "molecular assemblers" was Professor Richard Smalley (1943–2005) who won the Nobel prize for his contributions to the field of nanotechnology . Smalley believed that such assemblers were not physically possible and introduced scientific objections to them. His two principal technical objections were termed the "fat fingers problem" and the "sticky fingers problem". He believed these would exclude the possibility of "molecular assemblers" that worked by precision picking and placing of individual atoms. Drexler and coworkers responded to these two issues [ 16 ] in a 2001 publication.
Smalley also believed that Drexler's speculations about apocalyptic dangers of self-replicating machines that have been equated with "molecular assemblers" would threaten the public support for development of nanotechnology. To address the debate between Drexler and Smalley regarding molecular assemblers Chemical & Engineering News published a point-counterpoint consisting of an exchange of letters that addressed the issues. [ 4 ]
Speculation on the power of systems that have been called "molecular assemblers" has sparked a wider political discussion on the implication of nanotechnology. This is in part due to the fact that nanotechnology is a very broad term and could include "molecular assemblers". Discussion of the possible implications of fantastic molecular assemblers has prompted calls for regulation of current and future nanotechnology. There are very real concerns with the potential health and ecological impact of nanotechnology that is being integrated in manufactured products. Greenpeace for instance commissioned a report concerning nanotechnology in which they express concern into the toxicity of nanomaterials that have been introduced in the environment. [ 17 ] However, it makes only passing references to "assembler" technology. The UK Royal Society and Royal Academy of Engineering also commissioned a report entitled "Nanoscience and nanotechnologies: opportunities and uncertainties" [ 18 ] regarding the larger social and ecological implications of nanotechnology. This report does not discuss the threat posed by potential so-called "molecular assemblers".
In 2006, the U.S. National Academy of Sciences released the report of a study of molecular manufacturing (not molecular assemblers per se) as part of a longer report, A Matter of Size: Triennial Review of the National Nanotechnology Initiative [ 19 ] The study committee reviewed the technical content of Nanosystems , and in its conclusion states that no current theoretical analysis can be considered definitive regarding several questions of potential system performance, and that optimal paths for implementing high-performance systems cannot be predicted with confidence. It recommends funding for experimental research to produce experimental demonstrations in this area:
"Although theoretical calculations can be made today, the eventually attainable range of chemical reaction cycles, error rates, speed of operation, and thermodynamic efficiencies of such bottom-up manufacturing systems cannot be reliably predicted at this time. Thus, the eventually attainable perfection and complexity of manufactured products, while they can be calculated in theory, cannot be predicted with confidence. Finally, the optimum research paths that might lead to systems which greatly exceed the thermodynamic efficiencies and other capabilities of biological systems cannot be reliably predicted at this time. Research funding that is based on the ability of investigators to produce experimental demonstrations that link to abstract models and guide long-term vision is most appropriate to achieve this goal."
One potential scenario that has been envisioned is out-of-control self-replicating molecular assemblers in the form of gray goo which consumes carbon to continue its replication. If unchecked, such mechanical replication could potentially consume whole ecoregions or the whole Earth ( ecophagy ), or it could simply outcompete natural lifeforms for necessary resources such as carbon , ATP , or UV light (which some nanomotor examples run on). However, the ecophagy and 'grey goo' scenarios, like synthetic molecular assemblers, are based upon still-hypothetical technologies that have not yet been demonstrated experimentally. | https://en.wikipedia.org/wiki/Molecular_assembler |
In chemistry , molecular autoionization (or self-ionization ) is a chemical reaction between molecules of the same substance to produce ions . If a pure liquid partially dissociates into ions, it is said to be self-ionizing. [ 1 ] : 163 In most cases the oxidation number on all atoms in such a reaction remains unchanged. Such autoionization can be protic ( H + transfer), or non-protic .
Protic solvents often undergo some autoionization (in this case autoprotolysis ); familiar examples include water (2 H2O ⇌ H3O+ + OH−) and liquid ammonia (2 NH3 ⇌ NH4+ + NH2−).
Non-protic solvents can also autoionize; examples include dinitrogen tetroxide (N2O4 ⇌ NO+ + NO3−) and bromine trifluoride (2 BrF3 ⇌ BrF2+ + BrF4−). These solvents all possess atoms with odd atomic numbers, either nitrogen or a halogen. Such atoms enable the formation of singly charged, nonradical ions (which must contain at least one odd-atomic-number atom), which are the most favorable autoionization products. Protic solvents, mentioned previously, use hydrogen for this role. Autoionization would be much less favorable in solvents such as sulfur dioxide or carbon dioxide, which have only even-atomic-number atoms.
Autoionization is not restricted to neat liquids or solids. Solutions of metal complexes exhibit this property. For example, compounds of the type FeX 2 ( terpyridine ) (where X = Cl or Br) are unstable with respect to autoionization forming [Fe(terpyridine) 2 ] 2+ [FeX 4 ] 2− . [ 3 ]
| https://en.wikipedia.org/wiki/Molecular_autoionization
Molecular beacons , or molecular beacon probes , are oligonucleotide hybridization probes that can report the presence of specific nucleic acids in homogenous solutions. Molecular beacons are hairpin -shaped molecules with an internally quenched fluorophore whose fluorescence is restored when they bind to a target nucleic acid sequence. This is a novel non- radioactive method for detecting specific sequences of nucleic acids. They are useful in situations where it is either not possible or desirable to isolate the probe-target hybrids from an excess of the hybridization probes.
A typical molecular beacon probe is 25 nucleotides long. The middle 15 nucleotides are complementary to the target DNA or RNA and do not base-pair with one another, while the five nucleotides at each terminus are complementary to each other rather than to the target. A typical molecular beacon structure can be divided into four parts: 1) the loop, an 18–30 nucleotide region of the molecular beacon that is complementary to the target sequence; 2) the stem, formed by the attachment to both termini of the loop of two short (5 to 7 nucleotide residues) oligonucleotides that are complementary to each other; 3) the 5' fluorophore, a fluorescent dye covalently attached to the 5' end of the molecular beacon; and 4) the 3' quencher, a non-fluorescent quenching dye covalently attached to the 3' end. When the beacon is in the closed hairpin conformation, the quencher resides in proximity to the fluorophore, quenching the latter's fluorescent emission.
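Whether a candidate oligo can close into the hairpin is simply a question of whether its two arms are reverse complements. A minimal sketch (the 25-mer below is a made-up illustrative sequence, not a published beacon):

```python
def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def can_form_stem(beacon, stem_len=5):
    """True if the first and last stem_len bases are reverse
    complements, so the probe can close into a hairpin."""
    return beacon[:stem_len] == revcomp(beacon[-stem_len:])

# hypothetical 25-mer: 5-base arms (CGCTC / GAGCG) flanking a 15-base loop
probe = "CGCTC" + "ATGGCTAACGTTCAA" + "GAGCG"
print(can_form_stem(probe))  # True
```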
If the nucleic acid to be detected is complementary to the strand in the loop, the event of hybridization occurs. The duplex formed between the nucleic acid and the loop is more stable than that of the stem because the former duplex involves more base pairs. This causes the separation of the stem and hence of the fluorophore and the quencher. Once the fluorophore is no longer next to the quencher, illumination of the hybrid with light results in the fluorescent emission. The presence of the emission reports that the event of hybridization has occurred and hence the target nucleic acid sequence is present in the test sample.
Fluorogenic signaling oligonucleotide probes were reported for use to detect and isolate cells expressing one or more desired genes, including the production of multigene stable cell lines expressing heteromultimeric epithelial sodium channel (αβγ-ENaC), sodium voltage-gated ion channel 1.7 (NaV1.7-αβ1β2), four unique γ-aminobutyric acid A (GABAA) receptor ion channel subunit combinations α1β3γ2s, α2β3γ2s, α3β3γ2s and α5β3γ2s, cystic fibrosis conductance regulator (CFTR), CFTR-Δ508 and two G-protein coupled receptors (GPCRs). [ 1 ]
Molecular beacons are synthetic oligonucleotides whose preparation is well documented. In addition to the conventional set of nucleoside phosphoramidites , the synthesis also requires a solid support derivatized with a quencher and a phosphoramidite building block designed for the attachment of a protected fluorescent dye.
The first use of the term molecular beacons, synthesis and demonstration of function was in 1996. [ 2 ] | https://en.wikipedia.org/wiki/Molecular_beacon |
A molecular beam is produced by allowing a gas at higher pressure to expand through a small orifice into a chamber at lower pressure to form a beam of particles ( atoms , free radicals , molecules or ions ) moving at approximately equal velocities , with very few collisions between the particles. Molecular beams are useful for fabricating thin films in molecular beam epitaxy and artificial structures such as quantum wells , quantum wires , and quantum dots . Molecular beams have also been applied as crossed molecular beams . The molecules in the molecular beam can be manipulated by electrical fields and magnetic fields . [ 1 ] Molecules can be decelerated in a Stark decelerator or in a Zeeman slower .
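As a worked illustration (not from the source): for a supersonic expansion of an ideal gas from a reservoir at temperature $T_0$, essentially all of the enthalpy is converted into directed motion, giving a terminal beam velocity

$$v_\infty \approx \sqrt{\frac{2\gamma}{\gamma-1}\,\frac{k_B T_0}{m}},$$

so for a monatomic carrier gas ($\gamma = 5/3$) this reduces to $v_\infty = \sqrt{5 k_B T_0/m}$; a helium beam ($m \approx 6.6\times10^{-27}$ kg) expanded from a 300 K reservoir reaches $v_\infty \approx 1.8\times10^{3}$ m/s.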
The first atomic beam experiments were carried out by Louis Dunoyer de Segonzac in 1911; these were simple experiments confirming that atoms travel in straight lines when not acted on by external forces. [ 2 ]
In 1921, Hartmut Kallmann and Fritz Reiche wrote [ 3 ] about the deflection of beams of polar molecules in an inhomogeneous electric field, with an ultimate aim of measuring their dipole moments .
Seeing the page proofs for the Kallmann and Reiche work prompted Otto Stern , at the University of Frankfurt am Main and later the University of Hamburg, to rush publication of his work with Walther Gerlach on what later became known as the Stern–Gerlach experiment . (Stern's paper references the preprint, but the Kallmann and Reiche work would go largely unnoticed. [ 4 ] )
When the 1922 Stern–Gerlach paper appeared, it caused a sensation: they claimed to have experimentally demonstrated "space quantization", clear evidence of quantum effects at a time when classical models were still considered viable. [ 4 ] : 50 The initial quantum explanation of the measurement, as an observation of orbital angular momentum, was not correct. Five years of intense work on quantum theory were needed before it was realized that the experiment was in fact the first demonstration of quantum electron spin . [ 2 ] Stern's group went on to create pioneering experiments with atomic beams, and later with molecular beams. The advances of Stern and his collaborators led to decisive discoveries, including the discovery of space quantization ; de Broglie matter waves ; the anomalous magnetic moments of the proton and neutron ; the recoil of an atom on emission of a photon ; and the limitation of scattering cross-sections for molecular collisions imposed by the uncertainty principle . [ 2 ]
The first to report on the relationship between dipole moments and deflection in a molecular beam (using binary salts such as KCl ) was Erwin Wrede in 1927. [ 5 ] [ 4 ]
In 1939 Isidor Rabi invented a molecular beam magnetic resonance method in which two magnets placed one after the other create an inhomogeneous magnetic field. [ 6 ] The method was used to measure the magnetic moment of several lithium isotopes with molecular beams of LiCl , LiF and dilithium . [ 7 ] [ 8 ] This method is a predecessor of NMR . The invention of the maser in 1957 by James P. Gordon , Herbert J. Zeiger and Charles H. Townes was made possible by a molecular beam of ammonia and a special electrostatic quadrupole focuser. [ 9 ]
The study of molecular beams led to the development of molecular-beam epitaxy in the 1960s. | https://en.wikipedia.org/wiki/Molecular_beam
Molecular binding is an attractive interaction between two molecules that results in a stable association in which the molecules are in close proximity to each other. The association can arise from the sharing of electrons between atoms, but it often, though not always, rests on weaker forces rather than formal chemical bonding .
In some cases, the associations can be quite strong—for example, the protein streptavidin and the vitamin biotin have a dissociation constant (reflecting the ratio between bound and free biotin) on the order of 10 −14 —and so the reactions are effectively irreversible. The result of molecular binding is sometimes the formation of a molecular complex in which the attractive forces holding the components together are generally non-covalent , and thus are normally energetically weaker than covalent bonds .
Molecular binding occurs in biological complexes (e.g., between pairs or sets of proteins, or between a protein and a small molecule ligand it binds) and also in abiologic chemical systems, e.g. as in cases of coordination polymers and coordination networks such as metal-organic frameworks .
Molecular binding can be classified into the following types: [ 1 ] non-covalent (no chemical bonds are formed between the interacting molecules, so the association is fully reversible); reversible covalent (a chemical bond is formed, but the free-energy difference between reactants and product is small enough that the bond can be undone); and irreversible covalent (a chemical bond is formed whose product is so thermodynamically stable that the reaction is effectively irreversible).
Bound molecules are sometimes called a "molecular complex"—the term generally refers to non-covalent associations. [ 2 ] Non-covalent interactions can effectively become irreversible; for example, tight binding inhibitors of enzymes can have kinetics that closely resemble irreversible covalent inhibitors. Among the tightest known protein–protein complexes is that between the enzyme angiogenin and ribonuclease inhibitor ; the dissociation constant for the human proteins is 5x10 −16 mol/L. [ 3 ] [ 4 ] Another biological example is the binding protein streptavidin , which has extraordinarily high affinity for biotin (vitamin B7/H, dissociation constant , K d ≈10 −14 mol/L). [ 5 ] In such cases, if the reaction conditions change (e.g., the protein moves into an environment where biotin concentrations are very low, or pH or ionic conditions are altered), the reverse reaction can be promoted. For example, the biotin-streptavidin interaction can be broken by incubating the complex in water at 70 °C, without damaging either molecule. [ 6 ] An example of change in local concentration causing dissociation can be found in the Bohr effect , which describes the dissociation of ligands from hemoglobin in the lung versus peripheral tissues. [ 5 ]
Some protein–protein interactions result in covalent bonding , [ 7 ] and some pharmaceuticals are irreversible antagonists that may or may not be covalently bound. [ 8 ] Drug discovery has been through periods when drug candidates that bind covalently to their targets are attractive and then are avoided; the success of bortezomib made boron -based covalently binding candidates more attractive in the late 2000s. [ 9 ] [ 10 ]
In order for the complex to be stable, the free energy of the complex must, by definition, be lower than that of the solvent-separated molecules. The binding may be primarily entropy -driven (the release of ordered solvent molecules around the isolated molecule results in a net increase in the entropy of the system). When the solvent is water, this is known as the hydrophobic effect . Alternatively, the binding may be enthalpy -driven, where non-covalent attractive forces such as electrostatic attraction, hydrogen bonding , and van der Waals / London dispersion forces are primarily responsible for the formation of a stable complex. [ 11 ] Complexes that have a strong entropy contribution to formation tend to have weak enthalpy contributions. Conversely, complexes that have a strong enthalpy component tend to have a weak entropy component. This phenomenon is known as enthalpy-entropy compensation . [ 12 ]
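As a worked illustration of the energetics (not from the source), the binding constant introduced below sets the standard free energy of binding through $\Delta G^\circ = -RT\ln K_A$, partitioned between enthalpic and entropic contributions as $\Delta G^\circ = \Delta H^\circ - T\Delta S^\circ$. For the streptavidin–biotin pair quoted above ($K_d \approx 10^{-14}$ mol/L, so $K_A \approx 10^{14}$ L/mol) at $T = 298$ K:

$$\Delta G^\circ \approx -(8.314\ \mathrm{J\,mol^{-1}\,K^{-1}})(298\ \mathrm{K})\ln(10^{14}) \approx -80\ \mathrm{kJ/mol}.$$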
The strength of binding between the components of a molecular complex is measured quantitatively by the binding constant (K A ), defined as the ratio of the concentration of the complex to the product of the concentrations of the isolated components at equilibrium, in molar units: for the association A + B ⇌ AB,

$$K_A = \frac{[\mathrm{AB}]}{[\mathrm{A}][\mathrm{B}]},$$

with the corresponding dissociation constant $K_d = 1/K_A$.
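For a simple 1:1 association, the equilibrium complex concentration follows from this definition by solving a quadratic. A minimal Python sketch (the concentrations and Kd in the example are arbitrary illustrative values):

```python
import math

def complex_concentration(a_total, b_total, kd):
    """Equilibrium concentration of the 1:1 complex AB for
    A + B <-> AB with dissociation constant kd = 1/K_A.
    All quantities in mol/L. Solves the quadratic
    [AB]^2 - (A0 + B0 + Kd)[AB] + A0*B0 = 0 (physical root)."""
    s = a_total + b_total + kd
    return (s - math.sqrt(s * s - 4.0 * a_total * b_total)) / 2.0

# e.g. 1 uM receptor, 1 uM ligand, Kd = 100 nM
ab = complex_concentration(1e-6, 1e-6, 1e-7)
print(f"fraction of A bound: {ab / 1e-6:.2f}")  # ~0.73
```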
When the molecular complex prevents the normal functioning of an enzyme , the binding constant is also referred to as inhibition constant (K I ).
Molecules that can participate in molecular binding include proteins , nucleic acids , carbohydrates , lipids , and small organic molecules such as drugs . Hence the types of complexes that form as a result of molecular binding include: protein–protein, protein–DNA, protein–hormone, protein–drug, antibody–antigen, receptor–ligand, and enzyme–substrate complexes.
Proteins that form stable complexes with other molecules are often referred to as receptors while their binding partners are called ligands . [ 16 ] | https://en.wikipedia.org/wiki/Molecular_binding |
Molecular biology / m ə ˈ l ɛ k j ʊ l ər / is a branch of biology that seeks to understand the molecular basis of biological activity in and between cells , including biomolecular synthesis, modification, mechanisms, and interactions. [ 1 ] [ 2 ] [ 3 ]
Though cells and other microscopic structures had been observed in living organisms as early as the 18th century, a detailed understanding of the mechanisms and interactions governing their behavior did not emerge until the 20th century, when technologies used in physics and chemistry had advanced sufficiently to permit their application in the biological sciences. The term 'molecular biology' was first used in 1945 by the English physicist William Astbury , who described it as an approach focused on discerning the underpinnings of biological phenomena—i.e. uncovering the physical and chemical structures and properties of biological molecules, as well as their interactions with other molecules and how these interactions explain observations of so-called classical biology, which instead studies biological processes at larger scales and higher levels of organization. [ 4 ] In 1953, Francis Crick , James Watson , Rosalind Franklin , and their colleagues at the Medical Research Council Unit, Cavendish Laboratory , were the first to describe the double helix model for the chemical structure of deoxyribonucleic acid (DNA), which is often considered a landmark event for the nascent field because it provided a physico-chemical basis by which to understand the previously nebulous idea of nucleic acids as the primary substance of biological inheritance. They proposed this structure based on previous research done by Franklin, which was conveyed to them by Maurice Wilkins and Max Perutz . [ 5 ] Their work led to the discovery of DNA in other microorganisms, plants, and animals. [ 6 ]
The field of molecular biology includes techniques which enable scientists to learn about molecular processes. [ 7 ] These techniques are used to efficiently target new drugs, diagnose disease, and better understand cell physiology. [ 8 ] Some clinical research and medical therapies arising from molecular biology are covered under gene therapy , whereas the use of molecular biology or molecular cell biology in medicine is now referred to as molecular medicine .
Molecular biology sits at the intersection of biochemistry and genetics ; as these scientific disciplines emerged and evolved in the 20th century, it became clear that they both sought to determine the molecular mechanisms which underlie vital cellular functions. [ 9 ] [ 10 ] Advances in molecular biology have been closely related to the development of new technologies and their optimization. [ 11 ] The field has been shaped by the work of many scientists, and its history is thus best understood through these scientists and their experiments.
The field of genetics arose from attempts to understand the set of rules underlying reproduction and heredity , and the nature of the hypothetical units of heredity known as genes . Gregor Mendel pioneered this work in 1866, when he first described the laws of inheritance he observed in his studies of mating crosses in pea plants. [ 12 ] One such law of genetic inheritance is the law of segregation , which states that diploid individuals with two alleles for a particular gene will pass one of these alleles to their offspring. [ 13 ] Because of his critical work, the study of genetic inheritance is commonly referred to as Mendelian genetics . [ 14 ]
A major milestone in molecular biology was the discovery of the structure of DNA . This work began in 1869 with Friedrich Miescher , a Swiss biochemist who first proposed a structure called nuclein , which we now know to be deoxyribonucleic acid (DNA). [ 15 ] He discovered this unique substance by studying the components of pus-filled bandages and noting the unique properties of the "phosphorus-containing substances". [ 16 ] Another notable contributor to the DNA model was Phoebus Levene , who proposed the "polynucleotide model" of DNA in 1919 as a result of his biochemical experiments on yeast. [ 17 ] In 1950, Erwin Chargaff expanded on the work of Levene and elucidated a few critical properties of nucleic acids: first, the sequence of nucleic acids varies across species. [ 18 ] Second, the total concentration of purines (adenine and guanine) is always equal to the total concentration of pyrimidines (cytosine and thymine). [ 15 ] This is now known as Chargaff's rule. In 1953, James Watson and Francis Crick published the double-helical structure of DNA, [ 19 ] based on the X-ray crystallography work done by Rosalind Franklin , which was conveyed to them by Maurice Wilkins and Max Perutz . [ 5 ] Watson and Crick described the structure of DNA and conjectured about the implications of this unique structure for possible mechanisms of DNA replication. [ 19 ] Watson and Crick were awarded the Nobel Prize in Physiology or Medicine in 1962, along with Wilkins, for proposing a model of the structure of DNA. [ 6 ]
In 1961, it was demonstrated that when a gene encodes a protein , three sequential bases of a gene's DNA specify each successive amino acid of the protein. [ 20 ] Thus the genetic code is a triplet code, where each triplet (called a codon ) specifies a particular amino acid. Furthermore, it was shown that the codons do not overlap with each other in the DNA sequence encoding a protein, and that each sequence is read from a fixed starting point.
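The triplet, non-overlapping, fixed-start reading of the code is easy to demonstrate in a few lines. A minimal sketch using a small excerpt of the standard codon table (only a handful of codons are included, for brevity):

```python
# small excerpt of the standard genetic code (codon -> amino acid)
CODON_TABLE = {
    "ATG": "Met", "TGG": "Trp", "TTT": "Phe", "GGC": "Gly",
    "AAA": "Lys", "GAT": "Asp", "TAA": "Stop", "TAG": "Stop",
}

def translate(dna):
    """Read a coding sequence as non-overlapping triplets from a
    fixed starting point, stopping at a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):   # non-overlapping codons
        aa = CODON_TABLE.get(dna[i:i+3], "???")
        if aa == "Stop":
            break
        protein.append(aa)
    return "-".join(protein)

print(translate("ATGTTTGGCAAATAA"))  # Met-Phe-Gly-Lys
```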
During 1962–1964, through the use of conditional lethal mutants of a bacterial virus, [ 21 ] fundamental advances were made in our understanding of the functions and interactions of the proteins employed in the machinery of DNA replication , DNA repair , DNA recombination , and in the assembly of molecular structures. [ 22 ]
In 1928, Frederick Griffith encountered a virulence property in pneumococcus bacteria that was killing laboratory mice. According to Mendelian thinking, prevalent at that time, gene transfer could occur only from parent to daughter cells. Griffith advanced another theory, proposing that gene transfer can also occur between members of the same generation, a process known as horizontal gene transfer (HGT). This phenomenon is now referred to as genetic transformation. [ 23 ]
Griffith's experiment addressed the pneumococcus bacterium, which had two different strains: one virulent and smooth, and one avirulent and rough. The smooth strain had a glistening appearance owing to the presence of a capsule made of a specific polysaccharide, a polymer of glucose and glucuronic acid. Because of this polysaccharide layer, a host's immune system cannot recognize the bacterium, and it kills the host. The other, avirulent, rough strain lacks this polysaccharide capsule and has a dull, rough appearance.
The presence or absence of a capsule in a strain is known to be genetically determined. Smooth and rough strains occur in several different types, such as S-I, S-II, S-III, etc. and R-I, R-II, R-III, etc., respectively. All these subtypes of S and R bacteria differ from each other in the antigen type they produce. [ 6 ]
The Avery–MacLeod–McCarty experiment was a landmark study conducted in 1944 that demonstrated that DNA, not protein as previously thought, carries genetic information in bacteria. Oswald Avery , Colin Munro MacLeod , and Maclyn McCarty used an extract from a strain of pneumococcus that could cause pneumonia in mice. They showed that genetic transformation of harmless bacteria into virulent ones could be accomplished by treating them with purified DNA from the extract, and that this transformation was lost when the DNA in the extract was digested with DNase . This provided strong evidence that DNA was the genetic material, challenging the prevailing belief that proteins were responsible, and it laid the basis for the subsequent discovery of DNA's structure by Watson and Crick.
Confirmation that DNA is the genetic material and the cause of infection came from the Hershey–Chase experiment , which used E. coli and a bacteriophage. The experiment is also known as the blender experiment, as a kitchen blender served as a major piece of apparatus. Alfred Hershey and Martha Chase demonstrated that the DNA injected by a phage particle into a bacterium contains all the information required to synthesize progeny phage particles. They used radioactivity to tag the bacteriophage's protein coat with radioactive sulphur and its DNA with radioactive phosphorus, in two separate test tubes. After the bacteriophage and E. coli were mixed in a test tube, an incubation period followed in which the phage injected its genetic material into the E. coli cells. The mixture was then blended or agitated, separating the phage coats from the E. coli cells, and the whole mixture was centrifuged; the pellet, which contained the E. coli cells, was examined and the supernatant was discarded. The E. coli cells showed radioactive phosphorus, indicating that the transferred material was DNA, not the protein coat.
The injected DNA becomes associated with the DNA of E. coli , and radioactivity is seen only with the bacteriophage's DNA. This DNA can be passed to the next generation, an observation that gave rise to the concept of transduction . Transduction is a process in which bacteriophages carry fragments of bacterial DNA and pass them on to the next cells they infect; it is also a type of horizontal gene transfer. [ 6 ]
The Meselson-Stahl experiment was a landmark experiment in molecular biology that provided evidence for the semiconservative replication of DNA. Conducted in 1958 by Matthew Meselson and Franklin Stahl , the experiment involved growing E. coli bacteria in a medium containing the heavy isotope of nitrogen ( 15 N) for several generations. This caused all of the newly synthesized bacterial DNA to incorporate the heavy isotope.
After allowing the bacteria to replicate in a medium containing normal nitrogen ( 14 N), samples were taken at various time points. These samples were then subjected to centrifugation in a density gradient, which separated the DNA molecules based on their density.
The results showed that after one generation of replication in the 14 N medium, the DNA formed a band of intermediate density between that of pure 15 N DNA and pure 14 N DNA. This supported the semiconservative model of DNA replication proposed by Watson and Crick, in which each strand of the parental DNA molecule serves as a template for the synthesis of a new complementary strand, resulting in two daughter DNA molecules, each consisting of one parental and one newly synthesized strand.
The Meselson-Stahl experiment provided compelling evidence for the semiconservative replication of DNA, which is fundamental to the understanding of genetics and molecular biology.
In the early 2020s, molecular biology entered a golden age defined by both vertical and horizontal technical development. Vertically, novel technologies are allowing for real-time monitoring of biological processes at the atomic level. [ 24 ] Molecular biologists today have access to increasingly affordable sequencing data at ever greater depths, facilitating the development of novel genetic manipulation methods in new non-model organisms. Likewise, synthetic molecular biologists will drive the industrial production of small molecules and macromolecules through the introduction of exogenous metabolic pathways in various prokaryotic and eukaryotic cell lines. [ 25 ]
Horizontally, sequencing data is becoming more affordable and used in many different scientific fields. This will drive the development of industries in developing nations and increase accessibility to individual researchers. Likewise, CRISPR-Cas9 gene editing experiments can now be conceived and implemented by individuals for under $10,000 in novel organisms, which will drive the development of industrial and medical applications. [ 26 ]
The following list describes a viewpoint on the interdisciplinary relationships between molecular biology and other related fields. [ 27 ]
While researchers practice techniques specific to molecular biology, it is common to combine these with methods from genetics and biochemistry . Much of molecular biology is quantitative, and recently a significant amount of work has been done using computer science techniques such as bioinformatics and computational biology . Molecular genetics , the study of gene structure and function, has been among the most prominent sub-fields of molecular biology since the early 2000s. Other branches of biology are informed by molecular biology, by either directly studying the interactions of molecules in their own right such as in cell biology and developmental biology , or indirectly, where molecular techniques are used to infer historical attributes of populations or species , as in fields in evolutionary biology such as population genetics and phylogenetics . There is also a long tradition of studying biomolecules "from the ground up", or molecularly, in biophysics . [ 30 ]
Molecular cloning is used to isolate and then transfer a DNA sequence of interest into a plasmid vector. [ 31 ] This recombinant DNA technology was first developed in the early 1970s. [ 32 ] In this technique, a DNA sequence coding for a protein of interest is cloned using polymerase chain reaction (PCR), and/or restriction enzymes , into a plasmid ( expression vector ). The plasmid vector usually has at least three distinctive features: an origin of replication, a multiple cloning site (MCS), and a selective marker (usually antibiotic resistance ). Additionally, upstream of the MCS are the promoter region and the transcription start site, which regulate the expression of the cloned gene.
This plasmid can be inserted into either bacterial or animal cells. Introducing DNA into bacterial cells can be done by transformation via uptake of naked DNA, conjugation via cell-cell contact or by transduction via viral vector. Introducing DNA into eukaryotic cells, such as animal cells, by physical or chemical means is called transfection . Several different transfection techniques are available, such as calcium phosphate transfection, electroporation , microinjection and liposome transfection . The plasmid may be integrated into the genome , resulting in a stable transfection, or may remain independent of the genome and expressed temporarily, called a transient transfection. [ 33 ] [ 34 ]
DNA coding for a protein of interest is now inside a cell, and the protein can now be expressed. A variety of systems, such as inducible promoters and specific cell-signaling factors, are available to help express the protein of interest at high levels. Large quantities of the protein can then be extracted from the bacterial or eukaryotic cell. The protein can be tested for enzymatic activity under a variety of conditions, the protein may be crystallized so its tertiary structure can be studied, or, in the pharmaceutical industry, the activity of new drugs against the protein can be studied. [ 35 ]
Polymerase chain reaction (PCR) is an extremely versatile technique for copying DNA. In brief, PCR allows a specific DNA sequence to be copied or modified in predetermined ways. The reaction is extremely powerful and under perfect conditions could amplify one DNA molecule to become 1.07 billion molecules in less than two hours. PCR has many applications, including the study of gene expression, the detection of pathogenic microorganisms, the detection of genetic mutations, and the introduction of mutations to DNA. [ 36 ] The PCR technique can be used to introduce restriction enzyme sites to the ends of DNA molecules, or to mutate particular bases of DNA; the latter method is referred to as site-directed mutagenesis . PCR can also be used to determine whether a particular DNA fragment is found in a cDNA library . PCR has many variations, like reverse transcription PCR ( RT-PCR ) for amplification of RNA, and, more recently, quantitative PCR , which allows for quantitative measurement of DNA or RNA molecules. [ 37 ] [ 38 ]
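The "1.07 billion molecules" figure follows from simple doubling arithmetic: 30 cycles of perfect doubling give 2³⁰ ≈ 1.07 × 10⁹ copies. A minimal sketch of that arithmetic (the efficiency parameter is our own illustrative addition, not a quantity from the text):

```python
# Exponential arithmetic behind ideal PCR: each cycle doubles the template,
# so n cycles yield (1 + efficiency)**n copies from a single starting molecule.
def pcr_copies(cycles, efficiency=1.0, start=1):
    # efficiency < 1.0 crudely models real reactions, which rarely double perfectly
    return start * (1.0 + efficiency) ** cycles

print(pcr_copies(30))                  # 1073741824.0, i.e. ~1.07 billion
print(pcr_copies(30, efficiency=0.9))  # noticeably lower, ~2.3e8
```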
Gel electrophoresis is a technique which separates molecules by their size using an agarose or polyacrylamide gel. [ 39 ] This technique is one of the principal tools of molecular biology. The basic principle is that DNA fragments can be separated by applying an electric current across the gel: because the DNA backbone contains negatively charged phosphate groups, the DNA will migrate through the agarose gel towards the positive electrode. [ 39 ] Proteins can also be separated on the basis of size using an SDS-PAGE gel, or on the basis of size and electric charge using what is known as 2D gel electrophoresis . [ 40 ]
The Bradford assay is a molecular biology technique which enables the fast, accurate quantitation of protein molecules utilizing the unique properties of a dye called Coomassie Brilliant Blue G-250. [ 41 ] Coomassie Blue undergoes a visible color shift from reddish-brown to bright blue upon binding to protein. [ 41 ] In its unstable, cationic state, Coomassie Blue has an absorbance maximum at 465 nm and appears reddish-brown. [ 42 ] When Coomassie Blue binds to protein in an acidic solution, the absorbance maximum shifts to 595 nm and the dye appears bright blue. [ 42 ] Proteins in the assay bind Coomassie Blue within about 2 minutes, and the protein–dye complex is stable for about an hour, although it is recommended that absorbance readings are taken within 5 to 20 minutes of reaction initiation. [ 41 ] The concentration of protein in the Bradford assay can then be measured using a visible light spectrophotometer , and the assay therefore does not require extensive equipment. [ 42 ]
This method was developed in 1975 by Marion M. Bradford , and has enabled significantly faster, more accurate protein quantitation compared to previous methods: the Lowry procedure and the biuret assay. [ 41 ] Unlike those methods, the Bradford assay is not susceptible to interference by several non-protein molecules, including ethanol, sodium chloride, and magnesium chloride. [ 41 ] However, it is susceptible to interference by strongly alkaline buffering agents and by detergents such as sodium dodecyl sulfate (SDS). [ 41 ]
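In practice, the 595 nm readings are converted to concentrations via a standard curve. A minimal sketch of that bookkeeping (all values below are hypothetical, not from the source):

```python
# Fit a line to A595 readings of known protein standards, then invert the
# fit to estimate the concentration of an unknown sample.
import numpy as np

standards_ugml = np.array([0.0, 125.0, 250.0, 500.0, 1000.0])  # e.g. BSA dilutions
a595 = np.array([0.00, 0.11, 0.21, 0.42, 0.80])                # hypothetical readings

slope, intercept = np.polyfit(standards_ugml, a595, 1)

def protein_conc(absorbance):
    # invert the linear standard curve: A = slope * c + intercept
    return (absorbance - intercept) / slope

print(protein_conc(0.35))   # estimated concentration of an unknown, in ug/mL
```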
The terms northern , western and eastern blotting are derived from what initially was a molecular biology joke that played on the term Southern blotting , after the technique described by Edwin Southern for the hybridisation of blotted DNA. Patricia Thomas, developer of the RNA blot which then became known as the northern blot , actually did not use the term. [ 43 ]
Named after its inventor, biologist Edwin Southern , the Southern blot is a method for probing for the presence of a specific DNA sequence within a DNA sample. DNA samples before or after restriction enzyme (restriction endonuclease) digestion are separated by gel electrophoresis and then transferred to a membrane by blotting via capillary action . The membrane is then exposed to a labeled DNA probe that has a base sequence complementary to the sequence on the DNA of interest. [ 44 ] Southern blotting is less commonly used in laboratory science due to the capacity of other techniques, such as PCR , to detect specific DNA sequences from DNA samples. These blots are still used for some applications, however, such as measuring transgene copy number in transgenic mice or in the engineering of gene knockout embryonic stem cell lines . [ 30 ]
The northern blot is used to study the presence of specific RNA molecules as relative comparison among a set of different samples of RNA. It is essentially a combination of denaturing RNA gel electrophoresis and a blot . In this process RNA is separated based on size and is then transferred to a membrane that is then probed with a labeled complement of a sequence of interest. The results may be visualized in a variety of ways depending on the label used; however, most reveal bands representing the sizes of the RNA detected in the sample. The intensity of these bands is related to the amount of the target RNA in the samples analyzed. The procedure is commonly used to study when and how much gene expression is occurring by measuring how much of that RNA is present in different samples, assuming that no post-transcriptional regulation occurs and that the levels of mRNA reflect proportional levels of the corresponding protein being produced. It is one of the most basic tools for determining at what time, and under what conditions, certain genes are expressed in living tissues. [ 45 ] [ 46 ]
A western blot is a technique by which specific proteins can be detected from a mixture of proteins. [ 47 ] Western blots can be used to determine the size of isolated proteins, as well as to quantify their expression. [ 48 ] In western blotting , proteins are first separated by size, in a thin gel sandwiched between two glass plates, in a technique known as SDS-PAGE . The proteins in the gel are then transferred to a polyvinylidene fluoride (PVDF), nitrocellulose, nylon, or other support membrane. This membrane can then be probed with solutions of antibodies . Antibodies that specifically bind to the protein of interest can then be visualized by a variety of techniques, including colored products, chemiluminescence , or autoradiography . Often, the antibodies are labeled with enzymes; when the enzyme is exposed to a chemiluminescent substrate, the light produced allows detection. Western blotting techniques allow not only detection but also quantitative analysis. Analogous methods can be used to directly stain specific proteins in live cells or tissue sections. [ 47 ] [ 49 ]
The eastern blotting technique is used to detect post-translational modification of proteins. Proteins blotted on to the PVDF or nitrocellulose membrane are probed for modifications using specific substrates. [ 50 ]
A DNA microarray is a collection of spots attached to a solid support such as a microscope slide where each spot contains one or more single-stranded DNA oligonucleotide fragments. Arrays make it possible to put down large quantities of very small (100 micrometre diameter) spots on a single slide. Each spot has a DNA fragment molecule that is complementary to a single DNA sequence . A variation of this technique allows the gene expression of an organism at a particular stage in development to be profiled ( expression profiling ). In this technique the RNA in a tissue is isolated and converted to labeled complementary DNA (cDNA). This cDNA is then hybridized to the fragments on the array, and the hybridization can then be visualized. Since multiple arrays can be made with exactly the same positions of fragments, they are particularly useful for comparing the gene expression of two different tissues, such as a healthy and a cancerous tissue. Also, one can measure what genes are expressed and how that expression changes with time or with other factors.
There are many different ways to fabricate microarrays; the most common are silicon chips, microscope slides with spots of ~100 micrometre diameter, custom arrays, and arrays with larger spots on porous membranes (macroarrays). There can be anywhere from 100 spots to more than 10,000 on a given array. Arrays can also be made with molecules other than DNA. [ 51 ] [ 52 ] [ 53 ] [ 54 ]
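As a toy illustration of the two-tissue comparison just described (the intensities below are hypothetical, and real analyses include normalization and background correction):

```python
# Per-spot log2 ratio of cancerous to healthy signal intensity.
import math

spots = {
    "geneA": (1500.0, 300.0),   # (cancerous, healthy) intensities
    "geneB": (400.0, 420.0),
    "geneC": (90.0, 800.0),
}

for gene, (cancer, healthy) in spots.items():
    ratio = math.log2(cancer / healthy)
    print(f"{gene}: log2 ratio = {ratio:+.2f}")   # +: up-regulated, -: down-regulated
```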
Allele-specific oligonucleotide (ASO) is a technique that allows detection of single base mutations without the need for PCR or gel electrophoresis. Short (20–25 nucleotides in length), labeled probes are exposed to the non-fragmented target DNA. Hybridization occurs with high specificity due to the short length of the probes, and even a single base change will hinder hybridization. The target DNA is then washed and the unhybridized probes are removed. The target DNA is then analyzed for the presence of the probe via radioactivity or fluorescence. In this experiment, as in most molecular biology techniques, a control must be used to ensure successful experimentation. [ 55 ] [ 56 ]
In molecular biology, procedures and technologies are continually being developed and older technologies abandoned. For example, before the advent of DNA gel electrophoresis ( agarose or polyacrylamide ), the size of DNA molecules was typically determined by rate sedimentation in sucrose gradients , a slow and labor-intensive technique requiring expensive instrumentation; prior to sucrose gradients, viscometry was used. Aside from their historical interest, it is often worth knowing about older technology, as it is occasionally useful for solving a new problem for which the newer technique is inappropriate. [ 57 ] | https://en.wikipedia.org/wiki/Molecular_biology |
Molecular breeding is the application of molecular biology tools, often in plant breeding [ 1 ] [ 2 ] and animal breeding. [ 3 ] [ 4 ] In the broad sense, molecular breeding can be defined as the use of genetic manipulation performed at the level of DNA to improve traits of interest in plants and animals, and it may also include genetic engineering or gene manipulation, molecular marker-assisted selection, and genomic selection. [ 5 ] More often, however, molecular breeding implies molecular marker-assisted breeding (MAB) and is defined as the application of molecular biotechnologies, specifically molecular markers, in combination with linkage maps and genomics, to alter and improve plant or animal traits on the basis of genotypic assays. [ 6 ]
The areas of molecular breeding include:
Methods in marker assisted breeding include:
The development of SNP markers has advanced the molecular breeding process because SNPs occur densely throughout the genome, making it possible to build high-density marker maps. Another area that is developing is genotyping by sequencing . [ 10 ]
Gene transfer makes possible the horizontal movement of genes from one organism to another. Plants can thus receive genes from humans, algae, or any other organism, which greatly expands the opportunities for breeding crop plants.
Molecular breeding resources (including multi-omics data) are available for: | https://en.wikipedia.org/wiki/Molecular_breeding |
In the kinetic theory of gases in physics , the molecular chaos hypothesis (also called Stosszahlansatz in the writings of Paul and Tatiana Ehrenfest [ 1 ] [ 2 ] ) is the assumption that the velocities of colliding particles are uncorrelated, and independent of position. This means the probability that a pair of particles with given velocities will collide can be calculated by considering each particle separately, ignoring any correlation between the probability of finding one particle with velocity v and the probability of finding another particle with velocity v′ in a small region δr . James Clerk Maxwell introduced this approximation in 1867, [ 3 ] although its origins can be traced back to his first work on the kinetic theory in 1860. [ 4 ] [ 5 ]
The assumption of molecular chaos is the key ingredient that allows proceeding from the BBGKY hierarchy to Boltzmann's equation , by reducing the 2-particle distribution function showing up in the collision term to a product of 1-particle distributions. This in turn leads to Boltzmann's H-theorem of 1872, [ 6 ] which attempted to use kinetic theory to show that the entropy of a gas prepared in a state of less than complete disorder must inevitably increase, as the gas molecules are allowed to collide. This drew the objection from Loschmidt that it should not be possible to deduce an irreversible process from time-symmetric dynamics and a time-symmetric formalism: something must be wrong ( Loschmidt's paradox ). The resolution (1895) of this paradox is that the velocities of two particles after a collision are no longer truly uncorrelated. By asserting that it was acceptable to ignore these correlations in the population at times after the initial time, Boltzmann had introduced an element of time asymmetry through the formalism of his calculation. [ citation needed ]
Though the Stosszahlansatz is usually understood as a physically grounded hypothesis, it was recently highlighted that it could also be interpreted as a heuristic hypothesis. This interpretation allows using the principle of maximum entropy in order to generalize the ansatz to higher-order distribution functions. [ 7 ]
| https://en.wikipedia.org/wiki/Molecular_chaos |
The molecular clock is a figurative term for a technique that uses the mutation rate of biomolecules to deduce the time in prehistory when two or more life forms diverged . The biomolecular data used for such calculations are usually nucleotide sequences for DNA , RNA , or amino acid sequences for proteins .
The notion of the existence of a so-called "molecular clock" was first attributed to Émile Zuckerkandl and Linus Pauling who, in 1962, noticed that the number of amino acid differences in hemoglobin between different lineages changes roughly linearly with time, as estimated from fossil evidence. [ 1 ] They generalized this observation to assert that the rate of evolutionary change of any specified protein was approximately constant over time and over different lineages (known as the molecular clock hypothesis ).
The genetic equidistance phenomenon was first noted in 1963 by Emanuel Margoliash , who wrote: "It appears that the number of residue differences between cytochrome c of any two species is mostly conditioned by the time elapsed since the lines of evolution leading to these two species originally diverged. If this is correct, the cytochrome c of all mammals should be equally different from the cytochrome c of all birds. Since fish diverges from the main stem of vertebrate evolution earlier than either birds or mammals, the cytochrome c of both mammals and birds should be equally different from the cytochrome c of fish. Similarly, all vertebrate cytochrome c should be equally different from the yeast protein." [ 2 ] For example, the difference between the cytochrome c of a carp and a frog, turtle, chicken, rabbit, and horse is a very constant 13% to 14%. Similarly, the difference between the cytochrome c of a bacterium and yeast, wheat, moth, tuna, pigeon, and horse ranges from 64% to 69%. Together with the work of Emile Zuckerkandl and Linus Pauling, the genetic equidistance result led directly to the formal postulation of the molecular clock hypothesis in the early 1960s. [ 3 ]
Similarly, Vincent Sarich and Allan Wilson in 1967 demonstrated that molecular differences among modern primates in albumin proteins showed that approximately constant rates of change had occurred in all the lineages they assessed. [ 4 ] The basic logic of their analysis involved recognizing that if one species lineage had evolved more quickly than a sister species lineage since their common ancestor, then the molecular differences between an outgroup (more distantly related) species and the faster-evolving species should be larger (since more molecular changes would have accumulated on that lineage) than the molecular differences between the outgroup species and the slower-evolving species. This method is known as the relative rate test . Sarich and Wilson's paper reported, for example, that human ( Homo sapiens ) and chimpanzee ( Pan troglodytes ) albumin immunological cross-reactions suggested they were about equally different from Ceboidea (New World monkey) species (within experimental error). This meant that they had both accumulated approximately equal changes in albumin since their shared common ancestor. This pattern was also found for all the primate comparisons they tested. When calibrated with the few well-documented fossil branch points (such as the absence of primate fossils of modern aspect before the K-T boundary ), this led Sarich and Wilson to argue that the human-chimp divergence probably occurred only ~4–6 million years ago. [ 5 ]
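As an aside, the arithmetic of the relative rate test can be sketched in a few lines; the difference counts below are hypothetical, purely for illustration:

```python
# If lineages A and B evolve at equal rates since their split, an outgroup O
# should be (about) equally distant from both: d(O, A) - d(O, B) ~ 0.
def relative_rate_difference(d_out_a, d_out_b):
    return d_out_a - d_out_b

print(relative_rate_difference(42, 40))   # near zero: consistent with a clock
print(relative_rate_difference(42, 25))   # large: rates differ between lineages
```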
The observation of a clock-like rate of molecular change was originally purely phenomenological . Later, the work of Motoo Kimura [ 6 ] developed the neutral theory of molecular evolution , which predicted a molecular clock. Let there be N individuals, and to keep this calculation simple, let the individuals be haploid (i.e. have one copy of each gene). Let the rate of neutral mutations (i.e. mutations with no effect on fitness ) in a new individual be μ. The probability that a new mutation will become fixed in the population is then 1/N, since each copy of the gene is as good as any other. Every generation, each individual can have new mutations, so there are μN new neutral mutations in the population as a whole. Each has a 1/N chance of fixation, so the expected number of new neutral mutations that fix each generation is μN × (1/N) = μ. If most changes seen during molecular evolution are neutral, then fixations in a population will accumulate at a clock-rate that is equal to the rate of neutral mutations in an individual.
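A small simulation sketch of the 1/N fixation claim, assuming the standard Wright–Fisher model of genetic drift (our own illustration; the argument above uses this model implicitly):

```python
# Monte Carlo illustration: a single new neutral mutation in a haploid
# Wright-Fisher population of size N fixes with probability ~1/N.
import numpy as np

def fixation_probability(N=100, trials=20000, seed=42):
    rng = np.random.default_rng(seed)
    fixed = 0
    for _ in range(trials):
        copies = 1                          # one new mutant copy
        while 0 < copies < N:               # drift until loss or fixation
            # each of the N offspring inherits the mutant allele
            # independently with probability copies/N
            copies = rng.binomial(N, copies / N)
        fixed += copies == N
    return fixed / trials

print(fixation_probability())   # ~0.01 for N = 100, i.e. ~1/N
```

Multiplying this 1/N fixation probability by the μN new neutral mutations arising per generation recovers the substitution rate μ used above.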
To use molecular clocks to estimate divergence times, molecular clocks need to be "calibrated". This is because molecular data alone does not contain any information on absolute times. For viral phylogenetics and ancient DNA studies—two areas of evolutionary biology where it is possible to sample sequences over an evolutionary timescale—the dates of the intermediate samples can be used to calibrate the molecular clock. However, most phylogenies require that the molecular clock be calibrated using independent evidence about dates, such as the fossil record. [ 7 ] There are two general methods for calibrating the molecular clock using fossils: node calibration and tip calibration. [ 8 ]
Sometimes referred to as node dating, node calibration is a method for time-scaling phylogenetic trees by specifying time constraints for one or more nodes in the tree. Early methods of clock calibration only used a single fossil constraint (e.g. non-parametric rate smoothing), [ 9 ] but newer methods (BEAST [ 10 ] and r8s [ 11 ] ) allow for the use of multiple fossils to calibrate molecular clocks. The oldest fossil of a clade is used to constrain the minimum possible age for the node representing the most recent common ancestor of the clade. However, due to incomplete fossil preservation and other factors, clades are typically older than their oldest fossils. [ 8 ] In order to account for this, nodes are allowed to be older than the minimum constraint in node calibration analyses. However, determining how much older the node is allowed to be is challenging. There are a number of strategies for deriving the maximum bound for the age of a clade including those based on birth-death models, fossil stratigraphic distribution analyses, or taphonomic controls. [ 12 ] Alternatively, instead of a maximum and a minimum, a probability density can be used to represent the uncertainty about the age of the clade. These calibration densities can take the shape of standard probability densities (e.g. normal , lognormal , exponential , gamma ) that can be used to express the uncertainty associated with divergence time estimates. [ 10 ] Determining the shape and parameters of the probability distribution is not trivial, but there are methods that use not only the oldest fossil but a larger sample of the fossil record of clades to estimate calibration densities empirically. [ 13 ] Studies have shown that increasing the number of fossil constraints increases the accuracy of divergence time estimation. [ 14 ]
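As a hedged sketch of such a calibration density (all parameter values hypothetical, and assuming SciPy is available), one might offset a lognormal distribution by the age of the clade's oldest fossil, so the node must be at least that old but is probably somewhat older:

```python
# A node-age prior: hard minimum at the oldest fossil's age (loc),
# with lognormally distributed extra age beyond it.
from scipy import stats

fossil_min_age = 50.0    # Ma; hypothetical oldest fossil of the clade
prior = stats.lognorm(s=0.5, loc=fossil_min_age, scale=10.0)

print(prior.mean())               # expected node age under this prior (~61 Ma)
print(prior.ppf([0.025, 0.975]))  # 95% interval for the node age
```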
Sometimes referred to as tip dating , tip calibration is a method of molecular clock calibration in which fossils are treated as taxa and placed on the tips of the tree. This is achieved by creating a matrix that includes a molecular dataset for the extant taxa along with a morphological dataset for both the extinct and the extant taxa. [ 12 ] Unlike node calibration, this method reconstructs the tree topology and places the fossils simultaneously. Molecular and morphological models work together simultaneously, allowing morphology to inform the placement of fossils. [ 8 ] Tip calibration makes use of all relevant fossil taxa during clock calibration, rather than relying on only the oldest fossil of each clade. This method does not rely on the interpretation of negative evidence to infer maximum clade ages. [ 12 ]
Demographic changes in populations can be detected as fluctuations in historical coalescent effective population size from a sample of extant genetic variation in the population using coalescent theory. [ 15 ] [ 16 ] [ 17 ] Ancient population expansions that are well documented and dated in the geological record can be used to calibrate a rate of molecular evolution in a manner similar to node calibration. However, instead of calibrating from the known age of a node, expansion calibration uses a two-epoch model of constant population size followed by population growth, with the time of transition between epochs being the parameter of interest for calibration. [ 18 ] [ 19 ] Expansion calibration works at shorter, intraspecific timescales in comparison to node calibration, because expansions can only be detected after the most recent common ancestor of the species in question. Expansion dating has been used to show that molecular clock rates can be inflated at short timescales [ 18 ] (< 1 MY) due to incomplete fixation of alleles, as discussed below. [ 20 ] [ 21 ]
This approach to tip calibration goes a step further by simultaneously estimating fossil placement, topology, and the evolutionary timescale. In this method, the age of a fossil can inform its phylogenetic position in addition to morphology. By allowing all aspects of tree reconstruction to occur simultaneously, the risk of biased results is decreased. [ 8 ] This approach has been improved upon by pairing it with different models. One current method of molecular clock calibration is total evidence dating paired with the fossilized birth-death (FBD) model and a model of morphological evolution. [ 22 ] The FBD model is novel in that it allows for "sampled ancestors", which are fossil taxa that are the direct ancestor of a living taxon or lineage . This allows fossils to be placed on a branch above an extant organism, rather than being confined to the tips. [ 23 ]
Bayesian methods can provide more appropriate estimates of divergence times, especially if large datasets—such as those yielded by phylogenomics —are employed. [ 24 ]
Sometimes only a single divergence date can be estimated from fossils, with all other dates inferred from that. Other sets of species have abundant fossils available, allowing the hypothesis of constant divergence rates to be tested. DNA sequences experiencing low levels of negative selection showed divergence rates of 0.7–0.8% per Myr in bacteria, mammals, invertebrates, and plants. [ 25 ] In the same study, genomic regions experiencing very high negative or purifying selection (encoding rRNA) were considerably slower (1% per 50 Myr).
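A back-of-the-envelope sketch using rates of the magnitude quoted above; note that whether a published rate refers to per-lineage or pairwise divergence varies between studies, so these numbers are purely illustrative:

```python
# Divergence time from pairwise sequence difference and a clock rate.
def divergence_time_myr(percent_difference, rate_percent_per_myr):
    return percent_difference / rate_percent_per_myr

print(divergence_time_myr(8.0, 0.75))   # ~10.7 Myr under these assumptions
```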
In addition to such variation in rate with genomic position, since the early 1990s variation among taxa has proven fertile ground for research too, [ 26 ] even over comparatively short periods of evolutionary time (for example mockingbirds [ 27 ] ). Tube-nosed seabirds have molecular clocks that on average run at half the speed of many other birds, [ 28 ] possibly due to long generation times, and many turtles have a molecular clock running at one-eighth the speed it does in small mammals, or even slower. [ 29 ] Effects of small population size are also likely to confound molecular clock analyses. Researchers such as Francisco J. Ayala have more fundamentally challenged the molecular clock hypothesis. [ 30 ] [ 31 ] [ 32 ] According to Ayala's 1999 study, five factors combine to limit the application of molecular clock models: changing generation times; population size; species-specific differences; changes in the function of the studied protein; and changes in the intensity of natural selection over time.
Molecular clock users have developed workaround solutions using a number of statistical approaches including maximum likelihood techniques and later Bayesian modeling . In particular, models that take into account rate variation across lineages have been proposed in order to obtain better estimates of divergence times. These models are called relaxed molecular clocks [ 33 ] because they represent an intermediate position between the 'strict' molecular clock hypothesis and Joseph Felsenstein 's many-rates model [ 34 ] and are made possible through MCMC techniques that explore a weighted range of tree topologies and simultaneously estimate parameters of the chosen substitution model. It must be remembered that divergence dates inferred using a molecular clock are based on statistical inference and not on direct evidence .
The molecular clock runs into particular challenges at very short and very long timescales. At long timescales, the problem is saturation . When enough time has passed, many sites have undergone more than one change, but it is impossible to detect more than one. This means that the observed number of changes is no longer linear with time, but instead flattens out. Even at intermediate genetic distances, with phylogenetic data still sufficient to estimate topology, signal for the overall scale of the tree can be weak under complex likelihood models, leading to highly uncertain molecular clock estimates. [ 35 ]
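The flattening can be made concrete with the Jukes–Cantor model, the simplest standard correction for saturation; this is an illustrative aside, not a method attributed to the cited study:

```python
# Under Jukes-Cantor, the observed proportion of differing sites p saturates
# at 3/4 as the true number of substitutions per site d grows:
#   p = 3/4 * (1 - exp(-4d/3)),  inverted:  d = -3/4 * ln(1 - 4p/3)
import math

def observed_from_true(d):
    return 0.75 * (1.0 - math.exp(-4.0 * d / 3.0))

def jukes_cantor_correct(p):
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

for d in (0.1, 0.5, 1.0, 2.0):
    p = observed_from_true(d)
    print(f"true d = {d:.1f} -> observed p = {p:.3f} -> corrected d = {jukes_cantor_correct(p):.2f}")
```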
At very short time scales, many differences between samples do not represent fixation of different sequences in the different populations. Instead, they represent alternative alleles that were both present as part of a polymorphism in the common ancestor. The inclusion of differences that have not yet become fixed leads to a potentially dramatic inflation of the apparent rate of the molecular clock at very short timescales. [ 21 ] [ 36 ]
The molecular clock technique is an important tool in molecular systematics , macroevolution , and phylogenetic comparative methods . Estimation of the dates of phylogenetic events, including those not documented by fossils , such as the divergences between living taxa, has allowed the study of macroevolutionary processes in organisms with limited fossil records. Phylogenetic comparative methods rely heavily on calibrated phylogenies. | https://en.wikipedia.org/wiki/Molecular_clock |
Molecular cloning is a set of experimental methods in molecular biology that are used to assemble recombinant DNA molecules and to direct their replication within host organisms . [ 1 ] The use of the word cloning refers to the fact that the method involves the replication of one molecule to produce a population of cells with identical DNA molecules. Molecular cloning generally uses DNA sequences from two different organisms: the species that is the source of the DNA to be cloned, and the species that will serve as the living host for replication of the recombinant DNA. Molecular cloning methods are central to many contemporary areas of modern biology and medicine. [ 2 ]
In a conventional molecular cloning experiment, the DNA to be cloned is obtained from an organism of interest, then treated with enzymes in the test tube to generate smaller DNA fragments. These fragments are then combined with vector DNA to generate recombinant DNA molecules. The recombinant DNA is then introduced into a host organism (typically an easy-to-grow, benign, laboratory strain of E. coli bacteria). This will generate a population of organisms in which recombinant DNA molecules are replicated along with the host DNA. Because they contain foreign DNA fragments, these are transgenic or genetically modified microorganisms ( GMOs ). [ 3 ] This process takes advantage of the fact that a single bacterial cell can be induced to take up and replicate a single recombinant DNA molecule. This single cell can then be expanded exponentially to generate a large number of bacteria, each of which contains copies of the original recombinant molecule. Thus, both the resulting bacterial population, and the recombinant DNA molecule, are commonly referred to as "clones". Strictly speaking, recombinant DNA refers to DNA molecules, while molecular cloning refers to the experimental methods used to assemble them. The idea arose that different DNA sequences could be inserted into a plasmid and that these foreign sequences would be carried into bacteria and replicated as part of the plasmid. That is, these plasmids could serve as cloning vectors to carry genes. [ 4 ]
Virtually any DNA sequence can be cloned and amplified, but there are some factors that might limit the success of the process. Examples of the DNA sequences that are difficult to clone are inverted repeats, origins of replication, centromeres and telomeres. There is also a lower chance of success when inserting large-sized DNA sequences. Inserts larger than 10 kbp have very limited success, but bacteriophages such as bacteriophage λ can be modified to successfully insert a sequence up to 40 kbp. [ 5 ]
Prior to the 1970s, the understanding of genetics and molecular biology was severely hampered by an inability to isolate and study individual genes from complex organisms. This changed dramatically with the advent of molecular cloning methods. Microbiologists, seeking to understand the molecular mechanisms through which bacteria restricted the growth of bacteriophage, isolated restriction endonucleases , enzymes that could cleave DNA molecules only when specific DNA sequences were encountered. [ 6 ] They showed that restriction enzymes cleaved chromosome-length DNA molecules at specific locations, and that specific sections of the larger molecule could be purified by size fractionation. Using a second enzyme, DNA ligase , fragments generated by restriction enzymes could be joined in new combinations, termed recombinant DNA . By recombining DNA segments of interest with vector DNA, such as bacteriophage or plasmids, which naturally replicate inside bacteria, large quantities of purified recombinant DNA molecules could be produced in bacterial cultures. The first recombinant DNA molecules were generated and studied in 1972. [ 7 ] [ 8 ]
Molecular cloning takes advantage of the fact that the chemical structure of DNA is fundamentally the same in all living organisms. Therefore, if any segment of DNA from any organism is inserted into a DNA segment containing the molecular sequences required for DNA replication , and the resulting recombinant DNA is introduced into the organism from which the replication sequences were obtained, then the foreign DNA will be replicated along with the host cell's DNA in the transgenic organism.
Molecular cloning is similar to PCR in that it permits the replication of a specific DNA sequence. The fundamental difference between the two methods is that molecular cloning involves replication of the DNA in a living microorganism, while PCR replicates DNA in an in vitro solution, free of living cells.
Before actual cloning experiments are performed in the lab, most cloning experiments are planned in a computer, using specialized software. Although the detailed planning of the cloning can be done in any text editor, together with online utilities for e.g. PCR primer design, dedicated software exists for the purpose, including ApE (open source), DNA Strider (open source), Serial Cloner (gratis), Collagene (open source), and SnapGene (commercial). These programs allow the user to simulate PCR reactions , restriction digests , ligations , etc., that is, all the steps described below.
In standard molecular cloning experiments, the cloning of any DNA fragment essentially involves seven steps: (1) Choice of host organism and cloning vector, (2) Preparation of vector DNA, (3) Preparation of DNA to be cloned, (4) Creation of recombinant DNA, (5) Introduction of recombinant DNA into host organism, (6) Selection of organisms containing recombinant DNA, (7) Screening for clones with desired DNA inserts and biological properties.
Notably, the growing capacity and fidelity of DNA synthesis platforms allow for increasingly intricate designs in molecular engineering. These projects may include very long strands of novel DNA sequence and/or test entire libraries simultaneously, as opposed to individual sequences. These shifts introduce complexity that requires design to move away from the flat nucleotide-based representation and towards a higher level of abstraction. Examples of such tools are GenoCAD , Teselagen (free for academia) and Genetic Constructor (free for academics).
Although a very large number of host organisms and molecular cloning vectors are in use, the great majority of molecular cloning experiments begin with a laboratory strain of the bacterium E. coli ( Escherichia coli ) and a plasmid cloning vector . E. coli and plasmid vectors are in common use because they are technically sophisticated, versatile, widely available, and offer rapid growth of recombinant organisms with minimal equipment. [ 3 ] If the DNA to be cloned is exceptionally large (hundreds of thousands to millions of base pairs), then a bacterial artificial chromosome [ 10 ] or yeast artificial chromosome vector is often chosen.
Specialized applications may call for specialized host-vector systems. For example, if the experimentalists wish to harvest a particular protein from the recombinant organism, then an expression vector is chosen that contains appropriate signals for transcription and translation in the desired host organism. Alternatively, if replication of the DNA in different species is desired (for example, transfer of DNA from bacteria to plants), then a multiple host range vector (also termed shuttle vector ) may be selected. In practice, however, specialized molecular cloning experiments usually begin with cloning into a bacterial plasmid, followed by subcloning into a specialized vector.
Whatever combination of host and vector is used, the vector almost always contains four DNA segments that are critically important to its function and experimental utility: [ 3 ]
The cloning vector is treated with a restriction endonuclease to cleave the DNA at the site where foreign DNA will be inserted. The restriction enzyme is chosen to generate a configuration at the cleavage site that is compatible with the ends of the foreign DNA (see DNA end ). Typically, this is done by cleaving the vector DNA and foreign DNA with the same restriction enzyme, for example EcoRI , a restriction endonuclease originally isolated from E. coli . [ 11 ] Most modern vectors contain a variety of convenient cleavage sites that are unique within the vector molecule (so that the vector can only be cleaved at a single site) and are located within a gene (frequently beta-galactosidase ) whose inactivation can be used to distinguish recombinant from non-recombinant organisms at a later step in the process. To improve the ratio of recombinant to non-recombinant organisms, the cleaved vector may be treated with an enzyme ( alkaline phosphatase ) that dephosphorylates the vector ends. Vector molecules with dephosphorylated ends are unable to replicate, and replication can only be restored if foreign DNA is integrated into the cleavage site. [ 12 ]
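As an illustrative aside (the sequence below is hypothetical, and real tools such as Biopython's Restriction module handle circular plasmids and ambiguous bases), locating a restriction enzyme's recognition sites reduces to a string scan:

```python
# Locate EcoRI recognition sites (GAATTC) in a linear sequence by string scan.
def find_sites(seq, site="GAATTC"):
    seq = seq.upper()
    return [i for i in range(len(seq) - len(site) + 1) if seq[i:i + len(site)] == site]

vector = "ttgacGAATTCaaacgtgGAATTCcat"   # hypothetical sequence
print(find_sites(vector))                # [5, 18]
```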
For cloning of genomic DNA, the DNA to be cloned is extracted from the organism of interest. Virtually any tissue source can be used (even tissues from extinct animals ), [ 13 ] as long as the DNA is not extensively degraded. The DNA is then purified using simple methods to remove contaminating proteins (extraction with phenol), RNA (ribonuclease) and smaller molecules (precipitation and/or chromatography). Polymerase chain reaction (PCR) methods are often used for amplification of specific DNA or RNA ( RT-PCR ) sequences prior to molecular cloning.
DNA for cloning experiments may also be obtained from RNA using reverse transcriptase ( complementary DNA or cDNA cloning), or in the form of synthetic DNA ( artificial gene synthesis ). cDNA cloning is usually used to obtain clones representative of the mRNA population of the cells of interest, while synthetic DNA is used to obtain any precise sequence defined by the designer. Such a designed sequence may be required when moving genes across genetic codes (for example, from the mitochondria to the nucleus) [ 14 ] or simply for increasing expression via codon optimization . [ 15 ]
The purified DNA is then treated with a restriction enzyme to generate fragments with ends capable of being linked to those of the vector. If necessary, short double-stranded segments of DNA ( linkers ) containing desired restriction sites may be added to create end structures that are compatible with the vector. [ 3 ] [ 12 ]
The creation of recombinant DNA is in many ways the simplest step of the molecular cloning process. DNA prepared from the vector and foreign source are simply mixed together at appropriate concentrations and exposed to an enzyme ( DNA ligase ) that covalently links the ends together. This joining reaction is often termed ligation . The resulting DNA mixture containing randomly joined ends is then ready for introduction into the host organism.
DNA ligase only recognizes and acts on the ends of linear DNA molecules, usually resulting in a complex mixture of DNA molecules with randomly joined ends. The desired products (vector DNA covalently linked to foreign DNA) will be present, but other sequences (e.g. foreign DNA linked to itself, vector DNA linked to itself and higher-order combinations of vector and foreign DNA) are also usually present. This complex mixture is sorted out in subsequent steps of the cloning process, after the DNA mixture is introduced into cells. [ 3 ] [ 12 ]
The DNA mixture, previously manipulated in vitro, is moved back into a living cell, referred to as the host organism. The methods used to get DNA into cells are varied, and the name applied to this step in the molecular cloning process will often depend upon the experimental method that is chosen (e.g. transformation , transduction , transfection , electroporation ). [ 3 ] [ 12 ]
When microorganisms are able to take up and replicate DNA from their local environment, the process is termed transformation , and cells that are in a physiological state such that they can take up DNA are said to be competent . [ 16 ] In mammalian cell culture, the analogous process of introducing DNA into cells is commonly termed transfection . Both transformation and transfection usually require preparation of the cells through a special growth regime and chemical treatment process that will vary with the specific species and cell types that are used.
Electroporation uses high voltage electrical pulses to translocate DNA across the cell membrane (and cell wall, if present). [ 17 ] In contrast, transduction involves the packaging of DNA into virus-derived particles, and using these virus-like particles to introduce the encapsulated DNA into the cell through a process resembling viral infection. Although electroporation and transduction are highly specialized methods, they may be the most efficient methods to move DNA into cells.
Whichever method is used, the introduction of recombinant DNA into the chosen host organism is usually a low efficiency process; that is, only a small fraction of the cells will actually take up DNA. Experimental scientists deal with this issue through a step of artificial genetic selection, in which cells that have not taken up DNA are selectively killed, and only those cells that can actively replicate DNA containing the selectable marker gene encoded by the vector are able to survive. [ 3 ] [ 12 ]
When bacterial cells are used as host organisms, the selectable marker is usually a gene that confers resistance to an antibiotic that would otherwise kill the cells, typically ampicillin . Cells harboring the plasmid will survive when exposed to the antibiotic, while those that have failed to take up plasmid sequences will die. When mammalian cells (e.g. human or mouse cells) are used, a similar strategy is used, except that the marker gene (in this case typically a neomycin resistance gene) confers resistance to the antibiotic Geneticin .
Modern bacterial cloning vectors (e.g. pUC19 and later derivatives including the pGEM vectors) use the blue-white screening system to distinguish colonies (clones) of transgenic cells from those that contain the parental vector (i.e. vector DNA with no recombinant sequence inserted). In these vectors, foreign DNA is inserted into a sequence that encodes an essential part of beta-galactosidase , an enzyme whose activity results in formation of a blue-colored colony on the culture medium that is used for this work. Insertion of the foreign DNA into the beta-galactosidase coding sequence disables the function of the enzyme so that colonies containing transformed DNA remain colorless (white). Therefore, experimentalists are easily able to identify and conduct further studies on transgenic bacterial clones, while ignoring those that do not contain recombinant DNA.
The total population of individual clones obtained in a molecular cloning experiment is often termed a DNA library . Libraries may be highly complex (as when cloning complete genomic DNA from an organism) or relatively simple (as when moving a previously cloned DNA fragment into a different plasmid), but it is almost always necessary to examine a number of different clones to be sure that the desired DNA construct is obtained. This may be accomplished through a very wide range of experimental methods, including the use of nucleic acid hybridizations , antibody probes , polymerase chain reaction , restriction fragment analysis and/or DNA sequencing . [ 3 ] [ 12 ]
Molecular cloning provides scientists with an essentially unlimited quantity of any individual DNA segments derived from any genome. This material can be used for a wide range of purposes, including those in both basic and applied biological science. A few of the more important applications are summarized here.
Molecular cloning has led directly to the elucidation of the complete DNA sequence of the genomes of a very large number of species and to an exploration of genetic diversity within individual species, work that has been done mostly by determining the DNA sequence of large numbers of randomly cloned fragments of the genome, and assembling the overlapping sequences.
At the level of individual genes, molecular clones are used to generate probes that are used for examining how genes are expressed , and how that expression is related to other processes in biology, including the metabolic environment, extracellular signals, development, learning, senescence and cell death. Cloned genes can also provide tools to examine the biological function and importance of individual genes, by allowing investigators to inactivate the genes, or make more subtle mutations using regional mutagenesis or site-directed mutagenesis . Genes cloned into expression vectors for functional cloning provide a means to screen for genes on the basis of the expressed protein's function.
Obtaining the molecular clone of a gene can lead to the development of organisms that produce the protein product of the cloned genes, termed a recombinant protein. In practice, it is frequently more difficult to develop an organism that produces an active form of the recombinant protein in desirable quantities than it is to clone the gene. This is because the molecular signals for gene expression are complex and variable, and because protein folding, stability and transport can be very challenging.
Many useful proteins are currently available as recombinant products . These include: (1) medically useful proteins whose administration can correct a defective or poorly expressed gene (e.g. recombinant factor VIII , a blood-clotting factor deficient in some forms of hemophilia , [ 18 ] and recombinant insulin , used to treat some forms of diabetes [ 19 ] ), (2) proteins that can be administered to assist in a life-threatening emergency (e.g. tissue plasminogen activator , used to treat strokes [ 20 ] ), (3) recombinant subunit vaccines, in which a purified protein can be used to immunize patients against infectious diseases, without exposing them to the infectious agent itself (e.g. hepatitis B vaccine [ 21 ] ), and (4) recombinant proteins as standard material for diagnostic laboratory tests.
Once characterized and manipulated to provide signals for appropriate expression, cloned genes may be inserted into organisms, generating transgenic organisms, also termed genetically modified organisms (GMOs). Although most GMOs are generated for purposes of basic biological research (see for example, transgenic mouse ), a number of GMOs have been developed for commercial use, ranging from animals and plants that produce pharmaceuticals or other compounds ( pharming ) to herbicide-resistant crop plants and fluorescent tropical fish ( GloFish ) for home entertainment. [ 1 ]
Gene therapy involves supplying a functional gene to cells lacking that function, with the aim of correcting a genetic disorder or acquired disease. Gene therapy can be broadly divided into two categories. The first is alteration of germ cells, that is, sperm or eggs, which results in a permanent genetic change for the whole organism and subsequent generations. This "germ line gene therapy" is considered by many to be unethical in human beings. [ 22 ] The second type of gene therapy, "somatic cell gene therapy", is analogous to an organ transplant. In this case, one or more specific tissues are targeted by direct treatment or by removal of the tissue, addition of the therapeutic gene or genes in the laboratory, and return of the treated cells to the patient. Clinical trials of somatic cell gene therapy began in the late 1990s, mostly for the treatment of cancers and blood, liver, and lung disorders. [ 23 ]
Despite a great deal of publicity and promises, the history of human gene therapy has been characterized by relatively limited success. [ 23 ] The effect of introducing a gene into cells often promotes only partial and/or transient relief from the symptoms of the disease being treated. Some gene therapy trial patients have suffered adverse consequences of the treatment itself, including deaths. In some cases, the adverse effects result from disruption of essential genes within the patient's genome by insertional inactivation. In others, viral vectors used for gene therapy have been contaminated with infectious virus. Nevertheless, gene therapy is still held to be a promising future area of medicine, and is an area where there is a significant level of research and development activity. | https://en.wikipedia.org/wiki/Molecular_cloning |
A molecular cloud —sometimes called a stellar nursery if star formation is occurring within—is a type of interstellar cloud of which the density and size permit absorption nebulae , the formation of molecules (most commonly molecular hydrogen , H 2 ), and the formation of H II regions . This is in contrast to other areas of the interstellar medium that contain predominantly ionized gas .
Molecular hydrogen is difficult to detect by infrared and radio observations, so the molecule most often used to determine the presence of H 2 is carbon monoxide (CO). The ratio between CO luminosity and H 2 mass is thought to be constant, although there are reasons to doubt this assumption in observations of some other galaxies. [ 1 ]
Within molecular clouds are regions with higher density, where much dust and many gas cores reside, called clumps. These clumps are the beginning of star formation if gravitational forces are sufficient to cause the dust and gas to collapse. [ 2 ]
The history pertaining to the discovery of molecular clouds is closely related to the development of radio astronomy and astrochemistry . During World War II , at a small gathering of scientists, Henk van de Hulst first reported that he had calculated that the neutral hydrogen atom should emit a detectable radio signal . [ 3 ] This discovery was an important step towards the research that would eventually lead to the detection of molecular clouds.
Once the war ended, and aware of the pioneering radio astronomical observations performed by Jansky and Reber in the US, Dutch astronomers repurposed the dish-shaped antennas running along the Dutch coastline, once used by the Germans as a warning radar system, into radio telescopes , initiating the search for the hydrogen signature in the depths of space. [ 3 ] [ 4 ]
The neutral hydrogen atom consists of a proton with an electron in its orbit. Both the proton and the electron have a spin property. When the relative spin orientation flips from the parallel state to the antiparallel state, which has less energy, the atom sheds the excess energy by radiating a spectral line at a frequency of 1420.405 MHz . [ 3 ]
This frequency is generally known as the 21 cm line , referring to its wavelength in the radio band . The 21 cm line is the signature of HI and makes the gas detectable to astronomers on Earth. The discovery of the 21 cm line was the first step towards the technology that would allow astronomers to detect compounds and molecules in interstellar space. [ 3 ]
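A quick check of the numbers above, converting the hyperfine frequency to its wavelength:

```python
# Wavelength of the 1420.405 MHz hyperfine transition: lambda = c / nu.
c = 299_792_458.0      # speed of light, m/s
nu = 1420.405e6        # HI hyperfine frequency, Hz
print(c / nu * 100)    # ~21.1 cm
```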
In 1951, two research groups nearly simultaneously discovered radio emission from interstellar neutral hydrogen. Ewen and Purcell reported the detection of the 21-cm line in March 1951. Using the radio telescope at the Kootwijk Observatory, Muller and Oort reported the detection of the hydrogen emission line in May of that same year. [ 4 ]
Once the 21-cm emission line was detected, radio astronomers began mapping the neutral hydrogen distribution of the Milky Way Galaxy. Van de Hulst, Muller, and Oort, aided by a team of astronomers from Australia, published the Leiden-Sydney map of neutral hydrogen in the galactic disk in 1958 in the Monthly Notices of the Royal Astronomical Society . This was the first neutral hydrogen map of the galactic disc and also the first map showing the spiral arm structure within it. [ 4 ]
Following the work on atomic hydrogen detection by van de Hulst, Oort and others, astronomers began to regularly use radio telescopes, this time looking for interstellar molecules . In 1963, Alan Barrett and Sander Weinreb at MIT found the emission line of OH in the supernova remnant Cassiopeia A . This was the first detection of an interstellar molecule at radio wavelengths. [ 1 ] More interstellar OH detections quickly followed, and in 1965 Harold Weaver and his team of radio astronomers at Berkeley identified OH emission lines coming from the direction of the Orion Nebula and the constellation of Cassiopeia . [ 4 ]
In 1968, Cheung, Rank, Townes, Thornton and Welch detected NH₃ inversion line radiation in interstellar space. A year later, Lewis Snyder and his colleagues found interstellar formaldehyde . Also in the same year George Carruthers managed to identify molecular hydrogen . The numerous detections of molecules in interstellar space would help pave the way to the discovery of molecular clouds in 1970. [ 4 ]
Hydrogen is the most abundant species of atom in molecular clouds, and under the right conditions it will form the H 2 molecule. Despite its abundance, the detection of H 2 proved difficult. Because H 2 is a symmetric molecule with no permanent dipole moment, its rotational and vibrational transitions are very weak, making the molecule virtually invisible to direct observation.
The solution to this problem came when Arno Penzias , Keith Jefferts, and Robert Wilson identified CO in the star-forming region in the Omega Nebula . Carbon monoxide is much easier to detect than H 2 because its asymmetric structure gives it a permanent dipole moment and readily observable rotational transitions. CO soon became the primary tracer of the clouds where star formation occurs. [ 4 ]
In 1970, Penzias and his team quickly detected CO in other locations close to the galactic center , including the giant molecular cloud identified as Sagittarius B2 , 390 light years from the galactic center, making it the first detection of a molecular cloud in history. [ 4 ] Penzias and Wilson would later receive the Nobel Prize in Physics for their discovery of the cosmic microwave background radiation from the Big Bang .
Due to their pivotal role, research into these structures has only increased over time. A paper published in 2022 reports over 10,000 molecular clouds detected since the discovery of Sagittarius B2. [ 5 ]
Within the Milky Way , molecular gas clouds account for less than one percent of the volume of the interstellar medium (ISM), yet it is also the densest part of it. The bulk of the molecular gas is contained in a ring between 3.5 and 7.5 kiloparsecs (11,000 and 24,000 light-years ) from the center of the Milky Way (the Sun is about 8.5 kiloparsecs from the center). [ 6 ] Large scale CO maps of the galaxy show that the position of this gas correlates with the spiral arms of the galaxy. [ 7 ] That molecular gas occurs predominantly in the spiral arms suggests that molecular clouds must form and dissociate on a timescale shorter than 10 million years—the time it takes for material to pass through the arm region. [ 8 ]
Perpendicular to the plane of the galaxy, the molecular gas inhabits the narrow midplane of the galactic disc with a characteristic scale height , Z , of approximately 50 to 75 parsecs, much thinner than the warm atomic ( Z from 130 to 400 parsecs) and warm ionized ( Z around 1000 parsecs) gaseous components of the ISM . [ 10 ] The exceptions to the ionized-gas distribution are H II regions , which are bubbles of hot ionized gas created in molecular clouds by the intense radiation given off by young massive stars ; and as such they have approximately the same vertical distribution as the molecular gas.
This distribution of molecular gas is averaged out over large distances; however, the small scale distribution of the gas is highly irregular, with most of it concentrated in discrete clouds and cloud complexes. [ 6 ]
Molecular clouds typically have interstellar medium densities of 10 to 30 cm −3 , and constitute approximately 50% of the total interstellar gas in a galaxy . [ 11 ] Most of the gas is found in a molecular state . The visual boundaries of a molecular cloud are not where the cloud effectively ends, but where molecular gas changes to atomic gas in a fast transition, forming "envelopes" of mass that give the impression of an edge to the cloud structure. The structure itself is generally irregular and filamentary. [ 8 ]
Cosmic dust and ultraviolet radiation emitted by stars are key factors that determine not only the gas and column density, but also the molecular composition of a cloud. The dust provides shielding to the molecular gas inside, preventing dissociation by the ultraviolet radiation. The dissociation caused by UV photons is the main mechanism for transforming molecular material back to the atomic state inside the cloud. [ 12 ] Molecular content in a region of a molecular cloud can change rapidly due to variation in the radiation field and the movement and disturbance of dust. [ 13 ]
Most of the gas constituting a molecular cloud is molecular hydrogen , with carbon monoxide being the second most common compound. [ 11 ] Molecular clouds also usually contain other elements and compounds. Astronomers have observed the presence of long-chain compounds such as methanol and ethanol , as well as benzene rings and their several hydrides . Large molecules known as polycyclic aromatic hydrocarbons have also been detected. [ 12 ]
The density across a molecular cloud is fragmented, and its regions can be generally categorized into clumps and cores. Clumps form the larger substructure of the cloud, with an average size of about 1 pc . Clumps are the precursors of star clusters , though not every clump will eventually form stars. Cores are much smaller (by a factor of 10) and have higher densities. Cores are gravitationally bound and go through a collapse during star formation . [ 11 ]
In astronomical terms, molecular clouds are short-lived structures that are either destroyed or go through major structural and chemical changes approximately 10 million years into their existence. Their short life span can be inferred from the range in age of young stars associated with them, of 10 to 20 million years, matching molecular clouds’ internal timescales. [ 13 ]
Direct observation of T Tauri stars inside dark clouds and OB stars in star-forming regions match this predicted age span. The fact that OB stars older than 10 million years do not have a significant amount of cloud material around them suggests that most of the cloud is dispersed after this time. The lack of large amounts of frozen molecules inside the clouds also suggests a short-lived structure. Some astronomers propose the molecules never froze in very large quantities due to turbulence and the fast transition between atomic and molecular gas. [ 13 ]
Due to their short lifespan, it follows that molecular clouds are constantly being assembled and destroyed. By calculating the rate at which stars are forming in our galaxy, astronomers can estimate the amount of interstellar gas being collected into star-forming molecular clouds. The rate of mass being assembled into stars is approximately 3 M ☉ per year. Since only 2% of the mass of a molecular cloud ends up as stars, this implies that roughly 150 M ☉ of gas is assembled into molecular clouds in the Milky Way per year. [ 13 ] [ 14 ]
Astronomers have suggested two possible mechanisms for molecular cloud formation: cloud growth by collision, and gravitational instability in the gas layer spread throughout the galaxy. Models for the collision theory have shown it cannot be the main mechanism for cloud formation, because forming a molecular cloud this way would take longer than the average lifespan of such structures. [ 14 ] [ 13 ]
Gravitational instability is likely to be the main mechanism. Regions with more gas exert a greater gravitational force on their neighboring regions and draw in surrounding material. This extra material increases the density, increasing their gravitational attraction. Mathematical models of gravitational instability in the gas layer predict a formation time within the estimated timescale for cloud formation. [ 14 ] [ 13 ]
Once a molecular cloud assembles enough mass, the densest regions of the structure will start to collapse under gravity, creating star-forming clusters. This process is highly destructive to the cloud itself. Once stars are formed, they begin to ionize portions of the cloud around them with their intense radiation. The ionized gas then evaporates and is dispersed in formations called ' champagne flows '. [ 15 ] This process begins when approximately 2% of the mass of the cloud has been converted into stars. Stellar winds are also known to contribute to cloud dispersal. The cycle of cloud formation and destruction is closed when the gas dispersed by stars cools again and is pulled into new clouds by gravitational instability. [ 13 ]
Star formation involves the collapse of the densest part of the molecular cloud, fragmenting the collapsed region into smaller clumps. These clumps aggregate more interstellar material, increasing in density by gravitational contraction. This process continues until the temperature reaches a point where the fusion of hydrogen can occur. [ 16 ] The burning of hydrogen then generates enough heat to push against gravity, creating hydrostatic equilibrium . At this stage, a protostar is formed and it will continue to aggregate gas and dust from the cloud around it.
One of the most studied star formation regions is the Taurus molecular cloud , due to its close proximity to Earth (140 pc or 430 ly away), making it an excellent object for collecting data about the relationship between molecular clouds and star formation. Embedded in the Taurus molecular cloud are T Tauri stars , a class of variable stars in an early stage of stellar development that are still gathering gas and dust from the cloud around them. Observation of star-forming regions has helped astronomers develop theories about stellar evolution . Many O and B type stars have been observed in or very near molecular clouds. Since these star types belong to population I (some are less than 1 million years old), they cannot have moved far from their birthplace. Many of these young stars are found embedded in cloud clusters, suggesting stars are formed inside them. [ 16 ]
A vast assemblage of molecular gas that has more than 10 thousand times the mass of the Sun [ 18 ] is called a giant molecular cloud ( GMC ). GMCs are around 15 to 600 light-years (5 to 200 parsecs) in diameter, with typical masses of 10 thousand to 10 million solar masses. [ 19 ] Whereas the average density in the solar vicinity is one particle per cubic centimetre, the average volume density of a GMC is about ten to a thousand times higher. Although the Sun is much denser than a GMC, the volume of a GMC is so great that it contains much more mass than the Sun. The substructure of a GMC is a complex pattern of filaments, sheets, bubbles, and irregular clumps. [ 8 ]
Filaments are truly ubiquitous in molecular clouds. Dense molecular filaments will fragment into gravitationally bound cores, most of which will evolve into stars. Continuous accretion of gas, geometrical bending, and magnetic fields may control the detailed manner in which the filaments fragment. In supercritical filaments, observations have revealed quasi-periodic chains of dense cores with a spacing of 0.15 parsec , comparable to the filament inner width. [ 20 ] A substantial fraction of filaments contain prestellar and protostellar cores, supporting the important role of filaments in gravitationally bound core formation. [ 21 ] Recent studies have suggested that filamentary structures in molecular clouds play a crucial role in the initial conditions of star formation and the origin of the stellar initial mass function (IMF). [ 22 ]
The densest parts of the filaments and clumps are called molecular cores, while the densest molecular cores are called dense molecular cores and have densities in excess of 10 4 to 10 6 particles per cubic centimeter. Typical molecular cores are traced with CO and dense molecular cores are traced with ammonia . The concentration of dust within molecular cores is normally sufficient to block light from background stars so that they appear in silhouette as dark nebulae . [ 23 ]
GMCs are so large that local ones can cover a significant fraction of a constellation; thus they are often referred to by the name of that constellation, e.g. the Orion molecular cloud (OMC) or the Taurus molecular cloud (TMC). These local GMCs are arrayed in a ring in the neighborhood of the Sun coinciding with the Gould Belt . [ 24 ] The most massive collection of molecular clouds in the galaxy forms an asymmetrical ring about the galactic center at a radius of 120 parsecs; the largest component of this ring is the Sagittarius B2 complex. The Sagittarius region is chemically rich and is often used as an exemplar by astronomers searching for new molecules in interstellar space. [ 25 ]
Isolated gravitationally-bound small molecular clouds with masses less than a few hundred times that of the Sun are called Bok globules . The densest parts of small molecular clouds are equivalent to the molecular cores found in GMCs and are often included in the same studies.
In 1984 the Infrared Astronomical Satellite ( IRAS ) identified a new type of diffuse molecular cloud. [ 27 ] These were diffuse filamentary clouds that are visible at high galactic latitudes . These clouds have a typical density of 30 particles per cubic centimetre. [ 28 ] | https://en.wikipedia.org/wiki/Molecular_cloud
Molecular conductance ( G = I / V ), or the conductance of a single molecule , is a physical quantity in molecular electronics . Molecular conductance is dependent on the surrounding conditions (e.g. pH , temperature, pressure), as well as the properties of the measuring device. Many experimental techniques have been developed in an attempt to measure this quantity directly, but theorists and experimentalists still face many challenges. [ 1 ]
Recently, a great deal of progress has been made in the development of reliable conductance-measuring techniques. These techniques can be divided into two categories: molecular film experiments, which measure groups of tens of molecules, and single-molecule-measuring experiments.
Molecular film experiments generally consist of sandwiching a thin layer of molecules between two electrodes, which are used to measure the conductance through the layer. Two of the most successful implementations of this concept have been the bulk electrode approach and the use of nanoelectrodes. In the bulk electrode approach, a molecular film is typically immobilized onto one electrode and an upper electrode is brought into contact with it, allowing for a measure of current flow as a function of applied bias voltage . The nanoelectrode class of experiments, by creatively utilizing equipment such as atomic force microscope tips and small-radius wires, is able to perform the same sorts of current versus applied bias measurements but on a much smaller number of molecules compared to bulk electrodes. For instance, the tip of an atomic force microscope can be used as a top electrode and, given the nano-scale radius of curvature of the tip, the number of molecules measured is drastically cut. The difficulties encountered in these experiments have come mainly from dealing with such thin layers of molecules, which often results in short-circuiting of the electrodes.
More recently, single-molecule-measurement experiments have been developed that are giving experimenters a better look at molecular conductance. These fall into two categories: scanning probe techniques, which involve a fixed electrode, and mechanically formed junction techniques. One example of a mechanically formed junction experiment involves using a movable electrode to make contact with and then pull away from an electrode surface coated with a single layer of molecules. As the electrode is removed from the surface, the molecules that had bonded between the two electrodes begin to detach until eventually one molecule is connected. The atomic-level geometry of the tip-electrode contact has an effect on the conductance and can change from one run of the experiment to the next, so a histogram approach is required. Forming a junction in which the precise contact geometry is known has been one of the main difficulties with this approach.
An important first step toward the goal of building electronic devices on the molecular level is the ability to measure and control the electric current through an individual molecule. Based on the anticipated continuation of Moore's Law , which is expected to carry the miniaturization of transistors on integrated circuits into the atomic scale within the next 10 to 20 years, this goal of single-molecule-level circuit design is likely to become widespread throughout the semiconductor industry.
Other applications focus on the insight provided by these experiments in the area of charge transport, which is a recurrent phenomenon in many chemical and biological processes. This sort of insight gives researchers the ability to read the chemical information stored in a single molecule electronically, which can then be used in a wide variety of chemical and biosensor applications. | https://en.wikipedia.org/wiki/Molecular_conductance |
The molecular configuration of a molecule is the permanent geometry that results from the spatial arrangement of its bonds . [ 1 ] The ability of the same set of atoms to form two or more molecules with different configurations is stereoisomerism . This is distinct from constitutional isomerism which arises from atoms being connected in a different order. Conformers which arise from single bond rotations, if not isolatable as atropisomers , do not count as distinct molecular configurations as the spatial connectivity of bonds is identical.
Enantiomers are molecules having one or more chiral centres that are mirror images of each other. [ 2 ] Chiral centres are designated R or S . If the three groups projecting towards the viewer (with the lowest-priority group pointing away) are arranged clockwise from highest priority to lowest priority, that centre is designated R; if counterclockwise, S. Priority is based on atomic number: atoms with higher atomic number have higher priority. If two molecules with one or more chiral centres differ in all of those centres, they are enantiomers.
Diastereomers form a broader category of distinct molecular configurations. [ 3 ] They usually differ in physical characteristics as well as chemical properties. If two molecules with more than one chiral centre differ in one or more (but not all) of those centres, they are diastereomers. All stereoisomers that are not enantiomers are diastereomers. Diastereomerism also exists in alkenes. Alkenes are designated Z or E depending on group priority on adjacent carbon atoms. E/Z notation describes the absolute stereochemistry of the double bond, while cis/trans notation describes the relative orientations of groups.
Amino acids are designated either L or D depending on the relative arrangement of groups around the stereogenic carbon center. L/D designations are not related to S/R absolute configurations. The amino acids found in biological proteins are almost exclusively L-configured. All L-amino acids except L-cysteine have an S configuration, and glycine is achiral. [ 4 ]
In general, all L-designated amino acids are enantiomers of their D counterparts, except for isoleucine and threonine, which contain two carbon stereocenters, making them diastereomers.
Used as drugs, compounds with different configurations normally have different physiological activity, including the desired pharmacological effect, toxicology and metabolism. [ 5 ] Enantiomeric ratio and purity are important factors in clinical assessments. Racemic mixtures are those that contain equimolar amounts of both enantiomers of a compound. The actions of a racemate and of a single enantiomer differ in most cases.
| https://en.wikipedia.org/wiki/Molecular_configuration
Molecular cytogenetics combines two disciplines, molecular biology and cytogenetics , and involves the analysis of chromosome structure to help distinguish normal and cancer-causing cells. Human cytogenetics began in 1956 when it was discovered that normal human cells contain 46 chromosomes. However, the first microscopic observations of chromosomes were reported by Arnold, Flemming, and Hansemann in the late 1800s. Their work was ignored for decades until the actual chromosome number in humans was discovered as 46. In 1879, Arnold examined sarcoma and carcinoma cells having very large nuclei. Today, the study of molecular cytogenetics can be useful in diagnosing and treating various malignancies such as hematological malignancies, brain tumors, and other precursors of cancer. The field is overall focused on studying the evolution of chromosomes, more specifically the number, structure, function, and origin of chromosome abnormalities. [ 1 ] [ 2 ] It includes a series of techniques referred to as fluorescence in situ hybridization , or FISH, in which DNA probes are labeled with different colored fluorescent tags to visualize one or more specific regions of the genome. Introduced in the 1980s, FISH uses probes with complementary base sequences to locate the presence or absence of the specific DNA regions. FISH can either be performed as a direct approach to metaphase chromosomes or interphase nuclei. Alternatively, an indirect approach can be taken in which the entire genome can be assessed for copy number changes using virtual karyotyping. Virtual karyotypes are generated from arrays made of thousands to millions of probes, and computational tools are used to recreate the genome in silico .
Fluorescence in situ hybridization (FISH) maps out single copy or repetitive DNA sequences through localization labeling of specific nucleic acids. The technique utilizes different DNA probes labeled with fluorescent tags that bind to one or more specific regions of the genome. [ 3 ] It labels all individual chromosomes at every stage of cell division to display structural and numerical abnormalities that may arise throughout the cycle. This is done with a probe that can be locus specific, centromeric, telomeric, or whole-chromosomal. The technique is typically performed on interphase cells and paraffin block tissues. Signals from the fluorescent tags can be seen with microscopy , and mutations can be detected by comparing these signals to those of healthy cells. For this to work, the DNA must be denatured using heat or chemicals to break the hydrogen bonds; this allows hybridization to occur once the two samples are mixed. The fluorescent probes then form new hydrogen bonds with their complementary bases, which can be detected through microscopy. FISH allows one to visualize different parts of the chromosome at different stages of the cell cycle. FISH can either be performed directly on metaphase chromosomes or interphase nuclei. Alternatively, an indirect approach can be taken in which the entire genome is assessed for copy number changes using virtual karyotyping. Virtual karyotypes are generated from microarrays made of thousands to millions of probes, and computational tools are used to recreate the genome in silico . [ 4 ]
Comparative genomic hybridization (CGH), derived from FISH, is used to compare variations in copy number between a biological sample and a reference. CGH was originally developed to observe chromosomal aberrations in tumour cells. This method uses two genomes, a sample and a control, which are labeled fluorescently to distinguish them. [ 5 ] In CGH, DNA is isolated from a tumour sample and biotin is attached. Another labelling protein, digoxigenin, is attached to the reference DNA sample. [ 6 ] The labelled DNA samples are co-hybridized to probes during cell division, which is the most informative time for observing copy number variation. [ 7 ] CGH creates a map that shows the relative abundance of DNA and chromosome number. By comparing the fluorescence of a sample with that of a reference, CGH can point to gains or losses of chromosomal regions. [ 6 ] [ 8 ] CGH differs from FISH in that it does not require a specific target or previous knowledge of the genetic region being analyzed. CGH can also scan an entire genome relatively quickly for various chromosome imbalances, which is helpful for patients with underlying genetic issues when an official diagnosis is not known. This often occurs with hematological cancers.
Array comparative genomic hybridization (aCGH) allows CGH to be performed without cell culture and isolation. Instead, it is performed on glass slides containing small DNA fragments. [ 9 ] Removing the cell culture and isolation step dramatically simplifies and expedites the process. Using similar principles to CGH, the sample DNA is isolated and fluorescently labelled, then co-hybridized to single-stranded probes to generate signals. Thousands of these signals can be detected at once, a process referred to as parallel screening. [ 10 ] Fluorescence ratios between the sample and reference signals are measured, representing the average difference between the amount of each. This shows whether there is more or less sample DNA than expected from the reference.
FISH chromosome in-situ hybridization allows the study of cytogenetics in pre- and postnatal samples and is also widely used in cytogenetic testing for cancer. While cytogenetics is the study of chromosomes and their structure, cytogenetic testing involves the analysis of cells in the blood, tissue, bone marrow, or fluid to identify changes in the chromosomes of an individual. This was often done through karyotyping, and is now done with FISH. This method is commonly used to detect chromosomal deletions or translocations often associated with cancer. FISH is also used for melanocytic lesions, distinguishing atypical melanocytic lesions from malignant melanoma. [ 5 ]
Cancer cells often accumulate complex chromosomal structural changes such as loss, duplication, inversion or movement of a segment. [ 11 ] When using FISH, any changes to a chromosome will be made visible through discrepancies between fluorescent-labelled cancer chromosomes and healthy chromosomes. [ 11 ] The findings of these cytogenetic experiments can shed light on the genetic causes for the cancer and can locate potential therapeutic targets. [ 12 ]
Molecular cytogenetics can also be used as a diagnostic tool for congenital syndromes in which the underlying genetic causes of the disease are unknown. [ 13 ] Analysis of a patient's chromosome structure can reveal causative changes. New molecular biology methods developed in the past two decades such as next generation sequencing and RNA-seq have largely replaced molecular cytogenetics in diagnostics, but recently the use of derivatives of FISH such as multicolour FISH and multicolour banding (mBAND) has been growing in medical applications. [ 14 ]
One current project involving molecular cytogenetics is the Cancer Genome Characterization Initiative (CGCI) , a genomic research effort on rare cancers. [ 15 ] The CGCI is a group interested in describing the genetic abnormalities of some rare cancers by employing advanced sequencing of genomes, exomes, and transcriptomes, which may ultimately play a role in cancer pathogenesis. [ 15 ] Currently, the CGCI has elucidated some previously undetermined genetic alterations in medulloblastoma and B-cell non-Hodgkin lymphoma . The next steps for the CGCI are to identify genomic alterations in HIV+ tumors and in Burkitt's Lymphoma .
Some high-throughput sequencing techniques that are used by the CGCI include: whole genome sequencing , transcriptome sequencing, ChIP-sequencing , and the Illumina Infinium MethylationEPIC BeadChip . [ 16 ] | https://en.wikipedia.org/wiki/Molecular_cytogenetics
A molecular demon or biological molecular machine is a biological macromolecule that resembles and seems to have the same properties as Maxwell's demon . These macromolecules gather information in order to recognize their substrate or ligand within a myriad of other molecules floating in the intracellular or extracellular plasm. This molecular recognition represents an information gain which is equivalent to an energy gain or decrease in entropy . When the demon is reset i.e. when the ligand is released, the information is erased, energy is dissipated and entropy increases obeying the second law of thermodynamics . [ 1 ] The difference between biological molecular demons and the thought experiment of Maxwell's demon is the latter's apparent violation of the second law. [ 2 ] [ 3 ]
The molecular demon switches mainly between two conformations . The first, or basic state, upon recognizing and binding the ligand or substrate following an induced fit , undergoes a change in conformation which leads to the second quasi-stable state: the protein-ligand complex . In order to reset the protein to its original, basic state, it needs ATP . When ATP is consumed or hydrolyzed, the ligand is released and the demon acquires again information reverting to its basic state. The cycle may start again. [ 1 ]
The second law of thermodynamics is a statistical law. Hence, occasionally, single molecules may not obey the law. All molecules are subject to the molecular storm, i.e. the random movement of molecules in the cytoplasm and the extracellular fluid . Molecular demons or molecular machines, either biological or artificially constructed, are continuously pushed around by this random thermal motion, sometimes in a direction that violates the law. When this happens, and the macromolecule can be prevented from sliding back to its original state from the movement it made or the conformational change it underwent, as is the case with molecular demons, the molecule works as a ratchet; [ 4 ] [ 5 ] it is then possible to observe, for example, the creation of a gradient of ions or other molecules across the cell membrane , the movement of motor proteins along filament proteins, or the accumulation of products deriving from an enzymatic reaction. Even some artificial molecular machines and experiments are capable of forming a ratchet, apparently defying the second law of thermodynamics. [ 6 ] [ 7 ] All these molecular demons have to be reset to their original state by consuming external energy that is subsequently dissipated as heat. This final step, in which entropy increases, is therefore irreversible. If the demons were reversible, no work would be done. [ 5 ]
An example of artificial ratchets is the work by Serreli et al. (2007). [ 6 ] Serreli et al. constructed a nanomachine , a rotaxane , that consists of a ring-shaped molecule that moves along a tiny molecular axle between two equal compartments, A and B. The normal, random movement of molecules sends the ring back and forth. Since the rings move freely, half of the rotaxanes have the ring at site B and the other half at site A. But the system used by Serreli et al. has a chemical gate on the rotaxane molecule, and the axle contains two sticky parts, one on either side of the gate. This gate opens when the ring is close by. The sticky part at B is close to the gate, and the rings pass more readily from B to A than from A to B. They obtained a deviation from equilibrium of 70:50 for A and B respectively, a bit like Maxwell's demon. But this system works only when light is shone on it and thus needs external energy, just like molecular demons.
Landauer stated that information is physical. [ 8 ] His principle sets fundamental thermodynamical constraints for classical and quantum information processing. Much effort has been dedicated to incorporating information into thermodynamics and measuring the entropic and energetic costs of manipulating information. Gaining information decreases entropy, which has an energy cost; this energy has to be collected from the environment. [ 9 ] Landauer established the equivalence of one bit of information with an energy of kT ln 2, where k is the Boltzmann constant and T is room temperature. This bound is called the Landauer limit. [ 10 ] Erasing information increases entropy instead. [ 11 ] Toyabe et al. (2010) were able to demonstrate experimentally that information can be converted into free energy. Their elegant experiment consists of a microscopic particle on a spiral-staircase-like potential. The step has a height corresponding to k B T, where k B is the Boltzmann constant and T is the temperature. The particle jumps between steps due to random thermal motions. Since the downward jumps following the gradient are more frequent than the upward ones, the particle falls down the stairs on average. But when an upward jump is observed, a block is placed behind the particle to prevent it from falling, just like in a ratchet. This way it should climb the stairs. Information is gained by measuring the particle's location, which is equivalent to a gain in energy, i.e. a decrease in entropy. They used a generalized equation for the second law that contains a variable for information:
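$$\langle W \rangle \geq \Delta F - kT\,I$$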
ΔF is the free energy between states , W is the work done on the system , k is the Boltzmann constant , T is temperature, and I is the mutual information content obtained by measurements. The brackets indicate that the energy is an average. [ 7 ] They could convert the equivalent of one bit of information to 0.28 kT ln 2 of energy or, in other words, they could exploit more than a quarter of the information’s energy content. [ 12 ]
In his book Chance and Necessity, Jacques Monod described the functions of proteins and other molecules capable of recognizing with 'elective discrimination' a substrate or ligand or other molecule. [ 2 ] In describing these molecules he introduced the term 'cognitive' functions, the same cognitive functions that Maxwell attributed to his demon. Werner Loewenstein goes further and names these molecules ' molecular demon ' or 'demon' in short. [ 1 ]
Naming the biological molecular machines in this way makes it easier to understand the similarities between these molecules and Maxwell's demon.
Because of this real discriminative if not 'cognitive' property, Jacques Monod attributed a teleonomic function to these biological complexes. Teleonomy implies the idea of an oriented, coherent and constructive activity. Proteins therefore must be considered essential molecular agents in the teleonomic performances of all living beings. | https://en.wikipedia.org/wiki/Molecular_demon |
Molecular descriptors play a fundamental role in chemistry, pharmaceutical sciences, environmental protection policy , and health research, as well as in quality control, being the way molecules, thought of as real bodies, are transformed into numbers, allowing some mathematical treatment of the chemical information contained in the molecule. This was defined by Todeschini and Consonni as:
" The molecular descriptor is the final result of a logic and mathematical procedure which transforms chemical information encoded within a symbolic representation of a molecule into a useful number or the result of some standardized experiment. " [ 1 ]
By this definition, the molecular descriptors are divided into two main categories: experimental measurements , such as log P , molar refractivity , dipole moment , polarizability , and, in general, additive physico-chemical properties, and theoretical molecular descriptors , which are derived from a symbolic representation of the molecule and can be further classified according to the different types of molecular representation. [ 2 ]
The main classes of theoretical molecular descriptors are: 1) 0D-descriptors (i.e. constitutional descriptors, count descriptors), 2) 1D-descriptors (i.e. lists of structural fragments, fingerprints), 3) 2D-descriptors (i.e. graph invariants), 4) 3D-descriptors (such as, for example, 3D-MoRSE descriptors, WHIM descriptors, GETAWAY descriptors, quantum-chemical descriptors, size, steric, surface and volume descriptors), 5) 4D-descriptors (such as those derived from GRID or CoMFA methods, Volsurf). The spread of artificial intelligence and machine learning into computational chemistry has also led to various attempts to uncover new descriptors or to find the most predictive ones among a set of candidates. [ 3 ] [ 4 ]
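As a minimal illustration of the simplest (0D, constitutional) end of this hierarchy, the following Python sketch computes two count-based descriptors, the atom count and the molecular weight, from a molecular formula. The formula parser and the small mass table are illustrative assumptions for this sketch, not part of any standard descriptor package.

```python
import re

# Average atomic masses for a few common elements (illustrative subset).
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999, "S": 32.06}

def constitutional_descriptors(formula: str) -> dict:
    """Compute two 0D descriptors (atom count, molecular weight) from a formula.

    Both are constitutional descriptors: invariant to atom numbering,
    conformation, and spatial reference frame, but highly degenerate,
    since many different molecules share the same values.
    """
    counts = {}
    for element, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        if element:
            counts[element] = counts.get(element, 0) + (int(number) if number else 1)
    atom_count = sum(counts.values())
    mol_weight = sum(ATOMIC_MASS[el] * n for el, n in counts.items())
    return {"atom_count": atom_count, "molecular_weight": mol_weight}

# Ethanol (C2H6O): 9 atoms, ~46.07 g/mol.
print(constitutional_descriptors("C2H6O"))
```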
The invariance properties of molecular descriptors can be defined as the ability of the algorithm for their calculation to give a descriptor value that is independent of the particular characteristics of the molecular representation, such as atom numbering or labeling, spatial reference frame, molecular conformations, etc. Invariance to molecular numbering or labeling is assumed as a minimal basic requirement for any descriptor. [ citation needed ]
Two other important invariance properties, translational invariance and rotational invariance , are the invariance of a descriptor value to any translation or rotation of the molecules in the chosen reference frame. These last invariance properties are required for the 3D-descriptors. [ citation needed ]
Degeneracy refers to the ability of a descriptor to avoid equal values for different molecules.
In this sense, descriptors can show no degeneracy at all, low, intermediate, or high degeneracy.
For example, the number of molecule atoms and the molecular weights are high degeneracy descriptors, while, usually, 3D-descriptors show low or no degeneracy at all. [ citation needed ]
Molecular descriptors are numerical values that encapsulate chemical information about molecules, facilitating their mathematical analysis. Given the vast array of available descriptors, it is essential to establish foundational principles to ensure their reliability and utility.
A robust molecular descriptor should: [ 5 ] [ 6 ]
Beyond these foundational criteria, to be practically valuable, a molecular descriptor should also:
The initial set of principles ensures that a descriptor is well-defined and invariant to manipulations that don’t alter the intrinsic molecular structure. Historically, many descriptors were designed for small organic molecules. However, contemporary challenges necessitate descriptors that can be applied to diverse compounds, including salts, ionic liquids, peptides, polymers, and nanostructures.
The subsequent set of guidelines emphasizes the descriptor’s practical utility. An effective descriptor should be interpretable, correlate with experimental properties, and provide unique information not captured by other descriptors. Continuity and low degeneracy are crucial, as they ensure the descriptor can sensitively reflect minor structural variations. Ultimately, the information a descriptor provides is contingent upon the chosen molecular representation and its alignment with the specific property or activity being studied. [ 2 ]
Here is a selection of commercial and free descriptor calculation tools. | https://en.wikipedia.org/wiki/Molecular_descriptor
Molecular design software is notable software for molecular modeling that provides special support for developing molecular models de novo .
In contrast to the usual molecular modeling programs, such as for molecular dynamics and quantum chemistry , such software directly supports the aspects related to constructing molecular models, including: | https://en.wikipedia.org/wiki/Molecular_design_software |
Molecular diffusion is the motion of atoms , molecules , or other particles of a gas or liquid at temperatures above absolute zero . The rate of this movement is a function of temperature, the viscosity of the fluid, and the size (mass) of the particles. This type of diffusion explains the net flux of molecules from a region of higher concentration to one of lower concentration.
Once the concentrations are equal the molecules continue to move, but since there is no concentration gradient the process of molecular diffusion has ceased and is instead governed by the process of self-diffusion , originating from the random motion of the molecules. The result of diffusion is a gradual mixing of material such that the distribution of molecules is uniform. Since the molecules are still in motion, but an equilibrium has been established, the result of molecular diffusion is called a "dynamic equilibrium". In a phase with uniform temperature, absent external net forces acting on the particles, the diffusion process will eventually result in complete mixing.
Consider two systems S 1 and S 2 at the same temperature and capable of exchanging particles . If there is a change in the potential energy of a system, for example μ 1 > μ 2 (where μ is the chemical potential ), an energy flow will occur from S 1 to S 2 , because nature always prefers low energy and maximum entropy .
Molecular diffusion is typically described mathematically using Fick's laws of diffusion .
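As a sketch of how Fick's second law, ∂C/∂t = D ∂²C/∂x², can be integrated numerically, the following Python example uses an explicit finite-difference scheme; the diffusion coefficient, grid, and closed-box boundaries are illustrative assumptions, and the timestep is chosen to respect the stability condition D·Δt/Δx² ≤ 1/2.

```python
import numpy as np

# Illustrative parameters (assumptions, not taken from the article).
D = 1e-9        # diffusion coefficient, m^2/s (typical small molecule in water)
L = 1e-3        # domain length, m
nx = 101        # number of grid points
dx = L / (nx - 1)
dt = 0.4 * dx * dx / D   # explicit scheme is stable for D*dt/dx**2 <= 0.5

# Initial condition: all solute in the left half of a closed box.
C = np.zeros(nx)
C[: nx // 2] = 1.0

for _ in range(5000):
    # Fick's second law, dC/dt = D * d2C/dx2, via central differences.
    lap = (C[2:] - 2.0 * C[1:-1] + C[:-2]) / dx**2
    C[1:-1] += dt * D * lap
    C[0], C[-1] = C[1], C[-2]  # zero-flux boundaries: no solute leaves the box

# The profile relaxes toward the uniform "dynamic equilibrium" value of 0.5.
print(C.min(), C.max())
```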
Diffusion is of fundamental importance in many disciplines of physics, chemistry, and biology. Some example applications of diffusion:
Diffusion is part of the transport phenomena . Among mass transport mechanisms, molecular diffusion is known as one of the slower ones.
In cell biology , diffusion is a main form of transport for necessary materials such as amino acids within cells. [ 1 ] Diffusion of solvents, such as water, through a semipermeable membrane is classified as osmosis .
Metabolism and respiration rely in part upon diffusion in addition to bulk or active processes. For example, in the alveoli of mammalian lungs , due to differences in partial pressures across the alveolar-capillary membrane, oxygen diffuses into the blood and carbon dioxide diffuses out. Lungs contain a large surface area to facilitate this gas exchange process.
Fundamentally, two types of diffusion are distinguished: tracer diffusion (self-diffusion), which occurs in the absence of a concentration gradient, and chemical diffusion, which occurs in the presence of a concentration gradient and results in a net transport of mass.
The diffusion coefficients for these two types of diffusion are generally different because the diffusion coefficient for chemical diffusion is binary and it includes the effects due to the correlation of the movement of the different diffusing species.
Because chemical diffusion is a net transport process, the system in which it takes place is not an equilibrium system (i.e. it is not at rest yet). Many results in classical thermodynamics are not easily applied to non-equilibrium systems. However, there sometimes occur so-called quasi-steady states, where the diffusion process does not change in time, and where classical results may locally apply. As the name suggests, this process is not a true equilibrium since the system is still evolving.
Non-equilibrium fluid systems can be successfully modeled with Landau-Lifshitz fluctuating hydrodynamics. In this theoretical framework, diffusion is due to fluctuations whose dimensions range from the molecular scale to the macroscopic scale. [ 3 ]
Chemical diffusion increases the entropy of a system, i.e. diffusion is a spontaneous and irreversible process. Particles can spread out by diffusion, but will not spontaneously re-order themselves (absent changes to the system, assuming no creation of new chemical bonds, and absent external forces acting on the particle).
Collective diffusion is the diffusion of a large number of particles, most often within a solvent .
Contrary to Brownian motion , which is the diffusion of a single particle, interactions between particles may have to be considered, unless the particles form an ideal mix with their solvent (ideal mix conditions correspond to the case where the interactions between the solvent and particles are identical to the interactions between particles and the interactions between solvent molecules; in this case, the particles do not interact when inside the solvent).
In the case of an ideal mix, the particle diffusion equation holds true and the diffusion coefficient D , the speed of diffusion in the particle diffusion equation, is independent of particle concentration. In other cases, resulting interactions between particles within the solvent will account for the following effects:
Transport of material in stagnant fluid or across streamlines of a fluid in a laminar flow occurs by molecular diffusion. Two adjacent compartments separated by a partition, containing pure gases A or B, may be envisaged. Random movement of all molecules occurs so that after a period molecules are found remote from their original positions. If the partition is removed, some molecules of A move towards the region occupied by B, their number depending on the number of molecules at the region considered. Concurrently, molecules of B diffuse toward regions formerly occupied by pure A.
Finally, complete mixing occurs. Before this point in time, a gradual variation in the concentration of A occurs along an axis, designated x, which joins the original compartments. This variation is expressed mathematically as −dC A /dx, where C A is the concentration of A. The negative sign arises because the concentration of A decreases as the distance x increases. Similarly, the variation in the concentration of gas B is −dC B /dx. The rate of diffusion of A, N A , depends on the concentration gradient and the average velocity with which the molecules of A move in the x direction. This relationship is expressed by Fick's law
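$$N_A = -D\,\frac{dC_A}{dx}$$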
where D is the diffusivity of A through B, proportional to the average molecular velocity and therefore dependent on the temperature and pressure of the gases. The rate of diffusion N A is usually expressed as the number of moles diffusing across unit area in unit time. As with the basic equation of heat transfer, this indicates that the rate of transfer is directly proportional to the driving force, which is the concentration gradient.
This basic equation applies to a number of situations. Restricting discussion exclusively to steady state conditions, in which neither dC A /dx or dC B /dx change with time, equimolecular counterdiffusion is considered first.
If no bulk flow occurs in an element of length dx, the rates of diffusion of two ideal gases (of similar molar volume) A and B must be equal and opposite, that is N A = −N B .
The partial pressure of A changes by dP A over the distance dx. Similarly, the partial pressure of B changes by dP B . As there is no difference in total pressure across the element (no bulk flow), we have
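$$\frac{dP_A}{dx} = -\frac{dP_B}{dx}$$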
For an ideal gas the partial pressure is related to the molar concentration by the relation
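$$P_A V = n_A RT$$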
where n A is the number of moles of gas A in a volume V . As the molar concentration C A is equal to n A / V , therefore
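$$C_A = \frac{P_A}{RT}$$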
Consequently, for gas A,
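$$N_A = -\frac{D_{AB}}{RT}\,\frac{dP_A}{dx}$$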
where D AB is the diffusivity of A in B. Similarly,
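$$N_B = -\frac{D_{BA}}{RT}\,\frac{dP_B}{dx}$$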
Considering that dP A /dx = −dP B /dx, it follows that D AB = D BA = D. If the partial pressure of A at x 1 is P A 1 and at x 2 is P A 2 , integration of the above equation gives
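$$N_A = \frac{D\,(P_{A1} - P_{A2})}{RT\,(x_2 - x_1)}$$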
A similar equation may be derived for the counterdiffusion of gas B. | https://en.wikipedia.org/wiki/Molecular_diffusion |
Molecular distillation is a type of short-path vacuum distillation , characterized by an extremely low vacuum pressure, 0.01 torr or below, which is performed using a molecular still . [ 1 ] It is a process of separation, purification and concentration of natural products and of complex, thermally sensitive molecules, for example vitamins and polyunsaturated fatty acids. The process is characterized by short-term exposure of the distillate liquid to high temperatures under high vacuum (around 10 −4 mmHg) in the distillation column, and by a small distance between the evaporator and the condenser, around 2 cm. [ 2 ] In molecular distillation, fluids are in the free molecular flow regime, i.e. the mean free path of molecules is comparable to the size of the equipment. [ 3 ] The gaseous phase no longer exerts significant pressure on the substance to be evaporated, and consequently, the rate of evaporation no longer depends on pressure. The motion of molecules is in the line of sight, because they no longer form a continuous gas. Thus, a short path between the hot surface and the cold surface is necessary, typically achieved by suspending a hot plate covered with a film of feed next to a cold plate with a line of sight in between.
This process has the advantages of avoiding the problem of toxicity that occurs in techniques that use solvents as the separating agent, and of minimizing losses due to thermal decomposition. It can also be used in a continuous-feed process to harvest distillate without having to break the vacuum.
Molecular distillation is used industrially for the purification of oils. [ 4 ] [ 5 ] It is also used to enrich borage oil in γ-linolenic acid (GLA) and to recover tocopherols from deodorizer distillate of soybean oil (DDSO). [ 2 ] Molecular stills were historically used by Wallace Carothers in the synthesis of larger polymers : water, a reaction product, interfered with polymerization by undoing the reaction via hydrolysis, but it could be removed by the molecular still. [ 6 ] | https://en.wikipedia.org/wiki/Molecular_distillation
Molecular drive is a term coined by Gabriel Dover in 1982 to describe evolutionary processes that change the genetic composition of a population through DNA turnover mechanisms. [ 1 ] [ 2 ] [ 3 ] Molecular drive operates independently of natural selection and genetic drift .
The best-known such process is the concerted evolution of genes present in many tandem copies, such as those for ribosomal RNAs or silk moth egg shell chorion proteins, in sexually reproducing species. The concept has been proposed to extend to the diversification of multigene families . [ 2 ] The mechanisms involved include gene conversion , unequal crossing-over , transposition , slippage replication and RNA-mediated exchanges. Because mutations changing the sequence of one copy are less common than deletions , duplications and replacement of one copy by another, the copies gradually come to resemble each other much more than they would if they had been evolving independently.
Concerted evolution can be unbiased, in which case every version has an equal probability of being the one that replaces the others. However, if the molecular events have any bias favouring one version of the sequence over others, that version will dominate the process and eventually replace the others. The name 'molecular drive' reflects the similarity of the process with what was originally the better-known process of meiotic drive .
Molecular drive can also act in bacteria , where parasexual processes such as natural transformation cause DNA turnover.
According to Dover , TRAM is a genetic system that has features of non-Mendelian inheritance: Turnover, copy number and functional Redundancy, And Modulation. To date, all regulatory regions ( promoters ) and genes that have been examined in detail at the molecular level have TRAM characteristics. As such, part of their evolutionary history will have been influenced by the molecular drive process.
According to Dover , Adoptation is an evolved feature of an organism that contributes to its viability and reproduction (established by molecular drive) and that adopts some previously inaccessible component of the environment. | https://en.wikipedia.org/wiki/Molecular_drive |
Molecular dynamics ( MD ) is a computer simulation method for analyzing the physical movements of atoms and molecules . The atoms and molecules are allowed to interact for a fixed period of time, giving a view of the dynamic "evolution" of the system. In the most common version, the trajectories of atoms and molecules are determined by numerically solving Newton's equations of motion for a system of interacting particles, where forces between the particles and their potential energies are often calculated using interatomic potentials or molecular mechanical force fields . The method is applied mostly in chemical physics , materials science , and biophysics .
Because molecular systems typically consist of a vast number of particles, it is impossible to determine the properties of such complex systems analytically; MD simulation circumvents this problem by using numerical methods. However, long MD simulations are mathematically ill-conditioned , generating cumulative errors in numerical integration that can be minimized with proper selection of algorithms and parameters, but not eliminated.
For systems that obey the ergodic hypothesis , the evolution of one molecular dynamics simulation may be used to determine the macroscopic thermodynamic properties of the system: the time averages of an ergodic system correspond to microcanonical ensemble averages. MD has also been termed " statistical mechanics by numbers" and " Laplace 's vision of Newtonian mechanics " of predicting the future by animating nature's forces [ 1 ] and allowing insight into molecular motion on an atomic scale.
MD was originally developed in the early 1950s, following earlier successes with Monte Carlo simulations —which themselves date back to the eighteenth century, in the Buffon's needle problem for example—but was popularized for statistical mechanics at Los Alamos National Laboratory by Marshall Rosenbluth and Nicholas Metropolis in what is known today as the Metropolis–Hastings algorithm . Interest in the time evolution of N-body systems dates much earlier to the seventeenth century, beginning with Isaac Newton , and continued into the following century largely with a focus on celestial mechanics and issues such as the stability of the Solar System . Many of the numerical methods used today were developed during this time period, which predates the use of computers; for example, the most common integration algorithm used today, the Verlet integration algorithm, was used as early as 1791 by Jean Baptiste Joseph Delambre . Numerical calculations with these algorithms can be considered to be MD done "by hand".
As early as 1941, integration of the many-body equations of motion was carried out with analog computers . Some undertook the labor-intensive work of modeling atomic motion by constructing physical models, e.g., using macroscopic spheres. The aim was to arrange them in such a way as to replicate the structure of a liquid and use this to examine its behavior. J.D. Bernal describes this process in 1962, writing: [ 2 ]
... I took a number of rubber balls and stuck them together with rods of a selection of different lengths ranging from 2.75 to 4 inches. I tried to do this in the first place as casually as possible, working in my own office, being interrupted every five minutes or so and not remembering what I had done before the interruption.
Following the discovery of microscopic particles and the development of computers, interest expanded beyond the proving ground of gravitational systems to the statistical properties of matter. In an attempt to understand the origin of irreversibility , Enrico Fermi proposed in 1953, and published in 1955, [ 3 ] the use of the early computer MANIAC I , also at Los Alamos National Laboratory , to solve the time evolution of the equations of motion for a many-body system subject to several choices of force laws. Today, this seminal work is known as the Fermi–Pasta–Ulam–Tsingou problem . The time evolution of the energy from the original work is shown in the figure to the right.
In 1957, Berni Alder and Thomas Wainwright used an IBM 704 computer to simulate perfectly elastic collisions between hard spheres . [ 4 ] In 1960, in perhaps the first realistic simulation of matter, J.B. Gibson et al . simulated radiation damage of solid copper by using a Born–Mayer type of repulsive interaction along with a cohesive surface force. [ 5 ] In 1964, Aneesur Rahman published simulations of liquid argon that used a Lennard-Jones potential ; calculations of system properties, such as the coefficient of self-diffusion , compared well with experimental data. [ 6 ] Today, the Lennard-Jones potential is still one of the most frequently used intermolecular potentials . [ 7 ] [ 8 ] It is used for describing simple substances (a.k.a. Lennard-Jonesium [ 9 ] [ 10 ] [ 11 ] ) for conceptual and model studies and as a building block in many force fields of real substances. [ 12 ] [ 13 ]
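A minimal sketch of the Lennard-Jones pair interaction discussed above, in Python; the reduced units (ε = σ = 1) are a common convention, and the function below is an illustrative implementation rather than any particular package's API.

```python
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair potential U(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)
    and the corresponding radial force F(r) = -dU/dr (positive = repulsive)."""
    sr6 = (sigma / r) ** 6
    u = 4.0 * epsilon * (sr6 * sr6 - sr6)
    f = 24.0 * epsilon * (2.0 * sr6 * sr6 - sr6) / r
    return u, f

# The potential minimum lies at r = 2**(1/6) * sigma, where the force
# vanishes and U = -epsilon.
r_min = 2.0 ** (1.0 / 6.0)
print(lennard_jones(r_min))  # approximately (-1.0, 0.0)
```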
First used in theoretical physics , the molecular dynamics method gained popularity in materials science soon afterward, and since the 1970s it has also been commonly used in biochemistry and biophysics . MD is frequently used to refine 3-dimensional structures of proteins and other macromolecules based on experimental constraints from X-ray crystallography or NMR spectroscopy . In physics, MD is used to examine the dynamics of atomic-level phenomena that cannot be observed directly, such as thin film growth and ion subplantation, and to examine the physical properties of nanotechnological devices that have not or cannot yet be created. In biophysics and structural biology , the method is frequently applied to study the motions of macromolecules such as proteins and nucleic acids , which can be useful for interpreting the results of certain biophysical experiments and for modeling interactions with other molecules, as in ligand docking . In principle, MD can be used for ab initio prediction of protein structure by simulating folding of the polypeptide chain from a random coil .
The results of MD simulations can be tested through comparison to experiments that measure molecular dynamics, of which a popular method is NMR spectroscopy. MD-derived structure predictions can be tested through community-wide experiments in Critical Assessment of Protein Structure Prediction ( CASP ), although the method has historically had limited success in this area. Michael Levitt , who shared the Nobel Prize partly for the application of MD to proteins, wrote in 1999 that CASP participants usually did not use the method due to "... a central embarrassment of molecular mechanics , namely that energy minimization or molecular dynamics generally leads to a model that is less like the experimental structure". [ 14 ] Improvements in computational resources permitting more and longer MD trajectories, combined with modern improvements in the quality of force field parameters, have yielded some improvements in both structure prediction and homology model refinement, without reaching the point of practical utility in these areas; many identify force field parameters as a key area for further development. [ 15 ] [ 16 ] [ 17 ]
MD simulation has been reported for pharmacophore development and drug design . [ 18 ] For example, Pinto et al . implemented MD simulations of Bcl-xL complexes to calculate average positions of critical amino acids involved in ligand binding. [ 19 ] Carlson et al . implemented molecular dynamics simulations to identify compounds that complement a receptor while causing minimal disruption to the conformation and flexibility of the active site. Snapshots of the protein at constant time intervals during the simulation were overlaid to identify conserved binding regions (conserved in at least three out of eleven frames) for pharmacophore development. Spyrakis et al . relied on a workflow of MD simulations, fingerprints for ligands and proteins (FLAP) and linear discriminant analysis (LDA) to identify the best ligand-protein conformations to act as pharmacophore templates based on retrospective ROC analysis of the resulting pharmacophores. In an attempt to ameliorate structure-based drug discovery modeling, vis-à-vis the need for many modeled compounds, Hatmal et al . proposed a combination of MD simulation and ligand-receptor intermolecular contacts analysis to discern critical intermolecular contacts (binding interactions) from redundant ones in a single ligand–protein complex. Critical contacts can then be converted into pharmacophore models that can be used for virtual screening. [ 20 ]
An important factor is intramolecular hydrogen bonds , [ 21 ] which are not explicitly included in modern force fields, but described as Coulomb interactions of atomic point charges . [ citation needed ] This is a crude approximation because hydrogen bonds have a partially quantum mechanical and chemical nature. Furthermore, electrostatic interactions are usually calculated using the dielectric constant of a vacuum , even though the surrounding aqueous solution has a much higher dielectric constant. Thus, using the macroscopic dielectric constant at short interatomic distances is questionable. Finally, van der Waals interactions in MD are usually described by Lennard-Jones potentials [ 22 ] [ 23 ] based on the Fritz London theory that is only applicable in a vacuum. [ citation needed ] However, all types of van der Waals forces are ultimately of electrostatic origin and therefore depend on dielectric properties of the environment . [ 24 ] The direct measurement of attraction forces between different materials (as Hamaker constant ) shows that "the interaction between hydrocarbons across water is about 10% of that across vacuum". [ 24 ] The environment-dependence of van der Waals forces is neglected in standard simulations, but can be included by developing polarizable force fields.
The design of a molecular dynamics simulation should account for the available computational power. Simulation size ( n = number of particles), timestep, and total time duration must be selected so that the calculation can finish within a reasonable time period. However, the simulations should be long enough to be relevant to the time scales of the natural processes being studied. To make statistically valid conclusions from the simulations, the time span simulated should match the kinetics of the natural process. Otherwise, it is analogous to making conclusions about how a human walks when only looking at less than one footstep. Most scientific publications about the dynamics of proteins and DNA [ 25 ] [ 26 ] use data from simulations spanning nanoseconds (10⁻⁹ s) to microseconds (10⁻⁶ s). To obtain these simulations, several CPU-days to CPU-years are needed. Parallel algorithms allow the load to be distributed among CPUs ; an example is the spatial or force decomposition algorithm. [ 27 ]
During a classical MD simulation, the most CPU-intensive task is the evaluation of the potential as a function of the particles' internal coordinates. Within that energy evaluation, the most expensive part is the non-bonded or non-covalent one. In big O notation , common molecular dynamics simulations scale as O(n²) if all pair-wise electrostatic and van der Waals interactions must be accounted for explicitly. This computational cost can be reduced by employing electrostatics methods such as particle mesh Ewald summation ( O(n log n) ), particle–particle-particle–mesh ( P3M ), or good spherical cutoff methods ( O(n) ). [ citation needed ]
Another factor that impacts the total CPU time needed by a simulation is the size of the integration timestep. This is the time length between evaluations of the potential. The timestep must be chosen small enough to avoid discretization errors (i.e., smaller than the period related to the fastest vibrational frequency in the system). Typical timesteps for classical MD are on the order of 1 femtosecond (10⁻¹⁵ s). This value may be extended by using algorithms such as the SHAKE constraint algorithm , which fix the vibrations of the fastest atoms (e.g., hydrogens) into place. Multiple time scale methods have also been developed, which allow extended times between updates of slower long-range forces. [ 28 ] [ 29 ] [ 30 ]
For simulating molecules in a solvent , a choice should be made between an explicit and an implicit solvent . Explicit solvent particles (such as the TIP3P , SPC/E and SPC-f water models) must be computed explicitly by the force field, at considerable expense, while implicit solvents use a mean-field approach. Using an explicit solvent is computationally expensive, requiring the inclusion of roughly ten times more particles in the simulation. But the granularity and viscosity of an explicit solvent are essential to reproduce certain properties of the solute molecules. This is especially important to reproduce chemical kinetics .
In all kinds of molecular dynamics simulations, the simulation box size must be large enough to avoid boundary condition artifacts. Boundary conditions are often treated by choosing fixed values at the edges (which may cause artifacts), or by employing periodic boundary conditions in which one side of the simulation loops back to the opposite side, mimicking a bulk phase (which may cause artifacts too).
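As an illustration, the sketch below shows how periodic boundary conditions are commonly applied in practice: coordinates are wrapped back into the box, and the minimum image convention picks the nearest periodic copy of a particle when computing pairwise displacements. This is a minimal sketch assuming a cubic box; the function names are illustrative, not taken from any particular MD package.

```python
import numpy as np

def wrap_positions(positions, box_length):
    """Wrap particle coordinates back into a cubic box of side box_length."""
    return positions % box_length

def minimum_image_displacement(r_i, r_j, box_length):
    """Displacement from particle j to particle i using the nearest periodic image."""
    d = r_i - r_j
    # Shift each component by a whole number of box lengths so it lies in [-L/2, L/2)
    d -= box_length * np.round(d / box_length)
    return d
```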
In the microcanonical ensemble , the system is isolated from changes in moles (N), volume (V), and energy (E). It corresponds to an adiabatic process with no heat exchange. A microcanonical molecular dynamics trajectory may be seen as an exchange of potential and kinetic energy, with total energy being conserved. For a system of N particles with coordinates X and velocities V, the following pair of first-order differential equations may be written in Newton's notation as

F(X) = −∇U(X) = M V̇(t)

V(t) = Ẋ(t)
The potential energy function U(X) of the system is a function of the particle coordinates X. It is referred to simply as the potential in physics, or the force field in chemistry. The first equation comes from Newton's laws of motion ; the force F acting on each particle in the system can be calculated as the negative gradient of U(X).
For every time step, each particle's position X and velocity V may be integrated with a symplectic integrator method such as Verlet integration . The time evolution of X and V is called a trajectory. Given the initial positions (e.g., from theoretical knowledge) and velocities (e.g., randomized Gaussian ), we can calculate all future (or past) positions and velocities.
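A minimal sketch of one step of the widely used velocity Verlet scheme follows, assuming a user-supplied force function that returns the negative gradient of the potential; all names and array shapes are illustrative assumptions rather than the API of any particular MD package.

```python
import numpy as np

def velocity_verlet_step(x, v, masses, forces_fn, dt):
    """Advance positions x and velocities v (both shape (N, 3)) by one timestep dt.

    forces_fn(x) must return F = -dU/dx with the same shape as x;
    masses holds one mass per particle (or a single scalar).
    """
    m = np.asarray(masses, dtype=float).reshape(-1, 1)
    f = forces_fn(x)
    v_half = v + 0.5 * dt * f / m          # first half-kick
    x_new = x + dt * v_half                # drift
    f_new = forces_fn(x_new)               # forces at the new positions
    v_new = v_half + 0.5 * dt * f_new / m  # second half-kick
    return x_new, v_new
```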
One frequent source of confusion is the meaning of temperature in MD. Commonly we have experience with macroscopic temperatures, which involve a huge number of particles, but temperature is a statistical quantity. If there is a large enough number of atoms, statistical temperature can be estimated from the instantaneous temperature , which is found by equating the kinetic energy of the system to n k_B T/2, where n is the number of degrees of freedom of the system.
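A sketch of this instantaneous-temperature estimate, obtained by equating the total kinetic energy to n k_B T/2; the SI units, array shapes, and the choice of n = 3N (ignoring constraints and removed center-of-mass momentum) are simplifying assumptions.

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def instantaneous_temperature(velocities, masses):
    """Estimate T by equating sum(m v^2 / 2) to n_dof * k_B * T / 2.

    velocities: (N, 3) array in m/s; masses: (N,) array in kg.
    n_dof is taken as 3N here, ignoring constraints and removed momenta.
    """
    kinetic = 0.5 * np.sum(masses[:, None] * velocities**2)
    n_dof = 3 * len(masses)
    return 2.0 * kinetic / (n_dof * K_B)
```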
A temperature-related phenomenon arises due to the small number of atoms that are used in MD simulations. For example, consider simulating the growth of a copper film starting with a substrate containing 500 atoms and a deposition energy of 100 eV . In the real world, the 100 eV from the deposited atom would rapidly be transported through and shared among a large number of atoms (10¹⁰ or more) with no big change in temperature. When there are only 500 atoms, however, the substrate is almost immediately vaporized by the deposition. Something similar happens in biophysical simulations. The temperature of the system in NVE is naturally raised when macromolecules such as proteins undergo exothermic conformational changes and binding.
In the canonical ensemble , amount of substance (N), volume (V) and temperature (T) are conserved. It is also sometimes called constant temperature molecular dynamics (CTMD). In NVT, the energy of endothermic and exothermic processes is exchanged with a thermostat .
A variety of thermostat algorithms are available to add and remove energy from the boundaries of an MD simulation in a more or less realistic way, approximating the canonical ensemble . Popular methods to control temperature include velocity rescaling, the Nosé–Hoover thermostat , Nosé–Hoover chains, the Berendsen thermostat , the Andersen thermostat and Langevin dynamics . The Berendsen thermostat might introduce the flying ice cube effect, which leads to unphysical translations and rotations of the simulated system.
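As a sketch of the two simplest approaches named above: brute-force velocity rescaling forces the instantaneous temperature to the target exactly, while the Berendsen weak-coupling variant relaxes it toward the target with a coupling time constant tau. Parameter names are illustrative; production codes implement these with additional bookkeeping.

```python
import numpy as np

def rescale_velocities(v, t_current, t_target):
    """Brute-force rescaling: set the instantaneous temperature exactly to t_target."""
    return v * np.sqrt(t_target / t_current)

def berendsen_scale_factor(t_current, t_target, dt, tau):
    """Berendsen weak coupling: relax T toward t_target with time constant tau."""
    return np.sqrt(1.0 + (dt / tau) * (t_target / t_current - 1.0))
```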
It is not trivial to obtain a canonical ensemble distribution of conformations and velocities using these algorithms. How this depends on system size, thermostat choice, thermostat parameters, time step and integrator is the subject of many articles in the field.
In the isothermal–isobaric ensemble , amount of substance (N), pressure (P) and temperature (T) are conserved. In addition to a thermostat, a barostat is needed. It corresponds most closely to laboratory conditions with a flask open to ambient temperature and pressure.
In the simulation of biological membranes , isotropic pressure control is not appropriate. For lipid bilayers , pressure control occurs under constant membrane area (NPAT) or constant surface tension "gamma" (NPγT).
The replica exchange method is a generalized-ensemble technique. It was originally created to deal with the slow dynamics of disordered spin systems and is also called parallel tempering. The replica exchange MD (REMD) formulation [ 31 ] tries to overcome the multiple-minima problem by exchanging the temperatures of non-interacting replicas of the system running at several temperatures.
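A swap between two replicas at inverse temperatures β_i and β_j with potential energies E_i and E_j is commonly accepted with Metropolis probability min(1, exp[(β_i − β_j)(E_i − E_j)]). The sketch below implements this standard criterion; it is a generic formulation, not code from the cited reference.

```python
import math
import random

def accept_swap(beta_i, beta_j, e_i, e_j):
    """Metropolis criterion for exchanging two replicas (beta = 1 / (k_B T))."""
    delta = (beta_i - beta_j) * (e_i - e_j)
    return delta >= 0 or random.random() < math.exp(delta)
```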
A molecular dynamics simulation requires the definition of a potential function , or a description of the terms by which the particles in the simulation will interact. In chemistry and biology this is usually referred to as a force field and in materials physics as an interatomic potential . Potentials may be defined at many levels of physical accuracy; those most commonly used in chemistry are based on molecular mechanics and embody a classical mechanics treatment of particle-particle interactions that can reproduce structural and conformational changes but usually cannot reproduce chemical reactions .
The reduction from a fully quantum description to a classical potential entails two main approximations. The first one is the Born–Oppenheimer approximation , which states that the dynamics of electrons are so fast that they can be considered to react instantaneously to the motion of their nuclei. As a consequence, they may be treated separately. The second one treats the nuclei, which are much heavier than electrons, as point particles that follow classical Newtonian dynamics. In classical molecular dynamics, the effect of the electrons is approximated as one potential energy surface, usually representing the ground state.
When finer levels of detail are needed, potentials based on quantum mechanics are used; some methods attempt to create hybrid classical/quantum potentials where the bulk of the system is treated classically but a small region is treated as a quantum system, usually undergoing a chemical transformation.
Empirical potentials used in chemistry are frequently called force fields , while those used in materials physics are called interatomic potentials .
Most force fields in chemistry are empirical and consist of a summation of bonded forces associated with chemical bonds , bond angles, and bond dihedrals , and non-bonded forces associated with van der Waals forces and electrostatic charge . [ 32 ] Empirical potentials represent quantum-mechanical effects in a limited way through ad hoc functional approximations. These potentials contain free parameters such as atomic charge , van der Waals parameters reflecting estimates of atomic radius , and equilibrium bond length , angle, and dihedral; these are obtained by fitting against detailed electronic calculations (quantum chemical simulations) or experimental physical properties such as elastic constants , lattice parameters and spectroscopic measurements.
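For concreteness, a typical additive force field has the generic functional form sketched below: harmonic bond and angle terms, a cosine dihedral series, and Lennard-Jones plus Coulomb non-bonded terms. This is a textbook-style composite, not the exact form of any specific named force field.

```latex
U = \sum_{\text{bonds}} k_b (r - r_0)^2
  + \sum_{\text{angles}} k_\theta (\theta - \theta_0)^2
  + \sum_{\text{dihedrals}} k_\phi \left[1 + \cos(n\phi - \delta)\right]
  + \sum_{i<j} \left\{ 4\varepsilon_{ij}\left[\left(\frac{\sigma_{ij}}{r_{ij}}\right)^{12}
      - \left(\frac{\sigma_{ij}}{r_{ij}}\right)^{6}\right]
      + \frac{q_i q_j}{4\pi\varepsilon_0 r_{ij}} \right\}
```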
Because of the non-local nature of non-bonded interactions, they involve at least weak interactions between all particles in the system. Their calculation is normally the bottleneck in the speed of MD simulations. To lower the computational cost, force fields employ numerical approximations such as shifted cutoff radii, reaction field algorithms, particle mesh Ewald summation , or the newer particle–particle-particle–mesh ( P3M ).
Chemistry force fields commonly employ preset bonding arrangements (an exception being ab initio dynamics), and thus are unable to model the process of chemical bond breaking and reactions explicitly. On the other hand, many of the potentials used in physics, such as those based on the bond order formalism, can describe several different coordinations of a system and bond breaking. [ 33 ] [ 34 ] Examples of such potentials include the Brenner potential [ 35 ] for hydrocarbons and its further developments for the C-Si-H [ 36 ] and C-O-H [ 37 ] systems. The ReaxFF potential [ 38 ] can be considered a fully reactive hybrid between bond order potentials and chemistry force fields.
The potential functions representing the non-bonded energy are formulated as a sum over interactions between the particles of the system. The simplest choice, employed in many popular force fields , is the "pair potential", in which the total potential energy can be calculated from the sum of energy contributions between pairs of atoms. Therefore, these force fields are also called "additive force fields". An example of such a pair potential is the non-bonded Lennard-Jones potential (also termed the 6–12 potential), used for calculating van der Waals forces.
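A sketch of the 6–12 Lennard-Jones pair potential mentioned above, written as a plain function: ε is the well depth and σ the distance at which the potential crosses zero. The vectorized form over a distance array is an implementation convenience, not part of the definition.

```python
import numpy as np

def lennard_jones(r, epsilon, sigma):
    """6-12 Lennard-Jones potential: U(r) = 4*eps*[(sigma/r)**12 - (sigma/r)**6]."""
    sr6 = (sigma / np.asarray(r)) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)
```

A convenient correctness check is that at r = 2^(1/6) σ the potential reaches its minimum value −ε.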
Another example is the Born (ionic) model of the ionic lattice, in which the pair energy has the general form

U_ij(r_ij) = q_i q_j / (4πε₀ r_ij) + A exp(−r_ij/ρ) − C / r_ij^n

The first term is Coulomb's law for a pair of ions, the second term is the short-range repulsion explained by Pauli's exclusion principle , and the final term is the dispersion interaction term. Usually, a simulation only includes the dipolar term, although sometimes the quadrupolar term is also included. [ 39 ] [ 40 ] When n = 6, this potential is also called the Coulomb–Buckingham potential .
In many-body potentials , the potential energy includes the effects of three or more particles interacting with each other. [ 41 ] In simulations with pairwise potentials, global interactions in the system also exist, but they occur only through pairwise terms. In many-body potentials, the potential energy cannot be found by a sum over pairs of atoms, as these interactions are calculated explicitly as a combination of higher-order terms. In the statistical view, the dependency between the variables cannot in general be expressed using only pairwise products of the degrees of freedom. For example, the Tersoff potential , [ 42 ] which was originally used to simulate carbon , silicon , and germanium , and has since been used for a wide range of other materials, involves a sum over groups of three atoms, with the angles between the atoms being an important factor in the potential. Other examples are the embedded-atom method (EAM), [ 43 ] the EDIP, [ 41 ] and the Tight-Binding Second Moment Approximation (TBSMA) potentials, [ 44 ] where the electron density of states in the region of an atom is calculated from a sum of contributions from surrounding atoms, and the potential energy contribution is then a function of this sum.
Semi-empirical potentials make use of the matrix representation from quantum mechanics. However, the values of the matrix elements are found through empirical formulae that estimate the degree of overlap of specific atomic orbitals. The matrix is then diagonalized to determine the occupancy of the different atomic orbitals, and empirical formulae are used once again to determine the energy contributions of the orbitals.
There are a wide variety of semi-empirical potentials, termed tight-binding potentials, which vary according to the atoms being modeled.
Most classical force fields implicitly include the effect of polarizability , e.g., by scaling up the partial charges obtained from quantum chemical calculations. These partial charges remain fixed throughout the simulation. But molecular dynamics simulations can explicitly model polarizability with the introduction of induced dipoles through different methods, such as Drude particles or fluctuating charges. This allows for a dynamic redistribution of charge between atoms which responds to the local chemical environment.
For many years, polarizable MD simulations have been touted as the next generation. For homogeneous liquids such as water, increased accuracy has been achieved through the inclusion of polarizability. [ 45 ] [ 46 ] [ 47 ] Some promising results have also been achieved for proteins. [ 48 ] [ 49 ] However, it is still uncertain how to best approximate polarizability in a simulation. [ citation needed ] The point becomes more important when a particle experiences different environments during its simulation trajectory, e.g. translocation of a drug through a cell membrane. [ 50 ]
In classical molecular dynamics, one potential energy surface (usually the ground state) is represented in the force field. This is a consequence of the Born–Oppenheimer approximation . In excited states, chemical reactions or when a more accurate representation is needed, electronic behavior can be obtained from first principles using a quantum mechanical method, such as density functional theory . This is named Ab Initio Molecular Dynamics (AIMD). Due to the cost of treating the electronic degrees of freedom, the computational burden of these simulations is far higher than classical molecular dynamics. For this reason, AIMD is typically limited to smaller systems and shorter times.
Ab initio quantum mechanical and chemical methods may be used to calculate the potential energy of a system on the fly, as needed for conformations in a trajectory. This calculation is usually made in the close neighborhood of the reaction coordinate . Although various approximations may be used, these are based on theoretical considerations, not on empirical fitting. Ab initio calculations produce a vast amount of information that is not available from empirical methods, such as density of electronic states or other electronic properties. A significant advantage of using ab initio methods is the ability to study reactions that involve breaking or formation of covalent bonds, which correspond to multiple electronic states. Moreover, ab initio methods also allow recovering effects beyond the Born–Oppenheimer approximation using approaches like mixed quantum-classical dynamics .
QM (quantum-mechanical) methods are very powerful, but they are computationally expensive. MM (classical or molecular mechanics) methods are fast but suffer from several limitations: they require extensive parameterization; the energy estimates obtained are not very accurate; they cannot be used to simulate reactions in which covalent bonds are broken or formed; and they are limited in their ability to provide accurate details about the chemical environment. A class of methods has emerged that combines the strengths of QM (accuracy) and MM (speed) calculations. These methods are termed mixed or hybrid quantum-mechanical and molecular mechanics methods (hybrid QM/MM). [ 51 ]
The most important advantage of hybrid QM/MM methods is speed. The cost of doing classical molecular dynamics (MM) in the most straightforward case scales as O(n²), where n is the number of atoms in the system. This is mainly due to the electrostatic interactions term (every particle interacts with every other particle). However, the use of a cutoff radius, periodic pair-list updates, and more recently the variations of the particle-mesh Ewald (PME) method have reduced this to between O(n) and O(n²). In other words, if a system with twice as many atoms is simulated, it would take between two and four times as much computing power. On the other hand, the simplest ab initio calculations typically scale as O(n³) or worse (restricted Hartree–Fock calculations have been suggested to scale ~O(n^2.7)). To overcome this limit, a small part of the system is treated quantum-mechanically (typically the active site of an enzyme) and the remaining system is treated classically.
In more sophisticated implementations, QM/MM methods exist to treat both light nuclei susceptible to quantum effects (such as hydrogens) and electronic states. This allows generating hydrogen wave-functions (similar to electronic wave-functions). This methodology has been useful in investigating phenomena such as hydrogen tunneling. One example where QM/MM methods have provided new discoveries is the calculation of hydride transfer in the enzyme liver alcohol dehydrogenase . In this case, quantum tunneling is important for the hydrogen, as it determines the reaction rate. [ 52 ]
At the other end of the detail scale are coarse-grained and lattice models. Instead of explicitly representing every atom of the system, one uses "pseudo-atoms" to represent groups of atoms. MD simulations on very large systems may require such large computer resources that they cannot easily be studied by traditional all-atom methods. Similarly, simulations of processes on long timescales (beyond about 1 microsecond) are prohibitively expensive, because they require so many time steps. In these cases, one can sometimes tackle the problem by using reduced representations, which are also called coarse-grained models . [ 53 ]
Examples of coarse-graining (CG) methods are discontinuous molecular dynamics (CG-DMD) [ 54 ] [ 55 ] and Go-models. [ 56 ] Coarse-graining is sometimes done using larger pseudo-atoms; such united-atom approximations have been used in MD simulations of biological membranes. Implementing such an approach on systems where electrical properties are of interest can be challenging owing to the difficulty of assigning a proper charge distribution to the pseudo-atoms. [ 57 ] The aliphatic tails of lipids, for example, are represented by a few pseudo-atoms by gathering two to four methylene groups into each pseudo-atom.
The parameterization of these very coarse-grained models must be done empirically, by matching the behavior of the model to appropriate experimental data or all-atom simulations. Ideally, these parameters should account for both enthalpic and entropic contributions to free energy in an implicit way. [ 58 ] When coarse-graining is done at higher levels, the accuracy of the dynamic description may be less reliable. But very coarse-grained models have been used successfully to examine a wide range of questions in structural biology, liquid crystal organization, and polymer glasses.
The simplest form of coarse-graining is the united atom (sometimes called extended atom ) and was used in most early MD simulations of proteins, lipids, and nucleic acids. For example, instead of treating all four atoms of a CH 3 methyl group explicitly (or all three atoms of CH 2 methylene group), one represents the whole group with one pseudo-atom. It must, of course, be properly parameterized so that its van der Waals interactions with other groups have the proper distance-dependence. Similar considerations apply to the bonds, angles, and torsions in which the pseudo-atom participates. In this kind of united atom representation, one typically eliminates all explicit hydrogen atoms except those that have the capability to participate in hydrogen bonds ( polar hydrogens ). An example of this is the CHARMM 19 force-field.
The polar hydrogens are usually retained in the model, because proper treatment of hydrogen bonds requires a reasonably accurate description of the directionality and the electrostatic interactions between the donor and acceptor groups. A hydroxyl group, for example, can be both a hydrogen bond donor, and a hydrogen bond acceptor, and it would be impossible to treat this with one OH pseudo-atom. About half the atoms in a protein or nucleic acid are non-polar hydrogens, so the use of united atoms can provide a substantial savings in computer time.
Machine Learning Force Fields (MLFFs) represent one approach to modeling interatomic interactions in molecular dynamics simulations. [ 59 ] MLFFs can achieve accuracy close to that of ab initio methods . Once trained, MLFFs are much faster than direct quantum mechanical calculations. MLFFs address the limitations of traditional force fields by learning complex potential energy surfaces directly from high-level quantum mechanical data. Several software packages now support MLFFs, including VASP [ 60 ] and open-source libraries like DeePMD-kit [ 61 ] [ 62 ] and SchNetPack . [ 63 ] [ 64 ]
In many simulations of a solute–solvent system, the main focus is on the behavior of the solute, with little interest in the behavior of the solvent itself, particularly for solvent molecules residing in regions far from the solute molecule. [ 65 ] Solvents may influence the dynamic behavior of solutes via random collisions and by imposing a frictional drag on the motion of the solute through the solvent. The use of non-rectangular periodic boundary conditions, stochastic boundaries and solvent shells can all help reduce the number of solvent molecules required and enable a larger proportion of the computing time to be spent instead on simulating the solute. It is also possible to incorporate the effects of a solvent without needing any explicit solvent molecules present. One example of this approach is to use a potential of mean force (PMF), which describes how the free energy changes as a particular coordinate is varied. The free energy change described by the PMF contains the averaged effects of the solvent.
Without incorporating the effects of solvent, simulations of macromolecules (such as proteins) may yield unrealistic behavior, and even small molecules may adopt more compact conformations due to favourable van der Waals forces and electrostatic interactions that would be dampened in the presence of a solvent. [ 66 ]
A long-range interaction is an interaction in which the spatial interaction falls off no faster than r⁻ᵈ, where d is the dimensionality of the system. Examples include charge–charge interactions between ions and dipole–dipole interactions between molecules. Modelling these forces presents quite a challenge, as they are significant over distances that may exceed half the box length in simulations of many thousands of particles. Though one solution would be to significantly increase the box length, this brute-force approach is less than ideal, as the simulation would become computationally very expensive. Spherically truncating the potential is also out of the question, as unrealistic behaviour may be observed when the distance is close to the cutoff distance. [ 67 ]
Steered molecular dynamics (SMD) simulations, or force probe simulations, apply forces to a protein in order to manipulate its structure by pulling it along desired degrees of freedom. These experiments can be used to reveal structural changes in a protein at the atomic level. SMD is often used to simulate events such as mechanical unfolding or stretching. [ 68 ]
There are two typical protocols of SMD: one in which pulling velocity is held constant, and one in which applied force is constant. Typically, part of the studied system (e.g., an atom in a protein) is restrained by a harmonic potential. Forces are then applied to specific atoms at either a constant velocity or a constant force. Umbrella sampling is used to move the system along the desired reaction coordinate by varying, for example, the forces, distances, and angles manipulated in the simulation. Through umbrella sampling, all of the system's configurations—both high-energy and low-energy—are adequately sampled. Then, each configuration's change in free energy can be calculated as the potential of mean force . [ 69 ] A popular method of computing PMF is through the weighted histogram analysis method (WHAM), which analyzes a series of umbrella sampling simulations. [ 70 ] [ 71 ]
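A sketch of the constant-velocity protocol described above, in which the restrained atom is attached by a harmonic spring to a virtual anchor point moving at constant speed. The spring constant, pulling velocity, and one-dimensional coordinate are illustrative assumptions.

```python
def smd_pulling_force(x_atom, x_start, pull_velocity, t, k_spring):
    """Force on the restrained atom from a spring anchored at a point moving at
    constant velocity: F = k * (x_start + v*t - x_atom)."""
    anchor = x_start + pull_velocity * t
    return k_spring * (anchor - x_atom)
```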
Many important applications of SMD are in the field of drug discovery and the biomolecular sciences. For example, SMD has been used to investigate the stability of Alzheimer's protofibrils, [ 72 ] to study protein–ligand interaction in cyclin-dependent kinase 5, [ 73 ] and to show the effect of an electric field on a thrombin (protein) and aptamer (nucleotide) complex, [ 74 ] among many other studies.
Molecular dynamics is used in many fields of science.
The following biophysical examples illustrate notable efforts to produce simulations of systems of very large size (a complete virus) or very long simulation times (up to 1.112 milliseconds).
Another important application of the MD method benefits from its ability to perform three-dimensional characterization and analysis of microstructural evolution at the atomic scale.
Molecular modeling on GPU is the technique of using a graphics processing unit (GPU) for molecular simulations. [ 87 ] | https://en.wikipedia.org/wiki/Molecular_dynamics |
Molecular ecology is a subdiscipline of ecology that is concerned with applying molecular genetic techniques to ecological questions (e.g., population structure, phylogeography, conservation, speciation, hybridization, biodiversity). It is virtually synonymous with the field of " Ecological Genetics " as pioneered by Theodosius Dobzhansky , E. B. Ford , Godfrey M. Hewitt , and others. [ 1 ] Molecular ecology is related to the fields of population genetics and conservation genetics .
Methods frequently include using microsatellites to determine gene flow and hybridization between populations. The development of molecular ecology is also closely related to the use of DNA microarrays , which allows for the simultaneous analysis of the expression of thousands of different genes. Quantitative PCR may also be used to analyze gene expression as a result of changes in environmental conditions or different responses by differently adapted individuals.
Molecular ecology uses molecular genetic data to answer ecological questions related to biogeography, genomics, conservation genetics, and behavioral ecology. Studies mostly use data based on DNA sequences. This approach has been enhanced over a number of years to allow researchers to sequence thousands of genes from a small amount of starting DNA. Allele sizes are another way researchers can compare individuals and populations, which allows them to quantify the genetic diversity within a population and the genetic similarities among populations. [ 2 ]
Molecular ecological techniques are used to study in situ questions of bacterial diversity. Many microorganisms cannot easily be obtained as cultured strains in the laboratory, which would otherwise allow their identification and characterization. The use of these techniques also stems from the development of the PCR technique , which allows for the rapid amplification of genetic material.
The amplification of DNA from environmental samples using general or group-specific primers leads to a mix of genetic material, requiring sorting before sequencing and identification. The classic technique to achieve this is through cloning, which involves incorporating the amplified DNA fragments into bacterial plasmids . Techniques such as temperature gradient gel electrophoresis allow for a faster result. More recently, the advent of relatively low-cost, next-generation DNA sequencing technologies, such as the 454 and Illumina platforms, has allowed exploration of bacterial ecology concerning continental-scale environmental gradients such as pH [ 3 ] that was not feasible with traditional technology.
Exploration of fungal diversity in situ has also benefited from next-generation DNA sequencing technologies. The use of high-throughput sequencing techniques has been widely adopted by the fungal ecology community since the first publication of their use in the field in 2009. [ 4 ] Similar to the exploration of bacterial diversity, these techniques have allowed high-resolution studies of fundamental questions in fungal ecology such as phylogeography , [ 5 ] fungal diversity in forest soils, [ 6 ] stratification of fungal communities in soil horizons, [ 7 ] and fungal succession on decomposing plant litter. [ 8 ]
The majority of fungal ecology research leveraging next-generation sequencing approaches involves sequencing of PCR amplicons of conserved regions of DNA (i.e. marker genes) to identify and describe the distribution of taxonomic groups in the fungal community in question, though more recent research has focused on sequencing functional gene amplicons [ 4 ] (e.g. Baldrian et al. 2012 [ 7 ] ). The locus of choice for a description of the taxonomic structure of fungal communities has traditionally been the internal transcribed spacer (ITS) region of ribosomal RNA genes [ 9 ] due to its utility in identifying fungi to genus or species taxonomic levels, [ 10 ] and its high representation in public sequence databases. [ 9 ] A second widely used locus (e.g. Amend et al. 2010, [ 5 ] Weber et al. 2013 [ 11 ] ), the D1-D3 region of 28S ribosomal RNA genes, may not allow the low taxonomic level classification of the ITS, [ 12 ] [ 13 ] but demonstrates superior performance in sequence alignment and phylogenetics . [ 5 ] [ 14 ] Also, the D1-D3 region may be a better candidate for sequencing with Illumina sequencing technologies. [ 15 ] Porras-Alfaro et al. [ 13 ] showed that the accuracy of classification of either ITS or D1-D3 region sequences was largely based on the sequence composition and quality of databases used for comparison, and poor-quality sequences and sequence misidentification in public databases is a major concern. [ 16 ] [ 17 ] The construction of sequence databases that have broad representation across fungi, and that are curated by taxonomic experts is a critical next step. [ 14 ] [ 18 ]
Next-generation sequencing technologies generate large amounts of data, and analysis of fungal marker-gene data is an active area of research. [ 4 ] [ 19 ] Two primary areas of concern are methods for clustering sequences into operational taxonomic units by sequence similarity and quality control of sequence data. [ 4 ] [ 19 ] Currently, there is no consensus on preferred methods for clustering, [ 19 ] and clustering and sequence processing methods can significantly affect results, especially for the variable-length ITS region. [ 4 ] [ 19 ] In addition, fungal species vary in intra-specific sequence similarity of the ITS region. [ 20 ] Recent research has been devoted to the development of flexible clustering protocols that allow sequence similarity thresholds to vary by taxonomic groups, which are supported by well-annotated sequences in public sequence databases. [ 18 ]
In recent years, molecular data and analyses have been able to supplement traditional approaches of behavioral ecology , the study of animal behavior in relation to its ecology and evolutionary history. One behavior that molecular data has helped scientists better understand is extra-pair fertilizations (EPFs), also known as extra-pair copulations (EPCs) . These are mating events that occur outside of a social bond, such as monogamy, and are hard to observe. Molecular data has been key to understanding the prevalence of EPFs and the individuals participating in them.
While most bird species are socially monogamous, molecular data has revealed that less than 25% of these species are genetically monogamous. [ 21 ] EPFs complicate matters, especially for male individuals, because it does not make sense for an individual to care for offspring that are not their own. Studies have found that males will adjust their parental care in response to changes in their paternity. [ 22 ] [ 23 ] Other studies have shown that in socially monogamous species, some individuals will employ an alternative strategy to be reproductively successful since a social bond does not always equal reproductive success. [ 24 ] [ 25 ]
It appears that EPFs in some species are driven by the good genes hypothesis. [ 26 ] : 295 In red-backed shrikes ( Lanius collurio ), extra-pair males had significantly longer tarsi than within-pair males, and all of the extra-pair offspring were males, supporting the prediction that females will bias their clutch towards males when they mate with an "attractive" male. [ 27 ] In house wrens ( Troglodytes aedon ), extra-pair offspring were also found to be male-biased compared to within-pair offspring. [ 28 ]
Without molecular ecology, identifying individuals that participate in EPFs and the offspring that result from EPFs would be impossible.
Isolation by distance (IBD), like reproductive isolation, is the effect of physical barriers to populations that limit migration and lower gene flow. The shorter the distance between populations, the more likely individuals are to disperse and mate, increasing gene flow. [ 29 ] The use of molecular data, specifically allele frequencies of individuals among populations in relation to their geographic distance, helps to explain concepts such as sex-biased dispersal , speciation , and landscape genetics.
The Mantel test is an assessment that compares genetic distance with geographic distance; it is appropriate because it does not assume that the comparisons are independent of each other. [ 2 ] : 135 Three main factors influence the chances of finding a correlation of IBD: sample size, metabolism, and taxa. [ 30 ] For example, based on meta-analysis , ectotherms are more likely than endotherms to display greater IBD.
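A minimal sketch of a Mantel test by permutation, assuming square genetic- and geographic-distance matrices as inputs; a permutation test is used precisely because the entries of a distance matrix are not independent. Dedicated population-genetics packages would normally be used in practice.

```python
import numpy as np

def mantel_test(d_gen, d_geo, n_perm=999, seed=None):
    """Correlate two distance matrices; p-value by permuting rows/columns of one."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(d_gen, k=1)        # upper-triangle entries only
    r_obs = np.corrcoef(d_gen[iu], d_geo[iu])[0, 1]
    count = 0
    n = d_gen.shape[0]
    for _ in range(n_perm):
        p = rng.permutation(n)
        d_perm = d_gen[np.ix_(p, p)]             # permute rows and columns together
        r = np.corrcoef(d_perm[iu], d_geo[iu])[0, 1]
        if abs(r) >= abs(r_obs):
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)
```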
Metapopulation theory dictates that a metapopulation consists of spatially distinct populations that interact with one another on some level and move through a cycle of extinctions and recolonizations (i.e. through dispersal). [ 31 ] The most common metapopulation model is the extinction-recolonization model, which describes systems in which spatially distinct populations undergo stochastic changes in population size that may lead to extinction at the population level. Once this has occurred, dispersing individuals from other populations will immigrate and "rescue" the population at that site. Other metapopulation models include the source-sink model (island-mainland model), in which one (or multiple) large central population(s) produces dispersers that emigrate to smaller satellite populations, which have a population growth rate of less than one and could not persist without the influx from the main population.
Metapopulation structure and the repeated extinctions and recolonizations can significantly affect a population's genetic structure. Recolonization by a few dispersers leads to population bottlenecks which will reduce the effective population size (Ne), accelerate genetic drift , and deplete genetic variation. However, dispersal between populations in the metapopulation can reverse or halt these processes over the long term. Therefore, in order for individual sub-populations to remain healthy, they must either have a large population size or have a relatively high rate of dispersal with other subpopulations. Molecular ecology focuses on using tests to determine the rates of dispersal between populations and can use molecular clocks to determine when historic bottlenecks occurred. As habitat becomes more fragmented, dispersal between populations will become increasingly rare. Therefore, subpopulations that may have historically been preserved by a metapopulation structure may start to decline. Using mitochondrial or nuclear markers to monitor dispersal coupled with population Fst values and allelic richness can provide insight into how well a population is performing and how it will perform into the future. [ citation needed ]
The molecular clock hypothesis states that DNA sequences evolve at roughly the same rate, and because of this the dissimilarity between two sequences can be used to tell how long ago they diverged from one another. The first step in using a molecular clock is to calibrate it based on the approximate time the two lineages studied diverged. The sources usually used to calibrate molecular clocks are fossils or known geological events in the past. After calibrating the clock, the next step is to calculate the rate of molecular evolution by dividing the amount of sequence divergence by the estimated time since the sequences diverged; this rate can then be used to date other divergences. The most widely cited molecular clock is a 'universal' mtDNA clock of approximately two percent sequence divergence every million years. [ 32 ] Although referred to as a universal clock, this idea of the "universal" clock is not possible considering that rates of evolution differ among DNA regions. Another drawback to using molecular clocks is that they ideally need to be calibrated from an independent source of data other than the molecular data. This poses a problem for taxa that don't fossilize or preserve easily, making it almost impossible to calibrate their molecular clock. Despite these inconveniences, the molecular clock hypothesis is still used today. The molecular clock has been successful in dating events happening up to 65 million years ago. [ 33 ]
The concept of mate choice explains how organisms select their mates based on two main models: the good genes hypothesis and genetic compatibility. The good genes hypothesis, also referred to as the sexy son hypothesis , suggests that females will choose a male that produces offspring with increased fitness advantages and genetic viability . Therefore, mates that are more "attractive" are more likely to be chosen for mating and pass on their genes to the next generation. In species which exhibit polyandry , females will seek out the most suitable males and re-mate until they have found the best sperm to fertilize their eggs. [ 34 ] Genetic compatibility is where mates choose their partner based on the compatibility of their genotypes. The mate doing the selecting must know its own genotype as well as the genotypes of potential mates in order to select the appropriate partner. [ 35 ] Genetic compatibility in most instances is limited to specific traits, such as the major histocompatibility complex (MHC) in mammals, because of complex genetic interactions. This behavior is potentially seen in humans. A study looking at women's choice in men based on body odors concluded that the scent of the odors was influenced by the MHC and that it influences mate choice in human populations. [ 36 ]
Sex-biased dispersal, or the tendency of one sex to disperse between populations more frequently than the other, is a common behavior studied by researchers. Three major hypotheses currently exist to help explain sex-biased dispersal. [ 37 ] The resource-competition hypothesis infers that the more philopatric sex (the sex more likely to remain at its natal grounds) benefits during reproduction simply by having familiarity with natal ground resources. [ 38 ] A second proposal for sex-biased dispersal is the local mate competition hypothesis, which introduces the idea that individuals encounter less mate competition with relatives the farther from their natal grounds they disperse. [ 39 ] Finally, the inbreeding avoidance hypothesis suggests that individuals disperse to decrease inbreeding.
Studying these hypotheses can be arduous, since it is nearly impossible to keep track of every individual and their whereabouts within and between populations. To combat this time-consuming method, scientists have recruited several molecular ecology techniques in order to study sex-biased dispersal. One method is the comparison of differences between nuclear and mitochondrial markers among populations. Markers showing higher levels of differentiation indicate the more philopatric sex; that is, the more a sex remains at its natal grounds, the more distinct its markers become, due to lack of gene flow with respect to that marker. [ 40 ] Researchers can also quantify male-male and female-female pair relatedness within populations to understand which sex is more likely to disperse. Pairs with values consistently lower in one sex indicate the dispersing sex. This is because there is more gene flow in the dispersing sex, and its markers are less similar than those of individuals of the same sex in the same population, which produces a low relatedness value. [ 41 ] F_ST values are also used to understand dispersing behaviors by calculating an F_ST value for each sex. The sex that disperses more displays a lower F_ST value, which measures levels of inbreeding between the subpopulation and the total population. Additionally, assignment tests can be utilized to quantify the number of individuals of a certain sex dispersing to other populations. A more mathematical approach to quantifying sex-biased dispersal on the molecular level is the use of spatial autocorrelation. [ 42 ] This analysis relates genetic relatedness to geographic distance. A correlation coefficient, or r value, is calculated, and the plot of r against distance provides an indication of individuals more related to or less related to one another than expected. [ 26 ] : 299–307
A quantitative trait locus (QTL) refers to a suite of genes that controls a quantitative trait. A quantitative trait is one that is influenced by several different genes as opposed to just one or two. [ 26 ] QTLs are analyzed using Q_ST, a measure of the differentiation of quantitative traits among populations analogous to F_ST. In the case of QTLs, clines are analyzed by Q_ST. A cline is a change in allele frequency across a geographical distance. [ 26 ] This change in allele frequency produces a series of intermediate, varying phenotypes that, when associated with certain environmental conditions, can indicate selection. This selection causes local adaptation, but high gene flow is still expected to be present along the cline.
For example, barn owls in Europe exhibit a cline in their plumage coloration. Their feathers range in coloration from white to reddish-brown across the geographic range from the southwest to the northeast. [ 43 ] This study sought to find whether this phenotypic variation was due to selection, by calculating the Q_ST values across the owl populations. Because high gene flow was still anticipated along this cline, selection was only expected to act upon the QTLs that incur locally adaptive phenotypic traits. This can be determined by comparing the Q_ST values to F_ST ( fixation index ) values. If both of these values are similar and F_ST is based on neutral markers, then it can be assumed that the QTLs were based on neutral markers (markers not under selection or locally adapted) as well. However, in the case of the barn owls, the Q_ST value was much higher than the F_ST value. This means that high gene flow was present, allowing the neutral markers to be similar, indicated by the low F_ST value. But local adaptation due to selection was present as well, in the form of varying plumage coloration, since the Q_ST value was high, indicating differences in these non-neutral loci. [ 43 ] In other words, this cline of plumage coloration has some sort of adaptive value to the birds.
Fixation indices are used when determining the level of genetic differentiation between sub-populations within a total population. F_ST is the symbol used to represent this index, which is calculated as

F_ST = (H_T − H_S) / H_T
In this equation, H_T represents the expected heterozygosity of the total population and H_S is the expected heterozygosity of the sub-populations. Both measures of heterozygosity are taken at a single locus. In the equation, heterozygosity values expected from the total population are compared to observed heterozygosity values of the sub-populations within this total population. Larger F_ST values imply that the level of genetic differentiation between sub-populations within a total population is more significant. [ 26 ] The level of differentiation is the result of a balance between gene flow amongst sub-populations (decreasing differentiation) and genetic drift within these sub-populations (increasing differentiation); however, some molecular ecologists note that it cannot be assumed that these factors are at equilibrium. [ 44 ] F_ST can also be viewed as a way of comparing the amount of inbreeding within sub-populations to the amount of inbreeding for the total population, and it is sometimes referred to as an inbreeding coefficient. In these cases, higher F_ST values typically imply higher amounts of inbreeding within the sub-populations. [ 45 ] Other factors such as selection pressures may also affect F_ST values. [ 46 ]
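A sketch of this F_ST calculation for a single biallelic locus, computing expected heterozygosity as 2pq; the equal weighting of subpopulations and the example frequencies are illustrative assumptions.

```python
def fst(p_subpops):
    """F_ST = (H_T - H_S) / H_T for one biallelic locus.

    p_subpops: frequency of one allele in each subpopulation, with all
    subpopulations assumed to contribute equally to the total population.
    """
    p_bar = sum(p_subpops) / len(p_subpops)
    h_t = 2.0 * p_bar * (1.0 - p_bar)  # expected heterozygosity, total population
    h_s = sum(2.0 * p * (1.0 - p) for p in p_subpops) / len(p_subpops)
    return (h_t - h_s) / h_t
```

For example, fst([0.2, 0.8]) returns 0.36, indicating strong differentiation between the two subpopulations.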
F_ST values are accompanied by several analogous indices (F_IS, G_ST, etc.). These additional measures are interpreted in a similar manner to F_ST values; however, they are adjusted to accommodate other factors that F_ST may not, such as accounting for multiple loci. [ 47 ]
Inbreeding depression is the reduced fitness and survival of offspring from closely related parents. [ 48 ] Inbreeding is commonly seen in small populations because of the greater chance of mating with a relative due to limited mate choice . Inbreeding, especially in small populations, is more likely to result in higher rates of genetic drift , which leads to higher rates of homozygosity at all loci in the population and decreased heterozygosity . The rate of inbreeding is based on decreased heterozygosity. In other words, the rate at which heterozygosity is lost from a population due to genetic drift is equal to the rate of accumulating inbreeding in a population. In the absence of migration, inbreeding will accumulate at a rate that is inversely proportional to the size of the population.
There are two ways in which inbreeding depression can occur. The first of these is through dominance, where beneficial alleles are usually dominant and harmful alleles are usually recessive. The increased homozygosity resulting from inbreeding means that harmful alleles are more likely to be expressed as homozygotes, and the deleterious effects cannot be masked by the beneficial dominant allele. The second method through which inbreeding depression occurs is through overdominance , or heterozygote advantage. [ 49 ] Individuals that are heterozygous at a particular locus have a higher fitness than homozygotes at that locus. Inbreeding leads to decreased heterozygosity, and therefore decreased fitness.
Deleterious alleles can be removed from inbred populations by natural selection through genetic purging . As homozygosity increases, less fit individuals will be selected against and thus those harmful alleles will be lost from the population. [ 26 ]
Outbreeding depression is the reduced biological fitness in the offspring of distantly related parents. [ 50 ] The decline in fitness due to outbreeding is attributed to a breakup of coadapted gene complexes or favorable epistatic relationships. Unlike inbreeding depression, outbreeding depression emphasizes interactions between loci rather than within them. Inbreeding and outbreeding depression can occur at the same time. Risks of outbreeding depression increase with increased distance between populations. The risk of outbreeding depression during genetic rescue often limits the ability to increase a small or fragmented gene pool's genetic diversity. [ 50 ] Offspring intermediate between two or more adapted traits can be less effective than either parental adaptation. [ 51 ] Several mechanisms influence outbreeding depression: genetic drift , population bottlenecks , differentiation of adaptations, and fixed chromosomal dissimilarities that result in sterile offspring. [ 52 ] If outbreeding is limited and the population is large enough, selective pressure acting on each generation may be able to restore fitness. However, the population is likely to experience a multi-generational decline in overall fitness, as selection for traits takes multiple generations. Selection acts on outbred generations using increased diversity to adapt to the environment. [ 53 ] This may result in greater fitness among offspring than the original parental type.
Conservation units are classifications often used in conservation biology , conservation genetics , and molecular ecology in order to separate and group different species or populations based on genetic variance and significance for protection. [ 54 ] Two of the most common types of conservation units are evolutionarily significant units (ESUs) and management units (MUs).
Conservation units are often identified using both neutral and non-neutral genetic markers, with each having its own advantages. Using neutral markers during unit identification can provide unbiased assumptions of genetic drift and time since reproductive isolation within and among species and populations, while using non-neutral markers can provide more accurate estimations of adaptive evolutionary divergence, which can help determine the potential for a conservation unit to adapt within a certain habitat. [ 54 ]
Because of conservation units, populations and species that have high or differing levels of genetic variation can be distinguished in order to manage each individually, which can ultimately differ based on a number of factors. In one instance, Atlantic salmon located within the Bay of Fundy were given evolutionary significance based on the differences in genetic sequences found among different populations. [ 55 ] This detection of evolutionary significance can allow each population of salmon to receive customized conservation and protection based on their adaptive uniqueness in response to geographic location. [ 55 ]
A phylogeny is the evolutionary history of an organism; the study of phylogenies in an explicitly geographic context is known as phylogeography . A phylogenetic tree is a tree that shows evolutionary relationships between different species based on similarities and differences among genetic or physical traits. Community ecology is based on knowledge of evolutionary relationships among coexisting species. [ 56 ] Phylogenies embrace aspects of both time (evolutionary relationships) and space (geographic distribution). [ 57 ] Typically, phylogenetic trees include tips, which represent groups of descendant species, and nodes, which represent the common ancestors of those descendants. If two descendants split from the same node, they are called sister groups . They may also include an outgroup , a species outside of the group of interest. [ 58 ] The trees depict clades , a clade being a group of organisms that includes an ancestor and all descendants of that ancestor. The maximum parsimony tree is the simplest tree that has the minimum number of steps possible.
Phylogenies capture important historical processes that shape the current distributions of genes and species. [ 57 ] When two species become isolated from each other, they retain some of the same ancestral alleles, a phenomenon known as allele sharing. Alleles can be shared because of lineage sorting and hybridization. Lineage sorting is driven by genetic drift and must occur before alleles become species-specific. Some of these alleles will simply be lost over time, while others may proliferate. Hybridization leads to introgression of alleles from one species to another. [ 59 ]
Community ecology emerged from natural history and population biology. It includes not only the study of interactions between species, but also ecological concepts such as mutualism, predation, and competition within communities. [ 60 ] It is used to explicate properties such as diversity, dominance, and composition of a community. [ 61 ] There are three primary approaches to integrating phylogenetic information into studies of community organization. The first approach focuses on examining the phylogenetic structure of community assemblages. The second approach focuses on exploring the phylogenetic basis of community niche structures. The third approach focuses on adding a community context to studies of trait evolution and biogeography. [ 56 ]
Species concepts are the subject of debate in the field of molecular ecology. Since the beginning of taxonomy, scientists have wanted to standardize and perfect the way species are defined. There are many species concepts that dictate how ecologists determine a good species. The most commonly used concept is the biological species concept, which defines a species as groups of actually or potentially interbreeding natural populations which are reproductively isolated from other such groups (Mayr, 1942). [ 57 ] This concept is not always useful, particularly when it comes to hybrids. Other species concepts include the phylogenetic species concept, which describes a species as the smallest identifiable monophyletic group of organisms within which there is a parental pattern of ancestry and descent. [ 57 ] This concept defines species on the basis of identifiability. It would also suggest that until two identifiable groups actually produce offspring, they remain separate species. In 1999, John Avise and Glenn Johns suggested a standardized method for defining species based on past speciation, measuring biological classifications as time dependent. Their method used temporal banding to assign genus, family, and order based on how many tens of millions of years ago the speciation event that resulted in each species took place. [ 62 ]
Landscape genetics is a rapidly emerging interdisciplinary field within molecular ecology. Landscape genetics relates genetics to landscape characteristics, such as land cover (forests, agriculture, roads), the presence of barriers and corridors, rivers, and elevation. Landscape genetics addresses how landscape features affect dispersal and gene flow.
Barriers are any landscape features that prevent dispersal. [ 26 ] Barriers for terrestrial species can include mountains, rivers, roads, and unsuitable terrain, such as agricultural fields. Barriers for aquatic species can include islands or dams. Barriers are species-specific; for example, a river is a barrier to a field mouse, while a hawk can fly over a river. Corridors are areas over which dispersal is possible. [ 26 ] Corridors are stretches of suitable habitat and can also be man-made, such as overpasses over roads and fish ladders on dams.
Geographic data used for landscape genetics can include data collected by radar on aircraft, land satellite data, marine data collected by NOAA, as well as any other ecological data. In landscape genetics, researchers often use different analyses to attempt to determine the best way for a species to travel from point A to point B. Least-cost-path analysis uses geographic data to determine the most efficient path from one point to another. [ 26 ] : 131, 337–341 Circuitscape analysis predicts all possible paths and the probability of each path's use between point A and point B. These analyses are used to determine the route a dispersing individual is likely to travel.
Landscape genetics is becoming an increasingly important tool in wildlife conservation efforts. It is being used to determine how habitat loss and fragmentation affects the movement of species. [ 63 ] It is also used to determine which species need to be managed and whether to manage subpopulations the same or differently according to their gene flow. | https://en.wikipedia.org/wiki/Molecular_ecology |
In theoretical chemistry , molecular electronic transitions take place when electrons in a molecule are excited from one energy level to a higher energy level. The energy change associated with this transition provides information on the structure of the molecule and determines many of its properties, such as colour . The relationship between the energy involved in the electronic transition and the frequency of radiation is given by Planck's relation .
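As a worked example of Planck's relation, E = hν = hc/λ, the following short Python snippet converts an absorption wavelength into a transition energy. The 255 nm value anticipates the benzene B-band discussed below; the constants are standard CODATA values.

```python
# Energy of an electronic transition from its absorption wavelength.
h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e8      # speed of light, m/s
wavelength = 255e-9   # 255 nm, in metres

energy_joule = h * c / wavelength
energy_ev = energy_joule / 1.602176634e-19  # convert J to eV
print(f"{energy_ev:.2f} eV")  # ~4.86 eV
```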
The electronic transitions in organic compounds and some other compounds can be determined by ultraviolet–visible spectroscopy , provided that transitions in the ultraviolet (UV) or visible range of the electromagnetic spectrum exist for the compound. [ 1 ] [ 2 ] Electrons occupying a HOMO (highest-occupied molecular orbital) of a sigma bond (σ) can be excited to the LUMO (lowest-unoccupied molecular orbital) of that bond. This process is denoted as a σ → σ* transition. Likewise, promotion of an electron from a pi-bonding orbital (π) to an antibonding pi orbital (π*) is denoted as a π → π* transition. Auxochromes with free electron pairs (denoted as "n") have their own transitions, as do aromatic pi bond transitions. Sections of molecules which can undergo such detectable electron transitions can be referred to as chromophores , since such transitions absorb electromagnetic radiation (light), which may be perceived as color if it falls within the visible part of the spectrum. Molecular electronic transitions of this kind thus include σ → σ*, π → π*, n → σ*, and n → π* transitions.
In addition to these assignments, electronic transitions also have so-called bands associated with them, first defined by A. Burawoy in 1930; [ 3 ] these include the E- and B-bands of aromatic compounds such as benzene, discussed below.
For example, the absorption spectrum of ethane shows a σ → σ* transition at 135 nm, and that of water an n → σ* transition at 167 nm with an extinction coefficient of 7,000. Benzene has three aromatic π → π* transitions: two E-bands at 180 and 200 nm and one B-band at 255 nm, with extinction coefficients of 60,000, 8,000, and 215, respectively. These absorptions are not narrow bands but are generally broad, because the electronic transitions are superimposed on the other molecular energy states .
The electronic transitions of molecules in solution can depend strongly on the type of solvent with additional bathochromic shifts or hypsochromic shifts .
Spectral lines are associated with atomic electronic transitions and polyatomic gases have their own absorption band system. [ 4 ] | https://en.wikipedia.org/wiki/Molecular_electronic_transition |
Molecular electronics is the study and application of molecular building blocks for the fabrication of electronic components. It is an interdisciplinary area that spans physics , chemistry , and materials science . It provides a potential means to extend Moore's Law beyond the foreseen limits of small-scale conventional silicon integrated circuits . [ 1 ]
Molecular scale electronics , also called single-molecule electronics, is a branch of nanotechnology that uses single molecules, or nanoscale collections of single molecules, as electronic components . Because single molecules constitute the smallest stable structures possible, this miniaturization is the ultimate goal for shrinking electrical circuits .
Conventional electronic devices are traditionally made from bulk materials. Bulk methods have inherent limits, and are growing increasingly demanding and costly. Thus, the idea was born that the components could instead be built up atom by atom in a chemistry lab (bottom up) as opposed to carving them out of bulk material (top down). In single-molecule electronics, the bulk material is replaced by single molecules. The molecules used have properties that resemble traditional electronic components such as a wire , transistor , or rectifier . [ 2 ]
Single-molecule electronics is an emerging field, and entire electronic circuits consisting exclusively of molecular sized compounds are still very far from being realized. However, the continuous demand for more computing power, together with the inherent limits of the present day lithographic methods make the transition seem unavoidable. Currently, the focus is on discovering molecules with interesting properties and on finding ways to obtain reliable and reproducible contacts between the molecular components and the bulk material of the electrodes.
Molecular electronics operates at distances of less than 100 nanometers. Miniaturization down to single molecules brings the scale down to a regime where quantum mechanical effects are important. In contrast to conventional electronic components, where electrons can be filled in or drawn out more or less like a continuous flow of electric charge , the transfer of a single electron alters the system significantly. The significant amount of energy due to charging has to be taken into account when calculating the electronic properties of the setup, and this energy is highly sensitive to the distances to nearby conducting surfaces.
One of the biggest problems with measuring single molecules is establishing reproducible electrical contact with only one molecule without short-circuiting the electrodes. Because current photolithographic technology is unable to produce electrode gaps small enough to contact both ends of the molecules tested (on the order of nanometers), alternative strategies are used. These include molecular-sized gaps called break junctions, in which a thin electrode is stretched until it breaks. One way to overcome the gap-size issue is to trap molecularly functionalized nanoparticles (the internanoparticle spacing can be matched to the size of the molecules) and later target the molecule by a place-exchange reaction. [ 3 ]
Another method is to use the tip of a scanning tunneling microscope (STM) to contact molecules adhered at the other end to a metal substrate. [ 4 ] Another popular way to anchor molecules to the electrodes is to exploit sulfur 's high chemical affinity for gold . Though useful, this anchoring is non-specific and thus attaches the molecules randomly to all gold surfaces, and the contact resistance is highly dependent on the precise atomic geometry around the anchoring site, which inherently compromises the reproducibility of the connection. To circumvent the latter issue, experiments have shown that fullerenes could be a good candidate for use instead of sulfur, because their large conjugated π-system can electrically contact many more atoms at once than a single atom of sulfur. [ 5 ]
The shift from metal electrodes to semiconductor electrodes allows for more tailored properties and thus for more interesting applications. There are some concepts for contacting organic molecules using semiconductor-only electrodes, for example by using indium arsenide nanowires with an embedded segment of the wider bandgap material indium phosphide used as an electronic barrier to be bridged by molecules. [ 6 ]
One of the biggest hindrances for single-molecule electronics to be commercially exploited is the lack of means to connect a molecular sized circuit to bulk electrodes in a way that gives reproducible results. Also problematic is that some measurements on single molecules are done at cryogenic temperatures , near absolute zero, which is very energy consuming.
Molecular electronics was first mentioned in 1956 by the German physicist Arthur von Hippel, [ 7 ] who suggested a bottom-up procedure of developing electronics from atoms and molecules rather than using prefabricated materials, an idea he named molecular engineering. However, many consider the first breakthrough in the field to be the 1974 article by Aviram and Ratner. [ 2 ] In this article, titled "Molecular Rectifiers", they presented a theoretical calculation of transport through a modified charge-transfer molecule with donor and acceptor groups that would allow transport in only one direction, essentially like a semiconductor diode. This breakthrough inspired many years of research in the field of molecular electronics.
The biggest advantage of conductive polymers is their processability, mainly by dispersion . Conductive polymers are not plastics , i.e., they are not thermoformable, yet they are organic polymers, like (insulating) polymers. They can offer high electrical conductivity but have different mechanical properties than other commercially used polymers. The electrical properties can be fine-tuned using the methods of organic synthesis [ 8 ] and of advanced dispersion. [ 9 ]
The linear-backbone polymers such as polyacetylene , polypyrrole , and polyaniline are the main classes of conductive polymers. Poly(3-alkylthiophenes) are the archetypical materials for solar cells and transistors. [ 8 ]
Conducting polymers have backbones of contiguous sp² hybridized carbon centers. One valence electron on each center resides in a pz orbital, which is orthogonal to the other three sigma bonds. The electrons in these delocalized orbitals have high mobility when the material is doped by oxidation, which removes some of these delocalized electrons. The conjugated p orbitals thus form a one-dimensional electronic band , and the electrons within this band become mobile when it is partly emptied. Despite intensive research, the relationship between morphology, chain structure, and conductivity is still poorly understood. [ 10 ]
Due to their poor processability, conductive polymers have few large-scale applications. They have some promise in antistatic materials [ 8 ] and have been built into commercial displays and batteries, but have been limited by production costs, material inconsistencies, toxicity, poor solubility in solvents, and the inability to be directly melt-processed. Nevertheless, conducting polymers are rapidly gaining attention for new uses as increasingly processable materials with better electrical and physical properties and lower costs become available. With the availability of stable and reproducible dispersions, poly(3,4-ethylenedioxythiophene) (PEDOT) and polyaniline have gained some large-scale applications. While PEDOT is mainly used in antistatic applications and as a transparent conductive layer in the form of PEDOT and polystyrene sulfonic acid (PSS, mixed form: PEDOT:PSS) dispersions, polyaniline is widely used in printed circuit boards, in the final finish, to protect copper from corrosion and preserve its solderability. [ 9 ] Newer nanostructured forms of conducting polymers provide fresh impetus to this field, with their higher surface area and better dispersibility.
Recently, supramolecular chemistry has been introduced to the field, providing new opportunities for developing the next generation of molecular electronics. [ 11 ] [ 12 ] For example, a two-order-of-magnitude enhancement in current intensity was achieved by inserting cationic molecules into the cavity of pillar[5]arene. [ 13 ] | https://en.wikipedia.org/wiki/Molecular_electronics
Molecular engineering is an emerging field of study concerned with the design and testing of molecular properties, behavior, and interactions in order to assemble better materials, systems, and processes for specific functions. This approach, in which observable properties of a macroscopic system are influenced by direct alteration of a molecular structure, falls into the broader category of "bottom-up" design . The field is highly relevant to Cheminformatics when connected with research in the Computational Sciences .
Molecular engineering is highly interdisciplinary by nature, encompassing aspects of chemical engineering , materials science , bioengineering , electrical engineering , physics , mechanical engineering , and chemistry . There is also considerable overlap with nanotechnology , in that both are concerned with the behavior of materials on the scale of nanometers or smaller. Given the highly fundamental nature of molecular interactions, there are a plethora of potential application areas, limited perhaps only by one's imagination and the laws of physics. However, some of the early successes of molecular engineering have come in the fields of immunotherapy, synthetic biology, and printable electronics (see molecular engineering applications ).
Molecular engineering is a dynamic and evolving field with complex target problems; breakthroughs require sophisticated and creative engineers who are conversant across disciplines. A rational engineering methodology based on molecular principles stands in contrast to the widespread trial-and-error approaches common throughout engineering disciplines. Rather than relying on well-described but poorly-understood empirical correlations between the makeup of a system and its properties, a molecular design approach seeks to manipulate system properties directly using an understanding of their chemical and physical origins. This often gives rise to fundamentally new materials and systems, which are required to address outstanding needs in numerous fields, from energy to healthcare to electronics. Additionally, with the increased sophistication of technology, trial-and-error approaches are often costly and difficult, as it may be hard to account for all relevant dependencies among the variables of a complex system . Molecular engineering efforts may include computational tools, experimental methods, or a combination of both.
Molecular engineering was first mentioned in the research literature in 1956 by Arthur R. von Hippel , who defined it as "… a new mode of thinking about engineering problems. Instead of taking prefabricated materials and trying to devise engineering applications consistent with their macroscopic properties, one builds materials from their atoms and molecules for the purpose at hand." [ 1 ] This concept was echoed in Richard Feynman's seminal 1959 lecture There's Plenty of Room at the Bottom , which is widely regarded as giving birth to some of the fundamental ideas of the field of nanotechnology . In spite of the early introduction of these concepts, it was not until the mid-1980s with the publication of Engines of Creation: The Coming Era of Nanotechnology by Drexler that the modern concepts of nano and molecular-scale science began to grow in the public consciousness.
The discovery of electrically conductive properties in polyacetylene by Alan J. Heeger in 1977 [ 2 ] effectively opened the field of organic electronics , which has proved foundational for many molecular engineering efforts. Design and optimization of these materials has led to a number of innovations including organic light-emitting diodes and flexible solar cells .
Molecular design has been an important element of many disciplines in academia, including bioengineering, chemical engineering, electrical engineering, materials science, mechanical engineering and chemistry. However, one of the ongoing challenges is in bringing together the critical mass of manpower amongst disciplines to span the realm from design theory to materials production, and from device design to product development. Thus, while the concept of rational engineering of technology from the bottom-up is not new, it is still far from being widely translated into R&D efforts.
Molecular engineering is used in many industries and plays a critical role in a wide range of applied technologies.
Molecular engineers utilize sophisticated tools and instruments to make and analyze the interactions of molecules and the surfaces of materials at the molecular and nano-scale. The complexity of molecules being introduced at the surface is increasing, and the techniques used to analyze surface characteristics at the molecular level are ever-changing and improving. Meanwhile, advancements in high performance computing have greatly expanded the use of computer simulation in the study of molecular scale systems.
At least three universities offer graduate degrees dedicated to molecular engineering: the University of Chicago , [ 18 ] the University of Washington , [ 19 ] and Kyoto University . [ 20 ] These programs are interdisciplinary institutes with faculty from several research areas.
The academic journal Molecular Systems Design & Engineering [ 21 ] publishes research from a wide variety of subject areas that demonstrates "a molecular design or optimisation strategy targeting specific systems functionality and performance." | https://en.wikipedia.org/wiki/Molecular_engineering |
In chemistry and physics , a molecular entity , or chemical entity , is "any constitutionally or isotopically distinct atom , molecule , ion , ion pair, radical , radical ion, complex , conformer , etc., identifiable as a separately distinguishable entity". [ 1 ] A molecular entity is any singular entity, irrespective of its nature, used to concisely express any type of chemical particle that can exemplify some process: for example, atoms, molecules, ions, etc. can all undergo a chemical reaction .
Chemical species is the macroscopic equivalent of molecular entity and refers to sets or ensembles of molecular entities.
According to IUPAC , "The degree of precision necessary to describe a molecular entity depends on the context. For example 'hydrogen molecule' is an adequate definition of a certain molecular entity for some purposes, whereas for others it is necessary to distinguish the electronic state and/or vibrational state and/or nuclear spin , etc. of the hydrogen molecule." | https://en.wikipedia.org/wiki/Molecular_entity
Molecular epidemiology is a branch of epidemiology and medical science that focuses on the contribution of potential genetic and environmental risk factors, identified at the molecular level, to the etiology , distribution and prevention of disease within families and across populations. [ 1 ] This field has emerged from the integration of molecular biology into traditional epidemiological research. Molecular epidemiology improves our understanding of the pathogenesis of disease by identifying specific pathways, molecules and genes that influence the risk of developing disease. [ 2 ] [ 3 ] More broadly, it seeks to establish understanding of how the interactions between genetic traits and environmental exposures result in disease. [ 4 ]
The term "molecular epidemiology" was first coined by Edwin D. Kilbourne in a 1973 article entitled "The molecular epidemiology of influenza". [ 5 ] The term became more formalized with the formulation of the first book on molecular epidemiology titled Molecular Epidemiology: Principles and Practice by Paul A. Schulte and Frederica Perera . [ 6 ] At the heart of this book is the impact of advances in molecular research that have given rise to and enabled the measurement and exploitation of the biomarker as a vital tool to link traditional molecular and epidemiological research strategies to understand the underlying mechanisms of disease in populations. [ citation needed ]
While most molecular epidemiology studies use a conventional disease designation system for an outcome (with exposures assessed at the molecular level), compelling evidence indicates that disease evolution is an inherently heterogeneous process that differs from person to person. Conceptually, each individual has a unique disease process different from that of any other individual ("the unique disease principle"), [ 7 ] considering the uniqueness of the exposome and its unique influence on the molecular pathologic process in each individual. Studies examining the relationship between an exposure and the molecular pathologic signature of disease (particularly, cancer) became increasingly common throughout the 2000s. However, the use of molecular pathology in epidemiology posed unique challenges, including a lack of standardized methodologies and guidelines as well as a paucity of interdisciplinary experts and training programs. [ 8 ] [ 9 ] The use of "molecular epidemiology" for this type of research masked the presence of these challenges and hindered the development of methods and guidelines. [ 10 ] [ 11 ] Furthermore, the concept of disease heterogeneity appears to conflict with the premise that individuals with the same disease name have similar etiologies and disease processes.
The genome of a bacterial species fundamentally determines its identity. Thus, gel electrophoresis techniques like pulsed-field gel electrophoresis can be used in molecular epidemiology to comparatively analyze patterns of bacterial chromosomal fragments and to elucidate the genomic content of bacterial cells. Due to its widespread use and ability to analyse epidemiological information about most bacterial pathogens based on their molecular markers, pulsed-field gel electrophoresis is relied upon heavily in molecular epidemiological studies. [ 12 ]
Molecular epidemiology allows for an understanding of the molecular outcomes and implications of diet, lifestyle, and environmental exposure, particularly how these choices and exposures result in acquired genetic mutations and how these mutations are distributed throughout selected populations, through the use of biomarkers and genetic information. Molecular epidemiological studies can also provide additional understanding of previously identified risk factors and disease mechanisms. [ 13 ]
While the use of advanced molecular analysis techniques within the field of molecular epidemiology provides the larger field of epidemiology with greater means of analysis, Miquel Porta identified several challenges that molecular epidemiology faces, particularly selecting and incorporating requisite applicable data in an unbiased manner. [ 15 ] Limitations of molecular epidemiological studies are similar in nature to those of generic epidemiological studies: samples of convenience (both of the target population and of genetic information), small sample sizes, inappropriate statistical methods, poor quality control, and poor definition of target populations. [ 16 ] | https://en.wikipedia.org/wiki/Molecular_epidemiology
Molecular fragmentation (mass spectrometry) , or molecular dissociation, occurs both in nature and in experiments. It occurs when a complete molecule is rendered into smaller fragments by some energy source, usually ionizing radiation . The resulting fragments can be far more chemically reactive than the original molecule, as in radiation therapy for cancer, and are thus a useful field of inquiry. Different molecular fragmentation methods have been built to break apart molecules, some of which are listed below.
A major objective of theoretical chemistry and computational chemistry is the calculation of the energy and properties of molecules so that chemical reactivity and material properties can be understood from first principles. As a practical matter, the aim is to complement the knowledge we gain from experiments, particularly where experimental data may be incomplete or very difficult to obtain.
High-level ab initio quantum chemistry methods are known to be an invaluable tool for understanding the structure, energy, and properties of small to medium-sized molecules. However, the computational time for these calculations grows rapidly with the size of the molecule. One way of dealing with this problem is the molecular fragmentation approach, which provides a hierarchy of approximations to the molecular electronic energy. In this approach, large molecules are divided in a systematic way into small fragments, for which high-level ab initio calculations can be performed in acceptable computational time.
The defining characteristic of an energy-based molecular fragmentation method is that the molecule (or a cluster of molecules, or a liquid or solid) is broken up into a set of relatively small molecular fragments, in such a way that the electronic energy E_F of the full system F is given by a sum of the energies of these fragment molecules:

E_F = \sum_{i=1}^{N_{frag}} (c_i E_i) + \epsilon_F

where E_i is the energy of a relatively small molecular fragment F_i, the c_i are simple coefficients (typically integers), and N_{frag} is the number of fragment molecules. Some of the methods also require a correction to the energies evaluated from the fragments. However, where necessary, this correction, \epsilon_F, is easily computed. [ 1 ]
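A minimal sketch of this bookkeeping is given below; the fragment energies and coefficients are invented for illustration, whereas in a real calculation each E_i would come from an ab initio run on fragment i.

```python
# Energy-based fragmentation: E_F = sum_i c_i * E_i + epsilon_F.
def fragment_energy(fragment_energies, coefficients, correction=0.0):
    assert len(fragment_energies) == len(coefficients)
    return sum(c * e for c, e in zip(coefficients, fragment_energies)) + correction

# Hypothetical example in hartrees: two overlapping fragments counted once
# each, with their shared region subtracted (coefficient -1).
E_total = fragment_energy(
    fragment_energies=[-154.1, -155.3, -77.2],
    coefficients=[1, 1, -1],
    correction=0.002,  # epsilon_F, the (small) correction term
)
print(E_total)  # -232.198
```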
Several energy-based methods have been devised to fragment molecules, differing in how the fragments are chosen and in how the correction term is evaluated. | https://en.wikipedia.org/wiki/Molecular_fragmentation_methods
Molecular genetics is a branch of biology that addresses how differences in the structures or expression of DNA molecules manifest as variation among organisms. Molecular genetics often applies an "investigative approach" to determine the structure and/or function of genes in an organism's genome using genetic screens . [ 1 ] [ 2 ]
The field of study is based on the merging of several sub-fields in biology: classical Mendelian inheritance , cellular biology , molecular biology , biochemistry , and biotechnology . It integrates these disciplines to explore genetic inheritance, gene regulation and expression, and the molecular mechanisms behind various life processes. [ 1 ]
A key goal of molecular genetics is to identify and study genetic mutations. Researchers search for mutations in a gene, or induce mutations in a gene, to link a gene sequence to a specific phenotype. [ 3 ] Molecular genetics is therefore a powerful methodology for linking mutations to genetic conditions, which may aid the search for treatments of various genetic diseases.
The discovery of DNA as the blueprint for life and breakthroughs in molecular genetics research came from the combined works of many scientists. In 1869, the chemist Johann Friedrich Miescher , who was researching the composition of white blood cells, discovered and isolated from the cell nucleus a new molecule that he named nuclein; this was the first isolation of the molecule DNA, later determined to be the molecular basis of life. He determined it was composed of hydrogen, oxygen, nitrogen and phosphorus. [ 4 ] The biochemist Albrecht Kossel identified nuclein as a nucleic acid and provided its name, deoxyribonucleic acid (DNA). He built on this by isolating the basic building blocks of DNA and RNA , the nucleotides : adenine, guanine, thymine, cytosine, and uracil. His work on nucleotides earned him a Nobel Prize in Physiology or Medicine. [ 5 ]
In the mid-1800s, Gregor Mendel , who became known as one of the fathers of genetics , made great contributions to the field through his experiments with pea plants, in which he discovered principles of inheritance such as recessive and dominant traits without knowing what genes were composed of. [ 6 ] In the late 19th century, the anatomist Walther Flemming discovered what we now know as chromosomes and the separation process they undergo during mitosis. His work, along with that of Theodor Boveri, led to the chromosomal theory of inheritance, which helped explain some of the patterns Mendel had observed much earlier. [ 7 ]
For molecular genetics to develop as a discipline, several scientific discoveries were necessary. The discovery of DNA as a means to transfer the genetic code of life from one cell to another and between generations was essential for identifying the molecule responsible for heredity . Molecular genetics arose initially from studies involving genetic transformation in bacteria . In 1944, Avery, MacLeod and McCarty [ 8 ] isolated DNA from a virulent strain of S. pneumoniae , and using just this DNA were able to convert a harmless strain to virulence. They called the uptake, incorporation and expression of DNA by bacteria "transformation". This finding suggested that DNA is the genetic material of bacteria. [ 9 ] Bacterial transformation is often induced by conditions of stress, and the function of transformation appears to be repair of genomic damage . [ 9 ]
In 1950, Erwin Chargaff derived rules that offered evidence of DNA being the genetic material of life. These were "1) that the base composition of DNA varies between species and 2) in natural DNA molecules, the amount of adenine (A) is equal to the amount of thymine (T), and the amount of guanine (G) is equal to the amount of cytosine (C)." [ 10 ] These rules, known as Chargaff's rules, helped lay the groundwork for molecular genetics. [ 10 ] In 1953, Francis Crick and James Watson, building upon the X-ray crystallography work done by Rosalind Franklin and Maurice Wilkins, derived the 3-D double helix structure of DNA. [ 11 ]
The phage group was an informal network of biologists centered on Max Delbrück that contributed substantially to molecular genetics and the origins of molecular biology during the period from about 1945 to 1970. [ 12 ] The phage group took its name from bacteriophages , the bacteria-infecting viruses that the group used as experimental model organisms. Studies by molecular geneticists affiliated with this group contributed to understanding how gene-encoded proteins function in DNA replication , DNA repair and DNA recombination , and on how viruses are assembled from protein and nucleic acid components (molecular morphogenesis). Furthermore, the role of chain terminating codons was elucidated. One noteworthy study was performed by Sydney Brenner and collaborators using "amber" mutants defective in the gene encoding the major head protein of bacteriophage T4. [ 13 ] This study demonstrated the co-linearity of the gene with its encoded polypeptide, thus providing strong evidence for the "sequence hypothesis" that the amino acid sequence of a protein is specified by the nucleotide sequence of the gene determining the protein.
The isolation of a restriction endonuclease in E. coli by Arber and Linn in 1969 opened the field of genetic engineering . [ 14 ] Restriction enzymes were used to linearize DNA for separation by electrophoresis and Southern blotting allowed for the identification of specific DNA segments via hybridization probes . [ 15 ] [ 16 ] In 1971, Berg utilized restriction enzymes to create the first recombinant DNA molecule and first recombinant DNA plasmid . [ 17 ] In 1972, Cohen and Boyer created the first recombinant DNA organism by inserting recombinant DNA plasmids into E. coli , now known as bacterial transformation , and paved the way for molecular cloning. [ 18 ] The development of DNA sequencing techniques in the late 1970s, first by Maxam and Gilbert, and then by Frederick Sanger , was pivotal to molecular genetic research and enabled scientists to begin conducting genetic screens to relate genotypic sequences to phenotypes. [ 19 ] Polymerase chain reaction (PCR) using Taq polymerase, invented by Mullis in 1985, enabled scientists to create millions of copies of a specific DNA sequence that could be used for transformation or manipulated using agarose gel separation. [ 20 ] A decade later, the first whole genome was sequenced ( Haemophilus influenzae ), followed by the eventual sequencing of the human genome via the Human Genome Project in 2001. [ 21 ] The culmination of all of those discoveries was a new field called genomics that links the molecular structure of a gene to the protein or RNA encoded by that segment of DNA and the functional expression of that protein within an organism. [ 22 ] Today, through the application of molecular genetic techniques, genomics is being studied in many model organisms and data is being collected in computer databases like NCBI and Ensembl . The computer analysis and comparison of genes within and between different species is called bioinformatics , and links genetic mutations on an evolutionary scale. [ 23 ]
The central dogma plays a key role in the study of molecular genetics. It states that DNA replicates itself, DNA is transcribed into RNA, and RNA is translated into proteins. [ 24 ] Along with the central dogma, the genetic code is used in understanding how RNA is translated into proteins. Replication of DNA and transcription from DNA to mRNA occur in the nucleus, while translation from RNA to proteins occurs in the ribosome . [ 25 ] The genetic code is made of four interchangeable parts of the DNA molecule, called "bases": adenine, cytosine, uracil (in RNA; thymine in DNA), and guanine. It is redundant, meaning that multiple combinations of these bases (which are read in triplets) produce the same amino acid. [ 26 ] Proteomics and genomics are fields in biology that grew out of the study of molecular genetics and the central dogma. [ 27 ]
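The redundancy of the triplet code can be made concrete with a toy translation sketch; the dictionary below is a deliberately incomplete, illustrative fragment of the standard codon table.

```python
# Partial codon table (mRNA triplets -> amino acids) illustrating redundancy:
# all four GG_ codons encode glycine.
CODON_TABLE = {
    "GGU": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
    "UUU": "Phe", "UUC": "Phe",
    "AUG": "Met",  # also the start codon
}

def translate(mrna):
    """Read an mRNA string in triplets and return the encoded peptide."""
    return [CODON_TABLE[mrna[i:i + 3]] for i in range(0, len(mrna), 3)]

print(translate("AUGGGUGGC"))  # ['Met', 'Gly', 'Gly'] - two codons, one residue
```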
An organism's genome is made up of its entire set of DNA and is responsible for its genetic traits, function, and development. The composition of DNA itself is an essential component of the field of molecular genetics; it is the basis of how DNA is able to store genetic information, pass it on, and exist in a format that can be read and translated. [ 28 ]
DNA is a double-stranded molecule, with each strand oriented in an antiparallel fashion. Nucleotides are the building blocks of DNA, each composed of a sugar molecule, a phosphate group, and one of four nitrogenous bases: adenine, guanine, cytosine, and thymine. A single strand of DNA is held together by covalent bonds, while the two antiparallel strands are held together by hydrogen bonds between the nucleotide bases. Adenine binds with thymine and cytosine binds with guanine. The sequence of these four bases forms the genetic code for all biological life and contains the information for all the proteins the organism will be able to synthesize. [ 29 ]
Its unique structure allows DNA to store and pass on biological information across generations during cell division . At cell division, cells must copy their genome and pass it on to daughter cells. This is possible because of DNA's double-stranded structure: each strand is complementary to its partner, and therefore each strand can act as a template for the formation of a new complementary strand. This is why DNA replication is known as a semiconservative process. [ 30 ]
Forward genetics is a molecular genetics technique used to identify genes or genetic mutations that produce a certain phenotype . In a genetic screen , random mutations are generated with mutagens (chemicals or radiation) or transposons and individuals are screened for the specific phenotype. Often, a secondary assay in the form of a selection may follow mutagenesis where the desired phenotype is difficult to observe, for example in bacteria or cell cultures. The cells may be transformed using a gene for antibiotic resistance or a fluorescent reporter so that the mutants with the desired phenotype are selected from the non-mutants. [ 31 ]
Mutants exhibiting the phenotype of interest are isolated and a complementation test may be performed to determine if the phenotype results from more than one gene. The mutant genes are then characterized as dominant (resulting in a gain of function), recessive (showing a loss of function), or epistatic (the mutant gene masks the phenotype of another gene). Finally, the location and specific nature of the mutation is mapped via sequencing . [ 32 ] Forward genetics is an unbiased approach and often leads to many unanticipated discoveries, but may be costly and time consuming. Model organisms like the nematode worm Caenorhabditis elegans , the fruit fly Drosophila melanogaster , and the zebrafish Danio rerio have been used successfully to study phenotypes resulting from gene mutations. [ 33 ]
Reverse genetics is the term for molecular genetics techniques used to determine the phenotype resulting from an intentional mutation in a gene of interest. The phenotype is used to deduce the function of the un-mutated version of the gene. Mutations may be random or intentional changes to the gene of interest. Mutations may be a missense mutation caused by nucleotide substitution, a nucleotide addition or deletion to induce a frameshift mutation , or a complete addition/deletion of a gene or gene segment. The deletion of a particular gene creates a gene knockout where the gene is not expressed and a loss of function results (e.g. knockout mice ). Missense mutations may cause total loss of function or result in partial loss of function, known as a knockdown. Knockdown may also be achieved by RNA interference (RNAi). [ 35 ] Alternatively, genes may be substituted into an organism's genome (also known as a transgene ) to create a gene knock-in and result in a gain of function by the host. [ 36 ] Although these techniques have some inherent bias regarding the decision to link a phenotype to a particular function, it is much faster in terms of production than forward genetics because the gene of interest is already known.
Molecular genetics is a scientific approach that utilizes the fundamentals of genetics as a tool to better understand the molecular basis of a disease and biological processes in organisms. Below are some tools readily employed by researchers in the field.
Microsatellites, or simple sequence repeats (SSRs), are short repeating segments of DNA, composed of motifs of one to six nucleotides, at particular locations in the genome that are used as genetic markers. Researchers can analyze these microsatellites in techniques such as DNA fingerprinting and paternity testing, since these repeats are highly unique to individuals and families. They can also be used in constructing genetic maps and studying genetic linkage to locate the gene or mutation responsible for a specific trait or disease. Microsatellites can also be applied in population genetics to study comparisons between groups. [ 37 ]
Genome-wide association studies (GWAS) rely on single nucleotide polymorphisms ( SNPs ) to study genetic variations in populations that can be associated with a particular disease. The Human Genome Project mapped the entire human genome and has made this approach more readily available and cost effective for researchers to implement. To conduct a GWAS, researchers use two groups: one group that has the disease being studied, and a control group that does not. DNA samples are obtained from participants, and their genomes are then genotyped and quickly surveyed to compare participants and look for SNPs that may be associated with the disease. This technique allows researchers to pinpoint genes and locations of interest in the human genome that they can then study further to identify the cause of the disease. [ 38 ]
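Schematically, the core comparison for a single SNP reduces to a contingency-table test of allele counts in cases versus controls. The counts below are invented for illustration, and the sketch assumes the scipy library.

```python
# Case-control allele-count test for one SNP (illustrative numbers only).
from scipy.stats import chi2_contingency

#            allele A, allele a
cases    = [540, 460]   # allele counts in the disease group
controls = [480, 520]   # allele counts in the control group

chi2, p_value, dof, expected = chi2_contingency([cases, controls])
print(f"p = {p_value:.4f}")
# In a real GWAS this test is repeated for every SNP, with a stringent
# genome-wide threshold (commonly p < 5e-8) to correct for multiple testing.
```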
Karyotyping allows researchers to analyze chromosomes during the metaphase of mitosis, when they are in a condensed state. Chromosomes are stained and visualized through a microscope to look for chromosomal abnormalities. This technique can be used to detect congenital genetic disorders such as Down syndrome , to identify sex in embryos, and to diagnose some cancers caused by chromosome mutations such as translocations. [ 39 ]
Genetic engineering is an emerging field of science in which researchers leverage molecular genetic technology to modify the DNA of organisms and create genetically modified and enhanced organisms for industrial, agricultural, and medical purposes. This can be done through genome editing techniques, which can involve modifying base pairings in a DNA sequence or adding and deleting certain regions of DNA. [ 40 ]
Gene editing allows scientists to alter or edit an organism's DNA. One way to do this is with the technique CRISPR/Cas9 , which was adapted from a naturally occurring genome defense system in bacteria. The technique relies on the protein Cas9, which allows scientists to cut strands of DNA at a specific location, and it uses a specialized RNA guide sequence to ensure the cut is made in the proper location in the genome. Scientists then use the cell's DNA repair pathways to induce changes in the genome; this technique has wide implications for disease treatment. [ 41 ]
Molecular genetics has wide implications for medical advancement, and understanding the molecular basis of a disease allows the opportunity for more effective diagnostics and therapies. One of the goals of the field is personalized medicine , in which an individual's genetics can help determine the cause of a disease and tailor its cure, potentially allowing for more individualized, and hence more effective, treatment approaches. For example, certain genetic variations in individuals could make them more receptive to a particular drug, while others could carry a higher risk of adverse reactions to treatment. This information allows researchers and clinicians to make better-informed decisions about treatment efficacy for patients than the standard trial-and-error approach. [ 42 ]
Forensic genetics plays an essential role in criminal investigations through the use of various molecular genetic techniques. One common technique is DNA fingerprinting, which is done using a combination of molecular genetic techniques such as the polymerase chain reaction (PCR) and gel electrophoresis . PCR allows a target DNA sequence to be amplified, meaning that even a tiny quantity of DNA from a crime scene can be extracted and replicated many times to provide a sufficient amount of material for analysis. Gel electrophoresis separates DNA fragments based on size, and the resulting pattern, known as a DNA fingerprint, is unique to each individual. This combination of techniques allows a simple DNA sample to be extracted, amplified, analyzed, and compared with others, and is a standard technique used in forensics. [ 43 ] | https://en.wikipedia.org/wiki/Molecular_genetics
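The amplification arithmetic behind PCR is simple: in the idealized case each thermal cycle doubles the number of copies of the target sequence, so n cycles yield up to 2^n copies per starting template. A one-line illustration:

```python
# Idealized PCR amplification (100% efficiency per cycle).
starting_copies = 10          # e.g. trace DNA recovered from a crime scene
cycles = 30                   # a typical PCR run
copies = starting_copies * 2 ** cycles
print(f"{copies:.3e}")        # ~1.074e+10 copies
```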
Molecular geometry is the three-dimensional arrangement of the atoms that constitute a molecule . It includes the general shape of the molecule as well as bond lengths , bond angles , torsional angles and any other geometrical parameters that determine the position of each atom.
Molecular geometry influences several properties of a substance including its reactivity , polarity , phase of matter , color , magnetism and biological activity . [ 1 ] [ 2 ] [ 3 ] The angles between bonds that an atom forms depend only weakly on the rest of a molecule, i.e. they can be understood as approximately local and hence transferable properties .
The molecular geometry can be determined by various spectroscopic methods and diffraction methods. IR , microwave and Raman spectroscopy can give information about the molecule geometry from the details of the vibrational and rotational absorbance detected by these techniques. X-ray crystallography , neutron diffraction and electron diffraction can give molecular structure for crystalline solids based on the distance between nuclei and concentration of electron density. Gas electron diffraction can be used for small molecules in the gas phase. NMR and FRET methods can be used to determine complementary information including relative distances, [ 4 ] [ 5 ] [ 6 ] dihedral angles, [ 7 ] [ 8 ] angles, and connectivity. Molecular geometries are best determined at low temperature because at higher temperatures the molecular structure is averaged over more accessible geometries (see next section). Larger molecules often exist in multiple stable geometries ( conformational isomerism ) that are close in energy on the potential energy surface . Geometries can also be computed by ab initio quantum chemistry methods to high accuracy. The molecular geometry can be different as a solid, in solution, and as a gas.
The position of each atom is determined by the nature of the chemical bonds by which it is connected to its neighboring atoms. The molecular geometry can be described by the positions of these atoms in space, evoking bond lengths of two joined atoms, bond angles of three connected atoms, and torsion angles ( dihedral angles ) of three consecutive bonds.
Since the motions of the atoms in a molecule are determined by quantum mechanics, "motion" must be defined in a quantum mechanical way. The overall (external) quantum mechanical motions translation and rotation hardly change the geometry of the molecule. (To some extent rotation influences the geometry via Coriolis forces and centrifugal distortion , but this is negligible for the present discussion.) In addition to translation and rotation, a third type of motion is molecular vibration , which corresponds to internal motions of the atoms such as bond stretching and bond angle variation. The molecular vibrations are harmonic (at least to good approximation), and the atoms oscillate about their equilibrium positions, even at the absolute zero of temperature. At absolute zero all atoms are in their vibrational ground state and show zero point quantum mechanical motion , so that the wavefunction of a single vibrational mode is not a sharp peak, but approximately a Gaussian function (the wavefunction for n = 0 depicted in the article on the quantum harmonic oscillator ). At higher temperatures the vibrational modes may be thermally excited (in a classical interpretation one expresses this by stating that "the molecules will vibrate faster"), but they oscillate still around the recognizable geometry of the molecule.
To get a feeling for the probability that the vibration of a molecule may be thermally excited, we inspect the Boltzmann factor β ≡ exp(−ΔE / kT), where ΔE is the excitation energy of the vibrational mode, k the Boltzmann constant, and T the absolute temperature. At 298 K (25 °C), typical values for the Boltzmann factor β are β ≈ 0.089 for ΔE = 500 cm−1, β ≈ 0.008 for ΔE = 1000 cm−1, and β ≈ 7 × 10−4 for ΔE = 1500 cm−1.
(The reciprocal centimeter is an energy unit commonly used in infrared spectroscopy ; 1 cm−1 corresponds to 1.239 84 × 10−4 eV .) When the excitation energy is 500 cm−1, about 8.9 percent of the molecules are thermally excited at room temperature. To put this in perspective: the lowest vibrational excitation energy in water is the bending mode (about 1600 cm−1). Thus, at room temperature less than 0.07 percent of all the molecules of a given amount of water will vibrate faster than at absolute zero.
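These Boltzmann factors are easy to reproduce; the short Python sketch below uses the Boltzmann constant expressed in reciprocal centimeters per kelvin.

```python
# beta = exp(-dE / kT), with vibrational energies dE in cm^-1.
import math

k_cm = 0.695035  # Boltzmann constant in cm^-1 per kelvin
T = 298.0        # room temperature, K

for dE in (500, 1000, 1600):  # cm^-1; ~1600 is water's bending mode
    beta = math.exp(-dE / (k_cm * T))
    print(f"{dE:5d} cm^-1 -> beta = {beta:.2e}")
# 500 cm^-1 gives ~8.9e-02 (the 8.9 percent quoted above);
# 1600 cm^-1 gives ~4.4e-04, below the 0.07 percent mentioned for water.
```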
As stated above, rotation hardly influences the molecular geometry. But, as a quantum mechanical motion, it is thermally excited at relatively low temperatures (as compared to vibration). From a classical point of view, it can be stated that at higher temperatures more molecules will rotate faster, which implies that they have higher angular velocity and angular momentum . In quantum mechanical language: more eigenstates of higher angular momentum become thermally populated with rising temperature. Typical rotational excitation energies are on the order of a few cm−1. The results of many spectroscopic experiments are broadened because they involve an averaging over rotational states. It is often difficult to extract geometries from spectra at high temperatures, because the number of rotational states probed in the experimental averaging increases with increasing temperature. Thus, many spectroscopic observations can only be expected to yield reliable molecular geometries at temperatures close to absolute zero, because at higher temperatures too many higher rotational states are thermally populated.
Molecules, by definition, are most often held together with covalent bonds involving single, double, and/or triple bonds, where a "bond" is a shared pair of electrons (the other method of bonding between atoms is called ionic bonding and involves a positive cation and a negative anion ).
Molecular geometries can be specified in terms of 'bond lengths', 'bond angles' and 'torsional angles'. The bond length is defined to be the average distance between the nuclei of two atoms bonded together in any given molecule. A bond angle is the angle formed between three atoms across at least two bonds. For four atoms bonded together in a chain, the torsional angle is the angle between the plane formed by the first three atoms and the plane formed by the last three atoms.
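These parameters can be computed directly from Cartesian coordinates. The following sketch, with illustrative water-like coordinates and assuming numpy, shows the bond-length and bond-angle calculations.

```python
# Geometric parameters from Cartesian coordinates (angstroms).
import numpy as np

def bond_length(a, b):
    return np.linalg.norm(b - a)

def bond_angle(a, b, c):
    """Angle at atom b (degrees) between the bonds b-a and b-c."""
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(cos))

O  = np.array([0.0, 0.0, 0.0])         # oxygen at the origin
H1 = np.array([0.9584, 0.0, 0.0])      # first hydrogen
H2 = np.array([-0.2392, 0.9281, 0.0])  # second hydrogen

print(bond_length(O, H1))     # ~0.958, the O-H bond length of water
print(bond_angle(H1, O, H2))  # ~104.5 degrees, the H-O-H bond angle
```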
There exists a mathematical relationship among the bond angles for one central atom and four peripheral atoms (labeled 1 through 4), expressed by the following determinant. This constraint removes one degree of freedom from the choices of (originally) six free bond angles, leaving only five independent choices of bond angles. (The angles θ11, θ22, θ33, and θ44 are always zero, and this relationship can be modified for a different number of peripheral atoms by expanding or contracting the square matrix.)
0 = \begin{vmatrix} \cos\theta_{11} & \cos\theta_{12} & \cos\theta_{13} & \cos\theta_{14} \\ \cos\theta_{21} & \cos\theta_{22} & \cos\theta_{23} & \cos\theta_{24} \\ \cos\theta_{31} & \cos\theta_{32} & \cos\theta_{33} & \cos\theta_{34} \\ \cos\theta_{41} & \cos\theta_{42} & \cos\theta_{43} & \cos\theta_{44} \end{vmatrix}
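The constraint can be checked numerically for a perfect tetrahedron (e.g. methane), where every angle between peripheral atoms is arccos(−1/3) ≈ 109.47°; the sketch below assumes numpy.

```python
# Numerical check of the determinant constraint for a tetrahedral geometry.
import numpy as np

cos_t = -1.0 / 3.0        # cosine of the ideal tetrahedral angle
M = np.full((4, 4), cos_t)
np.fill_diagonal(M, 1.0)  # cos(theta_ii) = cos(0) = 1

print(np.linalg.det(M))   # ~0, so the constraint is satisfied
```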
Molecular geometry is determined by the quantum mechanical behavior of the electrons. Using the valence bond approximation this can be understood by the type of bonds between the atoms that make up the molecule. When atoms interact to form a chemical bond , the atomic orbitals of each atom are said to combine in a process called orbital hybridisation . The two most common types of bonds are sigma bonds (usually formed by hybrid orbitals) and pi bonds (formed by unhybridized p orbitals for atoms of main group elements ). The geometry can also be understood by molecular orbital theory where the electrons are delocalised.
An understanding of the wavelike behavior of electrons in atoms and molecules is the subject of quantum chemistry .
Isomers are molecules that share a chemical formula but have different geometries, resulting in different properties.
A bond angle is the geometric angle between two adjacent bonds. Common shapes of simple molecules include linear, bent, trigonal planar, tetrahedral, trigonal bipyramidal, and octahedral geometries.
The bond angles in the table below are ideal angles from the simple VSEPR theory (pronounced "Vesper Theory") [ citation needed ] , followed by the actual angle for the example given in the following column where this differs. For many cases, such as trigonal pyramidal and bent, the actual angle for the example differs from the ideal angle, and the size of the deviation varies between molecules. For example, the angle in H2S (92°) differs from the tetrahedral angle by much more than the angle for H2O (104.48°) does.
The greater the number of lone pairs contained in a molecule, the smaller the angles between the atoms of that molecule. The VSEPR theory predicts that lone pairs repel each other, thus pushing the different atoms away from them. | https://en.wikipedia.org/wiki/Molecular_geometry |
A molecular glue is a type of small molecule that modulates protein–protein interactions in cells by enhancing the affinity between proteins. These compounds can induce novel interactions between proteins (type I) or stabilize pre-existing ones (type II), offering an alternative strategy to traditional drug discovery . Molecular glues have shown promise in targeting proteins previously considered "undruggable" by conventional methods. They work through various mechanisms, such as promoting protein degradation or inhibiting protein function , and are being studied for potential use in treating cancer, neurodegenerative disorders, and other diseases.
Unlike PROTACs , which are rationally designed heterobifunctional molecules that contain two covalently linked ligands that bind respectively to a target protein and an E3 ligase , molecular glues are small, monofunctional compounds typically discovered serendipitously through screening or chance observations.
Molecular glue compounds are typically small molecules that facilitate interactions between proteins by stabilizing or inducing protein–protein interactions (PPIs). These compounds often bind to specific binding sites on a target protein and alter its surface conformation, promoting interactions with other proteins that would not normally associate. By reshaping protein surfaces, molecular glues can stabilize protein complexes , reducing their tendency to dissociate , and thus modulate essential cellular functions, many of which rely on dynamic protein assemblies. Through this mechanism, molecular glues can alter the function, localization, or stability of target proteins, offering valuable applications in both therapeutic and research contexts. [ 2 ]
Unlike PROTACs , which are bifunctional and physically tether the target to an E3 ubiquitin ligase , molecular glues induce or enhance PPIs between the ligase and the substrate by binding at existing or latent interaction surfaces. [ 3 ] This mechanism allows for selective targeting of proteins, including those previously considered "undruggable."
A notable example involves small molecules that promote the interaction between the oncogenic transcription factor β-Catenin and the E3 ligase SCF β-TrCP . These molecules function as molecular glues by enhancing the native PPI interface, resulting in increased ubiquitylation and subsequent degradation of mutant β-Catenin both in vitro and in cellular models. [ 3 ] Unlike PROTACs, which require two separate binding moieties, these monovalent molecules insert directly into the PPI interface, simultaneously optimizing contacts with both substrate and ligase within a single chemical entity. [ 3 ]
Molecular glues are especially advantageous for degrading non-ligandable targets, as they exploit naturally complementary protein surfaces to induce degradation without requiring high-affinity ligands for the target protein. [ 3 ] Although many molecular glues have historically been discovered serendipitously and characterized retrospectively, newer approaches now aim to identify them prospectively through systematic chemical profiling. [ 4 ]
For example, the compound CR8 was identified through correlation analysis as a molecular glue that promotes ubiquitination and degradation of specific targets via a top-down screening approach. [ 5 ] This highlights the broader potential of small molecules, beyond PROTACs, in targeted protein degradation strategies. [ 5 ]
There is also growing evidence that molecular glues can stabilize interactions beyond protein–protein pairs, including protein–RNA [ 6 ] and protein–lipid complexes. [ 7 ]
Molecular glues are categorized into functional types based on their mechanisms of modulating protein-protein interactions (PPIs): stabilization of non-native (type I) or native (type II) protein-protein interactions.
Type I molecular glues induce non-native protein-protein interactions that physically block, or "shield," a protein’s normal endogenous activity. Rather than promoting protein degradation, these compounds typically stabilize inactive conformations [ 2 ] or mask functional regions of the target protein, thereby preventing it from participating in its usual biological processes. This can include blocking active sites, disrupting ligand binding, or interfering with native protein–protein interactions. [ 8 ] [ 9 ]
One example is the immunosuppressant rapamycin , which forms a ternary complex with FKBP12 and the kinase mTOR , resulting in inhibition of mTOR activity. Another is cyclosporin A , which bridges cyclophilin A and calcineurin , leading to inhibition of calcineurin’s phosphatase function. These cases illustrate how Type I molecular glues can modulate protein function by enforcing artificial protein interactions that hinder normal activity.
Type II molecular glues stabilize endogenous protein-protein interactions by altering protein conformation or dynamics. They can either inhibit or enhance activity by locking proteins into specific states. One example is lenalidomide (an immunomodulatory drug), which binds cereblon (CRBN) and reprograms it to degrade transcription factors such as IKZF1 / IKZF3 in multiple myeloma . [ 9 ] Other examples include tafamidis , which stabilizes transthyretin (TTR) tetramers to prevent amyloid fibril formation in neurodegenerative diseases, and paclitaxel , which stabilizes microtubule polymers, blocking disassembly and inhibiting cancer cell division. [ 8 ]
Molecular glues employ two primary mechanisms to modulate protein-protein interactions (PPIs): allosteric regulation and direct bridging. [ 9 ] Allosteric mechanisms dominate therapeutic applications of molecular glues because of their versatility in targeting diverse proteins and pathways. [ 10 ]
In allosteric regulation, molecular glues bind to one protein, inducing conformational changes that create or stabilize novel interaction surfaces, enabling the recruitment of a second protein. [ 11 ] For example, lenalidomide binds to the E3 ligase cereblon (CRBN), remodeling its surface to recruit neo-substrates such as IKZF1 / IKZF3 for ubiquitination and subsequent degradation. [ 12 ] Similarly, CC-885 binds CRBN and induces the degradation of GSPT1 by stabilizing a ternary complex between CRBN, GSPT1, and the molecular glue. [ 13 ]
In contrast, direct bridging involves the glue physically linking two proteins at their interface. For instance, rapamycin bridges FKBP12 and mTOR by binding to both proteins simultaneously, forming a ternary complex that inhibits mTOR’s kinase activity. [ 14 ] While direct bridging is observed in some cases, allosteric modulation is far more common in molecular glues due to its ability to exploit dynamic protein surfaces and induce novel interactions without requiring pre-existing binding pockets. [ 10 ]
The ability of molecular glues to selectively degrade disease-relevant proteins has significant implications for drug discovery, particularly in the context of "undruggable" targets. Their monovalent nature and reliance on endogenous PPIs make them especially appealing for therapeutic development.
Compared to traditional small molecule drugs , molecular glues offer several advantages, including lower molecular weight , improved cell permeability , and favorable oral bioavailability . These properties align with Lipinski's rule of five and may enable more efficient delivery and distribution in vivo. [ 3 ]
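As an illustrative aside, the rule-of-five check is simple enough to express directly. The following is a minimal Python sketch, assuming approximate literature property values for lenalidomide as input; it is not drawn from the cited sources.

```python
def passes_rule_of_five(mw, logp, h_donors, h_acceptors):
    # Lipinski's rule of five: at most one violation of the four criteria.
    violations = sum([
        mw > 500,          # molecular weight, daltons
        logp > 5,          # octanol-water partition coefficient
        h_donors > 5,      # hydrogen-bond donors
        h_acceptors > 10,  # hydrogen-bond acceptors
    ])
    return violations <= 1

# Approximate literature values for lenalidomide (assumed for illustration):
print(passes_rule_of_five(mw=259.3, logp=-0.4, h_donors=2, h_acceptors=4))  # True
```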
In contrast, PROTACs —though similarly used for targeted protein degradation—often face challenges such as high molecular weight, reduced cell permeability, and poor pharmacokinetic profiles, which can hinder their clinical development . [ 3 ]
Several therapeutic molecular glues have been developed to target proteins involved in cancer and other diseases. For instance, small molecule degraders of BCL6 and Cyclin K exploit both ligand-binding and PPI surfaces to drive the formation of ternary complexes with E3 ligases. [ 15 ] These compounds, typically under 500 Da , promote tight binding between ligase and neosubstrate in the presence of the glue and demonstrate high potency in cellular models. [ 15 ]
As research continues to uncover new targets and refine discovery approaches, molecular glues are expected to play an increasingly important role in precision medicine and targeted degradation therapies.
Molecular glue compounds have demonstrated significant potential in cancer treatment by influencing protein-protein interactions (PPIs) and subsequently modulating pathways promoting cancer growth. These compounds act as targeted protein degraders, contributing to the development of innovative cancer therapies . [ 16 ] The high efficacy of small-molecule molecular glue compounds in cancer treatment is notable, as they can interact with and control multiple key protein targets involved in cancer etiology. [ 16 ] This approach, with its wider range of action and ability to target "undruggable" proteins, holds promise for overcoming drug resistance and changing the landscape of drug development in cancer therapy . [ 16 ]
Molecular glue compounds are being explored for their potential in influencing protein interactions associated with neurodegenerative diseases such as Alzheimer's and Parkinson's . By modulating these interactions, researchers aim to develop treatments that could slow or prevent the progression of these diseases. [ 16 ] Additionally, the versatility of small-molecule molecular glue compounds in targeting various proteins implicated in disease mechanisms provides a valuable avenue for unraveling the complexities of neurodegenerative disorders . [ 16 ]
Molecular glue compounds, particularly those involved in targeted protein degradation (TPD), offer a novel strategy for inhibiting viral protein interactions and combating viral infections . [ 17 ] Unlike traditional direct-acting antivirals (DAAs), TPD-based molecules exert their pharmacological activity through event-driven mechanisms, inducing target degradation. This unique approach can lead to prolonged pharmacodynamic efficacy with lower pharmacokinetic exposure, potentially reducing toxicity and the risk of antiviral resistance. [ 17 ] The protein-protein interactions induced by TPD molecules may also enhance selectivity, making them a promising avenue for antiviral research. [ 17 ]
Molecular glue serves as a valuable tool in chemical biology , enabling scientists to manipulate and understand protein functions and interactions in a controlled manner. [ 16 ] The emergence of targeted protein degradation as a modality in drug discovery has further expanded the applications of molecular glue in chemical biology . [ 17 ] The ability of small-molecule molecular glue compounds to induce iterative cycles of target degradation provides researchers with a powerful method for studying protein-protein interactions and opens avenues for drug development in various human diseases. [ 17 ]
In summary, Type I molecular glues induce non-native PPIs to block or inhibit target activity without degradation, whereas Type II molecular glues redirect or stabilize PPIs to induce target degradation; CRBN-based degraders are the best-known examples of the latter.
The concept of "molecular glue" originated in the late 20th century, with immunosuppressants like cyclosporine A (CsA) and FK506 identified as pioneering examples. [ 46 ] CsA, discovered in 1971 during routine screening for antifungal antibiotics , exhibited immunosuppressive properties by inhibiting the peptidyl–prolyl isomerase activity of cyclophilin , ultimately preventing organ transplant rejections . [ 47 ] By 1979, CsA was used clinically, and FK506 (tacrolimus), discovered in 1987 by Fujisawa, emerged as a more potent immunosuppressant . [ 47 ] The ensuing 4-year race to understand CsA and FK506 's mechanisms led to the identification of FKBP12 as a common binding partner, marking the birth of the "molecular glue" concept. [ 47 ] The term molecular glue found its way into publications in 1992, highlighting the selective gluing of specific proteins by antigenic peptides, akin to immunosuppressants acting as docking assemblies. [ 47 ] The term, however, remained esoteric and hidden from keyword searches.
In the early 1990s, researchers delved into understanding the role of proximity in biological processes. [ 47 ] The creation of synthetic "chemical inducers of proximity" (CIPs), such as FK1012 , opened the door to more complex molecular glues. [ 47 ] Rimiducid, a purposefully synthesized molecular glue, demonstrated its effectiveness in eliminating graft-versus-host disease by inducing dimerization of death-receptor fusion targets. [ 47 ]
The exploration of molecular glues took a significant turn in 1996 with the discovery that discodermolide stabilized the association of alpha and beta tubulin monomers , functioning as a "molecular clamp" rather than inducing neo-associations. [ 47 ] In 2000, the revelation that a synthetic compound, synstab-A , could induce associations of native proteins marked a shift towards the discovery of non-natural molecular glues. [ 47 ]
In 2001, Kathleen Sakamoto, Craig M. Crews and Raymond J. Deshaies raised the concept of PROTACs , which consist of a heterobifunctional molecule with a ligand of an E3 ubiquitin ligase linked to a ligand of a target protein. [ 48 ] PROTACs are synthetic CIPs acting as protein degraders.
In 2007, the term “molecular glue” became popularized after it was independently coined by Ning Zheng to describe the mechanism of action of auxin , a class of plant hormones regulating many aspects of plant growth and development. [ 1 ] By promoting the interaction between a plant E3 ubiquitin ligase, TIR1, and its substrate proteins, auxin induces the degradation of a family of transcriptional repressors. [ 49 ] Auxin is chemically known as indole-3-acetic acid and has a molecular weight of 175 daltons. Unlike PROTACs and immunosuppressants such as CsA and FK506, auxin is a chemically simple and monovalent compound with drug-like properties obeying Lipinski’s rule of five . With no detectable affinity to the polyubiquitination substrate proteins of TIR1, auxin leverages the intrinsic weak affinity between the E3 ligase and its substrate proteins to enable stable protein complex formation. The same mechanism of action is shared by jasmonate , another plant hormone involved in wound and stress responses. [ 50 ] The term “molecular glue” has since been used, particularly in the context of targeted protein degradation , to specifically describe monovalent compounds with drug-like properties capable of promoting productive protein-protein interactions, instead of CIPs in general.
In 2013, the mechanism of thalidomide analogs as molecular glue degraders was revealed. [ 46 ] Notably, thalidomide , discovered as a CRBN ligand in 2010, and lenalidomide enhance the binding of CK1α to the E3 ubiquitin ligase, solidifying their role as molecular glues. [ 46 ] [ 47 ] Subsequently, indisulam was identified in 2017 as a molecular glue capable of degrading RBM39 by targeting DCAF15. [ 46 ] These compounds are considered molecular glues because of their monovalency and chemical simplicity, which are consistent with the definition proposed by Shiyun Cao and Ning Zheng . [ 51 ] Analogous to auxin, these compounds are distinct from PROTACs, displaying no detectable affinity to the substrate proteins of the E3 ubiquitin ligases.
The year 2020 saw the discovery of autophagic molecular degraders and the identification of BI-3802 as a molecular glue inducing the polymerization and degradation of BCL6 . [ 46 ] Additionally, chemogenomic screening revealed structurally diverse molecular glue degraders targeting cyclin K . [ 46 ] The discovery that manumycin polyketides acted as molecular glues, fostering interactions between UBR7 and P53 , further expanded the understanding of molecular glue functions. [ 46 ]
In recent years, the field of molecular glues has witnessed an explosion of discoveries targeting native proteins. [ 47 ] Examples include synthetic FKBP12 -binding glues like rapadocin, which targets the adenosine transporter SLC29A1 . [ 47 ] Thalidomide and lenalidomide , classified as immunomodulatory drugs (IMiDs), were identified as small-molecule glues inducing ubiquitination of transcription factors via E3 ligase complexes. [ 47 ] Computational searches for molecular-glue degraders since 2020 have added novel probes to the ever-expanding landscape of molecular glues. [ 47 ] [ 52 ] Furthermore, computational methods are starting to shed light on the mechanisms of action of molecular glues. [ 52 ]
The transformative power of molecular glues in medicine became evident as drugs like Sandimmune , tacrolimus , sirolimus , thalidomide , lenalidomide , and Taxotere proved effective. [ 47 ] The concept of inducing protein associations has shown promise in gene therapy and has become a potent tool in understanding cell circuitry. [ 47 ] As the field continues to advance, the discovery of new molecular glues offers the potential to reshape drug discovery and overcome previously labeled "undruggable" targets. [ 47 ] The future of molecular glues holds promise for rewiring cellular circuitry and providing innovative solutions in precision medicine . [ 47 ]
While molecular glue compounds hold great potential in various fields, there are challenges to overcome. Ensuring the specificity of these compounds and minimizing off-target effects is essential. Additionally, understanding the long-term consequences of manipulating protein interactions is crucial for their safe and effective application in medicine.
Ongoing research in molecular glue is unlocking new compounds and insights into their mechanisms. With an expanding understanding of protein-protein interactions , molecular glue holds significant promise across biology, medicine, and chemistry, potentially revolutionizing cellular processes and advancing innovative disease treatments. As this field progresses, it may open new therapeutic avenues and deepen our understanding of life's molecular intricacies. | https://en.wikipedia.org/wiki/Molecular_glue |
In chemical graph theory and in mathematical chemistry , a molecular graph or chemical graph is a representation of the structural formula of a chemical compound in terms of graph theory . A chemical graph is a labeled graph whose vertices correspond to the atoms of the compound and edges correspond to chemical bonds . Its vertices are labeled with the kinds of the corresponding atoms and edges are labeled with the types of bonds. [ 1 ] For particular purposes any of the labelings may be ignored.
A hydrogen-depleted molecular graph or hydrogen-suppressed molecular graph is the molecular graph with hydrogen vertices deleted.
In some important cases ( topological index calculation etc.) the following classical definition is sufficient: a molecular graph is a connected, undirected graph which admits a one-to-one correspondence with the structural formula of a chemical compound in which the vertices of the graph correspond to atoms of the molecule and edges of the graph correspond to chemical bonds between these atoms. [ 2 ] One variant is to represent materials as infinite Euclidean graphs , in particular, crystals as periodic graphs . [ 3 ]
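The classical definition above translates directly into standard graph data structures. The following is a minimal sketch, assuming a plain adjacency-list representation in Python: it encodes the hydrogen-depleted carbon skeleton of 2-methylpropane (isobutane) as a labeled graph and computes the Wiener index, a classic topological index defined as the sum of shortest-path (bond-count) distances over all vertex pairs.

```python
from collections import deque

# Hydrogen-depleted carbon skeleton of 2-methylpropane (isobutane):
# central carbon 0 bonded to carbons 1, 2 and 3.
atoms = {0: "C", 1: "C", 2: "C", 3: "C"}                        # vertex labels
bonds = {(0, 1): "single", (0, 2): "single", (0, 3): "single"}  # edge labels

adjacency = {v: [] for v in atoms}
for a, b in bonds:
    adjacency[a].append(b)
    adjacency[b].append(a)

def bfs_distances(start):
    # Shortest-path (bond-count) distances from one atom to all others.
    dist = {start: 0}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for w in adjacency[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

# Wiener index: each unordered pair is counted twice in the double sum.
wiener = sum(d for v in atoms for d in bfs_distances(v).values()) // 2
print(wiener)  # 9 for the isobutane skeleton
```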
Arthur Cayley was probably the first to publish results that consider molecular graphs, as early as 1874, even before the introduction of the term " graph ". [ 4 ] For the purposes of enumeration of isomers , Cayley considered "diagrams" made of points labelled by atoms and connected by links into an assemblage. He further introduced the terms plerogram and kenogram , [ 5 ] which are the molecular graph and the hydrogen-suppressed molecular graph respectively. If one continues to delete atoms connected by a single link, one eventually arrives at a bare kenogram, possibly empty. [ 6 ]
Danail Bonchev in his Chemical Graph Theory traces the origins of representation of chemical forces by diagrams which may be called "chemical graphs" to as early as the mid-18th century. In the early 18th century, Isaac Newton's notion of gravity had led to speculative ideas that atoms are held together by some kind of "gravitational force". In particular, since 1758 Scottish chemist William Cullen in his lectures used what he called "affinity diagrams" to represent forces supposedly existing between pairs of molecules in a chemical reaction. In a 1789 book by William Higgins similar diagrams were used to represent forces within molecules. These and some other contemporary diagrams had no relation to chemical bonds: the latter notion was introduced only in the following century. [ 7 ] | https://en.wikipedia.org/wiki/Molecular_graph |
Molecular graphics is the discipline and philosophy of studying molecules and their properties through graphical representation. [ 1 ] IUPAC limits the definition to representations on a "graphical display device". [ 2 ] Ever since Dalton's atoms and Kekulé's benzene , there has been a rich history of hand-drawn atoms and molecules, and these representations have had an important influence on modern molecular graphics.
Colour molecular graphics are often used artistically on chemistry journal covers. [ 3 ]
Prior to the use of computer graphics in representing molecular structure, Robert Corey and Linus Pauling developed a system of physical models in which atoms or groups of atoms were made from hardwood on a scale of 1 inch = 1 angstrom and connected by a clamping device to maintain the molecular configuration. [ 4 ] These early models also established the CPK coloring scheme that is still used today to differentiate the different types of atoms in molecular models (e.g. carbon = black, oxygen = red, nitrogen = blue, etc.). These models were improved upon in 1966 by W.L. Koltun and are now known as Corey-Pauling-Koltun (CPK) models. [ 5 ]
The earliest efforts to produce computer models of molecular structure were made by Project MAC , using wire-frame models displayed on a cathode ray tube in the mid-1960s. In 1965, Carroll Johnson distributed the Oak Ridge thermal ellipsoid plot (ORTEP), which visualized molecules as a ball-and-stick model with lines representing the bonds between atoms and ellipsoids representing the probability of thermal motion. [ 6 ] Thermal ellipsoid plots quickly became the de facto standard for displaying X-ray crystallography data, and are still in wide use today. [ 6 ] The first practical use of molecular graphics was a simple display of the protein myoglobin using a wireframe representation in 1966 by Cyrus Levinthal and Robert Langridge working at Project MAC. [ 7 ]
Among the milestones in high-performance molecular graphics was the work of Nelson Max in "realistic" rendering of macromolecules using reflecting spheres .
Initially much of the technology concentrated on high-performance 3D graphics . [ 8 ] During the 1970s, methods for displaying 3D graphics using cathode ray tubes were developed using continuous tone computer graphics in combination with electro-optic shutter viewing devices. [ 9 ] The first devices used an active shutter 3D system , generating different perspective views for the left and right channel to provide the illusion of three-dimensional viewing. Stereoscopic viewing glasses were designed using lead lanthanum zirconate titanate (PLZT) ceramics as electronically-controlled shutter elements. [ 10 ] Active 3D glasses require batteries and work in concert with the display to actively change the presentation by the lenses to the wearer's eyes. Many modern 3D glasses use a passive, polarized 3D system that enables the wearer to visualize 3D effects based on their own perception. Passive 3D glasses are more common today since they are less expensive. [ 11 ]
The requirements of macromolecular crystallography also drove molecular graphics because the traditional techniques of physical model-building could not scale. The first two protein structures solved by molecular graphics without the aid of the Richards' Box were built with Stan Swanson's program FIT on the Vector General graphics display in the laboratory of Edgar Meyer at Texas A&M University: First Marge Legg in Al Cotton's lab at A&M solved a second, higher-resolution structure of staph. nuclease (1975) and then Jim Hogle solved the structure of monoclinic lysozyme in 1976. A full year passed before other graphics systems were used to replace the Richards' Box for modelling into density in 3-D. Alwyn Jones' FRODO program (and later "O") were developed to overlay the molecular electron density determined from X-ray crystallography and the hypothetical molecular structure.
In the ball-and-stick model, atoms are drawn as small spheres connected by rods representing the chemical bonds between them.
In the space-filling model, atoms are drawn as solid spheres to suggest the space they occupy, in proportion to their van der Waals radii . Atoms that share a bond overlap with each other.
In some models, the surface of the molecule is approximated and shaded to represent a physical property of the molecule, such as electronic charge density. [ 39 ] [ 40 ]
Ribbon diagrams are schematic representations of protein structure and are one of the most common methods of protein depiction used today. The ribbon shows the overall path and organization of the protein backbone in 3D, and serves as a visual framework on which to hang details of the full atomic structure, such as the balls for the oxygen atoms bound to the active site of myoglobin in the adjacent image. Ribbon diagrams are generated by interpolating a smooth curve through the polypeptide backbone. α-helices are shown as coiled ribbons or thick tubes, β-strands as arrows, and non-repetitive coils or loops as lines or thin tubes. The direction of the polypeptide chain is shown locally by the arrows, and may be indicated overall by a colour ramp along the length of the ribbon. [ 41 ] | https://en.wikipedia.org/wiki/Molecular_graphics |
Molecular gyroscopes are chemical compounds or supramolecular complexes containing a rotor that moves freely relative to a stator , and therefore act as gyroscopes . Though any single bond or triple bond permits a chemical group to freely rotate, the compounds described as gyroscopes may protect the rotor from interactions, such as in a crystal structure with low packing density [ 2 ] or by physically surrounding the rotor avoiding steric contact. [ 3 ] A qualitative distinction can be made based on whether the activation energy needed to overcome rotational barriers is higher than the available thermal energy . If the activation energy required is higher than the available thermal energy, the rotor undergoes "site exchange", jumping in discrete steps between local energy minima on the potential energy surface . If there is thermal energy sufficiently higher than that needed to overcome the barrier to rotation, the molecular rotor can behave more like a macroscopic freely rotating inertial mass. [ 2 ]
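In the high-barrier ("site exchange") regime, the jump rate between energy minima can be roughly estimated with the Eyring equation, k = (k_B·T/h)·exp(−ΔG‡/RT). The sketch below is an order-of-magnitude illustration only, assuming barrier heights of the order quoted in the next paragraph and neglecting entropic and tunnelling contributions; it is not a calculation from the cited studies.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K
H = 6.62607015e-34  # Planck constant, J*s
R = 8.314462618     # gas constant, J/(mol*K)

def eyring_rate(barrier_kcal_per_mol, temperature_k):
    # k = (k_B*T/h) * exp(-dG/(R*T)); transmission coefficient assumed to be 1.
    barrier_j_per_mol = barrier_kcal_per_mol * 4184.0
    return (K_B * temperature_k / H) * math.exp(-barrier_j_per_mol / (R * temperature_k))

for barrier in (12.0, 14.0):             # kcal/mol, illustrative barrier heights
    rate = eyring_rate(barrier, 338.15)  # 65 degrees C
    print(f"{barrier} kcal/mol -> ~{rate:.1e} site exchanges per second")
```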
For example, several studies in 2002 with a p -phenylene rotor found that some structures using variable-temperature (VT) solid-state ¹³C CPMAS and quadrupolar echo ²H NMR were able to detect a two-site exchange rate of 1.6 MHz (over 10⁶ per second at 65 °C), described as "remarkably fast for a phenylene group in a crystalline solid", with steric barriers of 12–14 kcal/mol . However, tert -butyl modification of the rotor increased the exchange rate to over 10⁸ per second at room temperature, and the rate for inertially rotating p -phenylene without barriers is estimated to be approximately 2.4 × 10¹² revolutions per second . [ 2 ] | https://en.wikipedia.org/wiki/Molecular_gyroscope |
Molecular imprinting is a technique to create template-shaped cavities in polymer matrices with predetermined selectivity and high affinity. [ 1 ] This technique is based on the system used by enzymes for substrate recognition, which is called the "lock and key" model. The active binding site of an enzyme has a shape specific to a substrate. Substrates with a complementary shape to the binding site selectively bind to the enzyme; alternative shapes that do not fit the binding site are not recognized.
Molecularly imprinted materials are prepared using a template molecule and functional monomers that assemble around the template and subsequently get cross-linked to each other. The monomers, which are self-assembled around the template molecule by interaction between functional groups on both the template and monomers, are polymerized to form an imprinted matrix (commonly known in the scientific community as a molecular imprinted polymer (MIP)). The template is subsequently removed in part or entirely, [ 1 ] leaving behind a cavity complementary in size and shape to the template. The obtained cavity can work as a selective binding site for the templated molecule.
In recent decades, the molecular imprinting technique has been developed for use in drug delivery , separations, biological and chemical sensing, and more. Taking advantage of the shape selectivity of the cavity, use in catalysis for certain reactions has also been facilitated.
The first example of molecular imprinting is attributed to M. V. Polyakov in 1931 with his studies in the polymerization of sodium silicate with ammonium carbonate . When the polymerization process was accompanied by an additive such as benzene , the resulting silica showed a higher uptake of this additive. [ 1 ] By 1949, the concept of molecular imprinting, inspired by the instructional theory of antibody formation, was used by Dickey, who precipitated silica gels in the presence of organic dyes and showed that the imprinted silica had high selectivity towards the template dye. [ 2 ]
Following Dickey’s observations, Patrikeev published a paper on silica 'imprinted' by incubating bacteria with silica gel. After drying and heating, this silica promoted the growth of bacteria better than reference silicas and exhibited enantioselectivity . [ 3 ] He later used this imprinted silica method in further applications such as thin layer chromatography (TLC) and high performance liquid chromatography (HPLC). In 1972, Wulff and Klotz introduced molecular imprinting to organic polymers. They found that molecular recognition was possible by covalently introducing functional groups within the imprinted cavity of polymers. [ 4 ] [ 5 ] The Mosbach group then proved it was possible to introduce functional groups into imprinted cavities through non-covalent interactions, thus leading to non-covalent imprinting. [ 6 ] [ 7 ] Many approaches regarding molecular imprinting have since been extended to different purposes. [ 1 ]
In covalent imprinting, the template molecule is covalently bonded to the functional monomers that are then polymerized together. After polymerization, the polymer matrix is cleaved from the template molecule, leaving a cavity shaped as the template. Upon rebinding with the original molecule, the binding sites will interact with the target molecule, reestablishing the covalent bonds . [ 8 ] [ 9 ] During this reestablishment, the kinetics of bond formation and cleavage are recovered. The template is then released from the polymer, which can subsequently rebind the target molecule, forming the same covalent bonds that were formed before polymerization. [ 7 ] Advantages of this approach include the functional groups being solely associated with the binding sites, [ 1 ] avoiding any non-specific binding. The imprinted polymer also displays a homogeneous distribution of binding sites, increasing the stability of the template-polymer complex. [ 7 ] However, only a limited number of compounds can be imprinted with template molecules via covalent bonding, such as alcohols , aldehydes and ketones , all of which have high formation kinetics. [ 10 ] [ 11 ] In some cases, the rebinding of the polymer matrix with the template can be very slow, making this approach time-inefficient for applications that require fast kinetics, such as chromatography .
With non-covalent imprinting, interaction forces between template molecule and functional monomer are the same as the interaction forces between the polymer matrix and analyte . The forces involved in this procedure can include hydrogen bonds , dipole–dipole interactions , and induced dipole forces . [ 1 ] This method is the most widely used approach to create MIPs due to easy preparation and the wide variety of functional monomers that can be bound to the template molecule. Among the functional groups, methacrylic acid is the most commonly used compound due to its ability to interact with other functional groups. [ 12 ] [ 13 ] Another way to alter the non-covalent interaction between the template molecule and polymer is through the 'bite and switch' technique developed by Professor Sergey A. Piletsky and Sreenath Subrahmanyam. [ 14 ] In this process, functional groups first non-covalently bond with the binding site, but during the rebinding step, the polymer matrix forms irreversible covalent bonds with the target molecule. [ 14 ] [ 15 ]
Ionic imprinting, which involves metal ions , serves as an approach to enhance the interaction between template molecule and functional monomer in water. [ 16 ] Typically, metal ions serve as a mediator during the imprinting process. Cross-linking polymers that are in the presence of a metal ion will form a matrix that is capable of metal binding. [ 17 ] Metal ions can also mediate molecular imprinting by binding to a range of functional monomers, where ligands donate electrons to the outermost orbital of the metal ion. [ 1 ] In addition to mediating imprinting, metal ions can be utilized in direct imprinting. For example, a metal ion can serve as the template for the imprinting process. [ 18 ]
One application of molecular imprinting technology is in affinity-based separations for biomedical, environmental, and food analysis. Sample preconcentration and treatment can be carried out by removing targeted trace amounts of analytes in samples using MIPs. The feasibility of MIPs in solid-phase extraction , solid-phase microextraction , and stir bar sorption extraction has been studied in several publications. [ 19 ] Moreover, chromatography techniques such as HPLC and TLC can make use of MIPs as packing materials and stationary phases for the separation of template analytes. The kinetics of noncovalently imprinted materials were observed to be faster than materials prepared by the covalent approach, so noncovalent MIPs are more commonly used in chromatography. [ 20 ]
Another application is the use of molecularly imprinted materials as chemical and biological sensors . They have been developed to target herbicides, sugars, drugs, toxins, and vapors. MIP-based sensors not only have high selectivity and high sensitivity, but they can also generate output signals (electrochemical, optical, or piezoelectric) for detection. This allows them to be utilized in fluorescence sensing, electrochemical sensing, chemiluminescence sensing, and UV-Vis sensing. [ 7 ] [ 20 ] Forensic applications that delve into detections of illicit drugs, banned sport drugs, toxins, and chemical warfare agents are also an area of growing interest. [ 21 ]
Molecular imprinting has steadily been emerging in fields like drug delivery and biotechnology . The selective interaction between template and polymer matrix can be utilized in the preparation of artificial antibodies . In the biopharmaceutical market, separation of amino acids, chiral compounds, hemoglobin, and hormones can be achieved with MIP adsorbents . Methods to utilize molecular imprinting techniques for mimicking linear and polyanionic molecules, such as DNA, proteins, and carbohydrates, have been researched. [ 22 ] One challenging area is protein imprinting. Large, water-soluble biological macromolecules have posed a difficulty for molecular imprinting because their conformational integrity cannot be ensured in synthetic environments. Current methods to navigate this include immobilizing template molecules at the surface of solid substrates, thereby minimizing aggregation and controlling the template molecules to locate at the surface of imprinted materials. [ 21 ] However, a critical review of molecular imprinting of proteins by scientists from Utrecht University found that further testing is required. [ 23 ]
Pharmaceutical applications include selective drug delivery and controlled drug-release systems, which make use of MIPs’ stable conformations, fast equilibrium release, and resistance to enzymatic and chemical stress. [ 7 ] Intelligent drug release, the release of a therapeutic agent in response to a specific stimulus, has also been explored. Molecularly imprinted materials of insulin and other drugs at the nanoscale were shown to exhibit high adsorption capacity for their respective targets, showing great potential for new drug delivery systems. [ 24 ] In comparison with natural receptors , MIPs also have higher chemical and physical stability, easier availability, and lower cost. MIPs could especially be used for the stabilization of proteins, particularly the selective protection of proteins against denaturation from heat. [ 25 ] | https://en.wikipedia.org/wiki/Molecular_imprinting |
In chemistry , a molecular knot is a mechanically interlocked molecular architecture that is analogous to a macroscopic knot . [ 1 ] Naturally-forming molecular knots are found in organic molecules like DNA , RNA , and proteins . It is not certain that naturally occurring knots are evolutionarily advantageous to nucleic acids or proteins, though knotting is thought to play a role in the structure, stability, and function of knotted biological molecules. [ 2 ] The mechanism by which knots naturally form in molecules, and the mechanism by which a molecule is stabilized or improved by knotting, is ambiguous. [ 3 ] The study of molecular knots involves the formation and applications of both naturally occurring and chemically synthesized molecular knots. Applying chemical topology and knot theory to molecular knots allows biologists to better understand the structures and synthesis of knotted organic molecules. [ 1 ]
The term knotane was coined by Vögtle et al. in 2000 to describe molecular knots by analogy with rotaxanes and catenanes , which are other mechanically interlocked molecular architectures. [ 1 ] [ 4 ] The term has not been broadly adopted by chemists and has not been adopted by IUPAC .
Organic molecules containing knots may fall into the categories of slipknots or pseudo-knots. [ 2 ] They are not considered mathematical knots because they are not closed curves, but rather knots that exist within an otherwise linear chain, with termini at each end. Knotted proteins are thought to form molecular knots during their tertiary structure folding process, and knotted nucleic acids generally form molecular knots during genomic replication and transcription, [ 6 ] though the details of the knotting mechanism remain disputed and ambiguous. Molecular simulations are fundamental to the research on molecular knotting mechanisms.
Knotted DNA was found first in single-stranded, circular, bacterial DNA, though double-stranded circular DNA has been found to also form knots. Naturally knotted RNA has not yet been reported. [ 7 ]
A number of proteins containing naturally occurring molecular knots have been identified. The knot types found to be naturally occurring in proteins are the +3₁, −3₁, 4₁, −5₂, and +6₁ knots, as identified in the KnotProt database of known knotted proteins. [ 8 ]
Several synthetic molecular knots have been reported. [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] Knot types that have been successfully synthesized in molecules are the 3₁, 4₁, 5₁ and 8₁₉ knots. Though the −5₂ and +6₁ knots have been found to occur naturally in knotted molecules, they have not been successfully synthesized. Small-molecule composite knots have also not yet been synthesized. [ 7 ]
Artificial DNA, RNA, and protein knots have been successfully synthesized. DNA is a particularly useful model for synthetic knot synthesis, as it naturally forms interlocked structures and can be easily manipulated to control precisely the raveling necessary to form knots. [ 15 ] Molecular knots are often synthesized with the help of crucial metal ion ligands. [ 7 ]
The first researcher to suggest the existence of a molecular knot in a protein was Jane Richardson in 1977, who reported that carbonic anhydrase B (CAB) exhibited apparent knotting during her survey of various proteins' topological behavior. [ 29 ] However, the researcher generally credited with the discovery of the first knotted protein is Marc L. Mansfield in 1994, as he was the first to specifically investigate the occurrence of knots in proteins and confirm the existence of the trefoil knot in CAB. Knotted DNA was first found by Liu et al. in 1981, in single-stranded, circular, bacterial DNA, though double-stranded circular DNA has since been found to also form knots. [ 30 ]
In 1989, Sauvage and coworkers reported the first synthetic knotted molecule: a trefoil synthesized via a double-helix complex with the aid of Cu+ ions. [ 16 ]
Vögtle et al. were the first to describe molecular knots as knotanes in 2000. [ 1 ] Also in 2000, William Taylor created an alternative computational method to analyze protein knotting, which fixes the termini at points far enough away from the knotted component of the molecule that the knot type can be well-defined. In this study, Taylor discovered a deep 4₁ knot in a protein, [ 31 ] confirming the existence of deeply knotted proteins.
In 2007, Eric Yeates reported the identification of a molecular slipknot, in which the molecule contains knotted subchains even though its backbone chain as a whole is unknotted and does not contain completely knotted structures easily detectable by computational models. [ 32 ] Mathematically, slipknots are difficult to analyze because they are not recognized in an examination of the complete structure.
A pentafoil knot prepared using dynamic covalent chemistry was synthesized by Ayme et al. in 2012, which at the time was the most complex non-DNA molecular knot prepared to date. [ 19 ] Later in 2016, a fully organic pentafoil knot was also reported, including the very first use of a molecular knot to allosterically regulate catalysis. [ 33 ] In January 2017, an 8₁₉ knot was synthesized by David Leigh's group, making the 8₁₉ knot the most complex molecular knot synthesized. [ 27 ]
An important development in knot theory is allowing for intra-chain contacts within an entangled molecular chain. Circuit topology has emerged as a topology framework that formalises the arrangement of contacts as well as chain crossings in a folded linear chain. As a complementary approach, Colin Adams et al. developed a singular knot theory that is applicable to folded linear chains with intramolecular interactions. [ 34 ]
Many synthetic molecular knots have a distinct globular shape and dimensions that make them potential building blocks in nanotechnology . | https://en.wikipedia.org/wiki/Molecular_knot |
Molecular laser isotope separation ( MLIS ) is a method of isotope separation , where specially tuned lasers are used to separate isotopes of uranium through selective excitation and dissociation of uranium hexafluoride molecules. It is similar to AVLIS . Its main advantages over AVLIS are low energy consumption and the use of uranium hexafluoride instead of vaporized uranium. MLIS was conceived in 1971 at the Los Alamos National Laboratory .
MLIS operates in cascade setup, like the gaseous diffusion process. Instead of vaporized uranium as in AVLIS the working medium of the MLIS is uranium hexafluoride which requires a much lower temperature to vaporize. The UF 6 gas is mixed with a suitable carrier gas (a noble gas including some hydrogen ) which allows the molecules to remain in the gaseous phase after being cooled by expansion through a supersonic de Laval nozzle . A scavenger gas (e.g. methane ) is also included in the mixture to bind with the fluorine atoms after they are dissociated from the UF 6 and inhibit their recombination with the enriched UF 5 product.
In the first stage, the expanded and cooled stream of UF 6 is irradiated with an infrared laser operating at the wavelength of 16 μm. The mix is then irradiated with another laser, either infrared or ultraviolet, whose photons are selectively absorbed by the excited 235 UF 6 , causing its photolysis to 235 UF 5 and fluorine . [ 1 ] The resultant enriched UF 5 forms a solid which is then separated from the gas by filtration or a cyclone separator. The precipitated UF 5 is relatively enriched with 235 UF 5 and after conversion back to UF 6 it is fed to the next stage of the cascade to be further enriched.
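Because the enriched product of each stage is fed to the next, the overall enrichment compounds multiplicatively through the cascade. The following is a minimal sketch, assuming a hypothetical constant per-stage separation factor; real MLIS stage factors depend on laser selectivity and process conditions and are not given in the text.

```python
def stages_needed(feed_fraction, target_fraction, stage_factor):
    # Count cascade stages needed to raise the U-235/U-238 abundance ratio
    # from the feed to the target, multiplying it by stage_factor per stage.
    ratio = feed_fraction / (1.0 - feed_fraction)
    target_ratio = target_fraction / (1.0 - target_fraction)
    stages = 0
    while ratio < target_ratio:
        ratio *= stage_factor
        stages += 1
    return stages

# Natural uranium (0.711% U-235) to ~4% reactor grade, assuming a factor of 2 per stage:
print(stages_needed(0.00711, 0.04, 2.0))  # -> 3 stages under these assumptions
```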
The laser for the excitation is usually a carbon dioxide laser with output wavelength shifted from 10.6 μm to 16 μm; the photolysis laser may be a XeCl excimer laser operating at 308 nm; however, infrared lasers are mostly used in existing implementations. [ citation needed ]
The process is complex: many mixed UFx compounds are formed which contaminate the product and are difficult to remove. The United States , France , United Kingdom , Germany and South Africa have reported the termination of their MLIS programs; however, Japan still has a small-scale program in operation. [ citation needed ]
The Commonwealth Scientific and Industrial Research Organisation in Australia has developed the SILEX pulsed laser separation process. GE, Cameco and Hitachi are currently involved in developing it for commercial use. [ citation needed ] | https://en.wikipedia.org/wiki/Molecular_laser_isotope_separation |
Molecular layer deposition ( MLD ) is a vapour phase thin film deposition technique based on self-limiting surface reactions carried out in a sequential manner. [ 1 ] Essentially, MLD resembles the well established technique of atomic layer deposition (ALD) but, whereas ALD is limited to exclusively inorganic coatings, the precursor chemistry in MLD can use small, bifunctional organic molecules as well. This enables, as well as the growth of organic layers in a process similar to polymerization, the linking of both types of building blocks together in a controlled way to build up organic-inorganic hybrid materials.
Even though MLD is a known technique in the thin film deposition sector, due to its relative youth it is not as explored as its inorganic counterpart, ALD, and a wide sector development is expected in the upcoming years.
Molecular layer deposition is a sister technique of atomic layer deposition . While the history of atomic layer deposition dates back to the 1970s, thanks to the independent work of Valentin Borisovich Aleskovskii [ 2 ] and Tuomo Suntola , [ 3 ] the first MLD experiments with organic molecules were not published until 1991, when an article by Tetsuzo Yoshimura and co-workers appeared [ 4 ] regarding the synthesis of polyimides using amines and anhydrides as reactants. [ 5 ] After some work on organic compounds along the 1990s, the first papers related to hybrid materials emerged, combining both ALD and MLD techniques. [ 6 ] [ 7 ] Since then, the number of articles submitted per year on molecular layer deposition has increased steadily, and a more diverse range of deposited layers has been observed, including polyamides, [ 8 ] [ 9 ] [ 10 ] polyimines, [ 11 ] polyurea, [ 12 ] polythiourea [ 13 ] and some copolymers, [ 14 ] with special interest in the deposition of hybrid films.
In similar fashion to an atomic layer deposition process, during an MLD process the reactants are pulsed in a sequential, cyclical manner, and all gas-solid reactions are self-limiting on the sample substrate. Each of these cycles is called an MLD cycle, and layer growth is measured as growth per cycle (GPC), usually expressed in nm/cycle or Å/cycle. [ 1 ] In a model two-precursor experiment, an MLD cycle proceeds as follows:
First, precursor 1 is pulsed into the reactor, where it reacts and chemisorbs to the surface species on the sample surface. Once all adsorption sites have been covered and saturation has been reached, no more precursor will attach, and excess precursor molecules and generated byproducts are withdrawn from the reactor, either by purging with inert gas or by pumping the reactor chamber down. Only when the chamber has been properly purged with inert gas/pumped down to base pressure (~10⁻⁶ mbar range) and all unwanted molecules from the previous step have been removed can precursor 2 be introduced. [ 15 ] Otherwise, the process runs the risk of CVD-type growth, where the two precursors react in the gaseous phase before attaching to the sample surface, which would result in a coating with different characteristics.
Next, precursor 2 is pulsed, which reacts with the previous precursor 1 molecules anchored to the surface. This surface reaction is again self-limiting and, followed again by purging/pumping the reactor to base pressure, leaves behind a layer terminated with surface groups that can again react with precursor 1 in the next cycle. In the ideal case, the repetition of the MLD cycle will build up an organic/inorganic film one molecular layer at a time, enabling highly conformal coatings with precise thickness control and film purity. [ 15 ]
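The pulse–purge sequencing described above can be summarized as a simple control loop. The sketch below is a hypothetical Python illustration: the pulse() and purge() helpers stand in for real reactor-control calls, and all timings and the assumed GPC are placeholders, not values from the text.

```python
import time

def pulse(precursor, seconds):
    # Hypothetical stand-in for opening a precursor valve.
    print(f"pulsing {precursor} for {seconds} s")
    time.sleep(seconds)

def purge(seconds):
    # Remove excess precursor and byproducts before the next pulse,
    # avoiding gas-phase (CVD-type) reactions.
    print(f"purging with inert gas for {seconds} s")
    time.sleep(seconds)

def mld_cycle():
    pulse("precursor 1", 0.5)  # self-limiting chemisorption on surface sites
    purge(2.0)
    pulse("precursor 2", 0.5)  # reacts with anchored precursor-1 molecules
    purge(2.0)                 # surface left ready for the next cycle

def grow_film(n_cycles, gpc_nm=0.3):
    # Thickness under an assumed constant growth per cycle (GPC).
    for _ in range(n_cycles):
        mld_cycle()
    return n_cycles * gpc_nm

print(f"expected thickness: {grow_film(5):.1f} nm")
```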
If ALD and MLD are combined, more precursors in a wider range can be used, both inorganic and organic. [ 5 ] [ 6 ] In addition, other reactions can be included in the ALD/MLD cycles as well, such as plasma or radical exposures. This way, an experiment can be freely customised according to the research needs by tuning the number of ALD and MLD cycles and the steps contained within the cycles. [ 15 ]
Precursor chemistry plays a key role in MLD. The chemical properties of the precursor molecules drive the composition, structure and stability of the deposited hybrid material. To reach the saturation stage in a short time and ensure a reasonable deposition rate, precursors must chemisorb on the surface, react rapidly with the surface active groups and react with each other. The desired MLD reactions should have a large negative ΔG value. [ 16 ] [ 17 ]
Organic compounds are employed as precursors for MLD. For their effective use, the precursor should have sufficient vapor pressure and thermal stability to be transported in the gas phase to the reaction zone without decomposing. Volatility is influenced by the molecular weight and intermolecular interactions. One of the challenges in MLD is to find an organic precursor that has sufficient vapor pressure, reactivity and thermal stability. Most organic precursors have low volatility, and heating is necessary to ensure a sufficient supply of vapor reaching the substrate. The backbone of the organic precursors can be flexible, i.e. aliphatic, or rigid, i.e. aromatic, employed together with the functional groups. The organic precursors are usually homo- or heterobifunctional molecules with functional groups such as -OH, -COOH, -NH 2 , -CONH 2 , -CHO, -COCl, -SH, -CNO, -CN, alkenes, etc. The bifunctional nature of the precursors is essential for continuous film growth, as one group is expected to react with the surface while the other remains accessible to react with the next pulse of the co-reactant. The attached functional groups play a vital role in the reactivity and binding modes of the precursor, and they should be able to react with the functional groups present at the surface. A flexible backbone may hinder the growth of a continuous and dense film by back coordination, blocking the reactive sites and thus lowering the film growth rate. Thus, finding an MLD precursor that fulfils all the above-mentioned requirements is not a straightforward process. [ 18 ]
Surface groups play a crucial role as reaction intermediates. The substrate is usually hydroxylated or hydrogen terminated and hydroxyls serve as reactive linkers for condensation reactions with metals. The inorganic precursor reacts with surface reactive groups via the corresponding linking chemistry that leads to the formation of new O-Metal bonds. The metal precursor step changes the surface termination, leaving the surface with new reactive sites ready to react with the organic precursor. The organic precursor reacts at the resulting surface by bonding covalently with the metal sites, releasing metal ligands and leaves another reactive molecular layer ready for the next pulse. Byproducts are released after each adsorption step and the reactions are summarised below. [ 19 ]
When performing an MLD process, as a variant of ALD, certain aspects need to be taken into account in order to obtain the desired layer with adequate purity and growth rate:
Before starting an experiment, the researcher must know whether the designed process will yield saturated or unsaturated conditions. [ 20 ] If this information is unknown, determining it should be a priority in order to obtain accurate results. If the precursor pulsing times allowed are too short, the reactive surface sites of the sample will not have sufficient time to react with the gaseous molecules and form a monolayer, which translates into a lower growth per cycle (GPC). To solve this issue, a saturation experiment can be performed, where the film growth is monitored in-situ at different precursor pulsing times, and the resulting GPCs are then plotted against pulsing time to find the saturation conditions. [ 20 ]
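A common way to analyse such a saturation experiment is to fit the measured GPC values to a first-order (Langmuir-type) uptake curve, GPC(t) = GPC_sat·(1 − e^(−t/τ)). The sketch below uses synthetic, illustrative data; the functional form is an assumption, not a universal MLD law.

```python
import numpy as np
from scipy.optimize import curve_fit

def saturation(t, gpc_sat, tau):
    # First-order Langmuir-type uptake: GPC approaches gpc_sat as pulses lengthen.
    return gpc_sat * (1.0 - np.exp(-t / tau))

pulse_times = np.array([0.5, 1.0, 2.0, 4.0, 8.0])        # pulse length, s
gpc_measured = np.array([0.12, 0.20, 0.27, 0.30, 0.31])  # nm/cycle (synthetic)

(gpc_sat, tau), _ = curve_fit(saturation, pulse_times, gpc_measured, p0=[0.3, 1.0])
print(f"saturated GPC ~ {gpc_sat:.2f} nm/cycle, time constant ~ {tau:.1f} s")
# Pulsing for ~3*tau reaches about 95% of saturation.
```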
Additionally, too short purging times will result in remaining precursor molecules in the reactor chamber, which will be reactive in the gaseous phase towards the new precursor molecules introduced during the next step, obtaining an undesired CVD-grown layer instead. [ 20 ]
Film growth usually depends on the deposition temperature, within what is called the MLD window, [ 1 ] a temperature range in which, ideally, film growth remains constant. When working outside of the MLD window, a number of problems can occur:
In addition, even when working within the MLD window, GPCs can still vary with temperature sometimes, due to the effect of other temperature-dependent factors, such as film diffusion, number of reactive sites or reaction mechanism. [ 1 ]
When carrying out an MLD process, the ideal case of one monolayer per cycle is not usually applicable. In the real world, many parameters affect the actual growth rate of the film, which in turn produce non idealities like sub-monolayer growth (deposition of less than a full layer per cycle), island growth and coalescence of islands. [ 20 ]
During an MLD process, film growth will usually achieve a constant value (GPC). However, during the first cycles, incoming precursor molecules will not interact with a surface of the grown material but rather with the bare substrate, and thus will undergo different chemical reactions with different reaction rates. As a consequence, growth rates can experience substrate enhancement (a faster substrate-film reaction than film-film reactions), and therefore higher GPCs in the first cycles, or substrate inhibition (a slower substrate-film reaction than film-film reactions), accompanied by a GPC decrease at the beginning. In some depositions, however, the resulting steady-state growth rates can be very similar in both cases. [ 21 ]
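One simple way to picture these transients is to let the per-cycle growth relax exponentially from a substrate-limited value to the steady-state film value as the substrate surface is converted to film. The model and all numbers below are assumptions for illustration, not from the cited work.

```python
import numpy as np

def gpc_per_cycle(n, gpc_film=0.30, gpc_substrate=0.05, n0=10.0):
    # GPC at cycle n, relaxing from substrate-limited to film-limited growth;
    # substrate-enhanced growth is the same model with gpc_substrate > gpc_film.
    return gpc_film + (gpc_substrate - gpc_film) * np.exp(-n / n0)

cycles = np.arange(1, 101)
thickness = np.cumsum(gpc_per_cycle(cycles))  # nm, accumulated over cycles
print(f"thickness after 100 cycles: {thickness[-1]:.1f} nm")
```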
In MLD, it is not strange to observe that experiments often yield lower than anticipated growth rates. This is due to several factors, [ 22 ] such as:
This phenomenon can be avoided as much as possible by using organic precursors with stiff backbones [ 24 ] or with more than two functional groups, [ 23 ] by using a three-step reaction sequence, [ 25 ] or by using precursors in which ring-opening reactions occur. [ 26 ]
High volatility and ease of handling make liquid precursors the preferred choice for ALD/MLD. Generally, liquid precursors have high enough vapor pressures at room temperature and hence require limited to no heating. They are also not prone to common problems with solid precursors like caking, particle size change and channeling, and they provide consistent and stable vapor delivery. Hence, some solid precursors with low melting points are generally used in their liquid states.
A carrier gas is usually employed to carry the precursor vapor from its source to the reactor. The precursor vapors can be directly entrained into this carrier gas with the help of solenoid and needle valves. [ 27 ] On the other hand, the carrier gas may be flown over the head space of a container containing the precursor or bubbled through the precursor. For the latter, dip-tube bubblers are very commonly used. The setup comprises a hollow tube (inlet) opening almost at the bottom of a sealed ampoule filled with precursor and an outlet at the top of the ampoule. An inert carrier gas like Nitrogen/Argon is bubbled through the liquid via the tube and led to the reactor downstream via the outlet. Owing to relatively fast evaporation kinetics of liquids, the outcoming carrier gas is nearly saturated with precursor vapor. The vapor supply to the reactor can be regulated by adjusting the carrier gas flow, temperature of the precursor and if needed, can be diluted further down the line. It must be ensured that the connections downstream from the bubbler are kept at high enough temperatures so as to avoid precursor condensation. The setup can also be used in spatial reactors which demand extremely high, stable and constant supply of precursor vapor.
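For a bubbler whose outlet carrier gas is fully saturated, the entrained precursor flow can be estimated from the precursor's vapor pressure and the bubbler pressure. The sketch below assumes ideal-gas behaviour and complete saturation; the example numbers are illustrative only.

```python
def precursor_flow_sccm(carrier_sccm, vapor_pressure_mbar, bubbler_pressure_mbar):
    # Precursor vapor flow entrained by a fully saturated carrier gas, in sccm:
    # the precursor's partial-pressure fraction times the carrier flow.
    p_vap = vapor_pressure_mbar
    return carrier_sccm * p_vap / (bubbler_pressure_mbar - p_vap)

# e.g. 50 sccm carrier, 1 mbar precursor vapor pressure, 100 mbar bubbler pressure:
print(f"{precursor_flow_sccm(50.0, 1.0, 100.0):.2f} sccm of precursor vapor")
# Heating the bubbler raises the vapor pressure and hence the delivery rate.
```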
In conventional reactors, hold cells can also be used as a temporary reservoir of precursor vapor. [ 28 ] [ 29 ] In such a setup, the cell is initially evacuated. It is then opened to a precursor source and allowed to be filled with precursor vapor. The cell is then cut off from the precursor source. Depending upon the reactor pressure, the cell may then be pressurized with an inert gas. Finally, the cell is opened to the reactor and the precursor is delivered. This cycle of filling and emptying the hold (storage) cell can be synced with an ALD cycle. The setup is not suitable for spatial reactors which demand continuous supply of vapor.
Solid precursors are not as common as liquid ones but are still used. A very common example of a solid precursor with potential applications in ALD for the semiconductor industry is trimethylindium (TMIn). In MLD, some solid co-reactants like p-Aminophenol, Hydroquinone and p-Phenylenediamine can overcome the problem of double reactions faced by liquid reactants like Ethylene glycol. Their aromatic backbone can be cited as one of the reasons for this. Growth rates obtained from such precursors are usually higher than from precursors with flexible backbones.
However, most of the solid precursors have relatively low vapor pressures and slow evaporation kinetics.
For temporal setups, the precursor is generally loaded in a heated boat and the overhead vapors are swept to the reactor by a carrier gas. However, slow evaporation kinetics make it difficult to deliver equilibrium vapor pressures. To ensure maximum saturation of the carrier gas with the precursor vapor, the contact between the carrier gas and the precursor needs to be sufficiently long. A simple dip-tube bubbler, commonly used for liquids, can be used for this purpose, but the consistency of vapor delivery from such a setup is vulnerable to evaporative/sublimative cooling of the precursor, [ 30 ] [ 31 ] precursor caking, carrier gas channeling, [ 32 ] changes in precursor morphology and particle size change. [ 32 ] Also, blowing high flows of carrier gas through a solid precursor can lead to small particles being carried away to the reactor or a downstream filter, thereby clogging it. To avoid these problems, the precursor may first be dissolved in a non-volatile inert liquid or suspended in it, and the solution/suspension can then be used in a bubbler setup. [ 33 ]
Apart from this, some special vapor delivery systems have also been designed for solid precursors to ensure stable and consistent delivery of precursor vapor for longer durations and higher carrier flows. [ 32 ] [ 34 ]
ALD/MLD are both gas-phase processes. Hence, precursors are required to be introduced into the reaction zones in their gaseous form. A precursor already existing in the gaseous state makes its transport to the reactor straightforward and hassle-free; for example, there is no need to heat the precursor, reducing the risk of condensation. However, precursors are seldom available in the gaseous state. On the other hand, some ALD co-reactants are available in gaseous form. Examples include H 2 S used for sulphide films; [ 35 ] NH 3 used for nitride films; [ 36 ] and plasmas of O 2 [ 37 ] and O 3 [ 38 ] to produce oxides. The most common and straightforward way of regulating the supply of these co-reactants to the reactor is a mass flow controller attached between the source and the reactor. They can also be diluted with an inert gas to control their partial pressure.
Several characterisation techniques have evolved over time as the demand for creating ALD/MLD films for different applications has increased. This includes lab-based characterisation and efficient synchrotron-based x-ray techniques.
Since they both follow a similar protocol, almost all characterisation applicable to ALD generally applies to MLD as well. Many tools have been employed to characterise MLD film properties such as thickness, surface and interface roughness, composition, and morphology. Thickness and roughness (surface and interface) of a grown MLD film are of utmost importance and are usually characterised ex-situ by X-ray reflectivity (XRR) . [ 39 ] In-situ techniques offer an easier and more efficient characterisation than their ex-situ counterparts, among which spectroscopic ellipsometry (SE) [ 40 ] and quartz crystal microbalance (QCM) [ 41 ] have become very popular to measure thin films from a few angstroms to a few micrometers with exceptional thickness control. [ 42 ] [ 43 ]
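For QCM specifically, a frequency shift is commonly converted to deposited mass via the Sauerbrey equation, Δm = −C·Δf, valid for thin, rigid films. The sketch below assumes the standard sensitivity constant of a 5 MHz AT-cut crystal and an illustrative film density; neither value is taken from the text.

```python
SAUERBREY_C = 17.7  # ng/(cm^2*Hz), standard sensitivity of a 5 MHz AT-cut crystal

def mass_per_area_ng_cm2(delta_f_hz):
    # Sauerbrey equation: deposited mass is proportional to the frequency drop.
    return -SAUERBREY_C * delta_f_hz

def thickness_nm(delta_f_hz, density_g_cm3):
    mass_g_cm2 = mass_per_area_ng_cm2(delta_f_hz) * 1e-9
    return (mass_g_cm2 / density_g_cm3) * 1e7  # cm -> nm

# e.g. a -2 Hz shift per MLD cycle for a hybrid film of assumed density 1.5 g/cm^3:
print(f"~{thickness_nm(-2.0, 1.5):.3f} nm per cycle")
```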
X-ray photoelectron spectroscopy (XPS) [ 44 ] and X-ray diffractometry (XRD) [ 45 ] are widely used to gain insights into film composition and crystallinity, respectively, whereas atomic force microscopy (AFM) [ 46 ] and scanning electron microscopy (SEM) [ 47 ] are frequently utilised to observe surface roughness and morphology. As MLD mostly deals with hybrid materials comprising both organic and inorganic components, Fourier transform infrared spectroscopy (FTIR) [ 48 ] is an important tool for identifying the functional groups added or removed during the MLD cycles, and it is also a powerful tool for elucidating the underlying chemistry or surface reactions [ 25 ] during each sub-cycle of an MLD process.
A synchrotron is an immensely powerful source of X-rays, reaching intensities that cannot be achieved in a lab-based environment. It produces synchrotron radiation, the electromagnetic radiation emitted when charged particles undergo radial acceleration, whose high power offers a deeper understanding of processes and leads to cutting-edge research outputs. [ 49 ] Synchrotron-based characterisation also offers opportunities for understanding the basic chemistry of MLD processes and developing fundamental knowledge about them and their potential applications. [ 50 ] [ 51 ] The combination of in-situ X-ray fluorescence (XRF) [ 52 ] and grazing-incidence small-angle X-ray scattering (GISAXS) [ 53 ] has been demonstrated as a successful methodology for studying nucleation and growth during ALD processes. [ 54 ] [ 55 ] Although this combination has not yet been investigated in detail for MLD processes, it holds great potential to improve the understanding of the initial nucleation and internal structure of hybrid materials developed by MLD or by vapour phase infiltration (VPI). [ 56 ]
The main applications of molecular-scale-engineered hybrid materials rely on their synergetic properties, which surpass the individual performance of their inorganic and organic components. The main fields of application of MLD-deposited materials are outlined below. [ 57 ]
Combining inorganic and organic building blocks on a molecular scale has proved to be challenging, due to the different preparative conditions needed for forming inorganic and organic networks. Current routes are often based on solution chemistry, e.g. sol-gel synthesis combined with spin-coating, dipping or spraying, to which MLD is an alternative.
The dielectric constant (k) of a medium is defined as the ratio of a capacitor's capacitance with and without the medium. [ 58 ] Delay, crosstalk and power dissipation caused by the resistance of metal interconnects and their dielectric layers have become the main factors limiting the performance of nanoscale devices and, as electronic devices are scaled down further, interconnect resistance-capacitance (RC) delay may dominate overall device speed. To address this, current work focuses on minimising the dielectric constant of materials by combining inorganic and organic components; [ 59 ] the reduced capacitance allows the spacing between metal lines to shrink and, with it, the number of metal layers in a device to decrease. In this kind of material, the inorganic part must be hard and resistant, and for that purpose metal oxides and fluorides are commonly used. However, since these materials are brittle, organic polymers are also added, providing the hybrid material with a low dielectric constant, good gap-filling ability, high flatness, low residual stress and low thermal conductivity. In current research, great effort is being devoted to preparing low-k materials by MLD with k values below 3. [ 60 ]
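The role of k in RC delay can be illustrated with a crude parallel-plate estimate: the line-to-line capacitance, and hence the delay, scales linearly with k. The geometry and resistance values in the sketch below are hypothetical; only the comparison between k values (SiO2 versus a nominal low-k hybrid) carries meaning.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def rc_delay(k, line_resistance_ohm, area_m2, spacing_m):
    """RC delay (s) of a metal line, treating the line-to-line
    capacitance as a parallel-plate capacitor: C = k * eps0 * A / d."""
    capacitance = k * EPS0 * area_m2 / spacing_m
    return line_resistance_ohm * capacitance

# Same hypothetical geometry, SiO2 (k ~ 3.9) vs a hybrid MLD low-k film (k ~ 2.5)
geometry = dict(line_resistance_ohm=100.0, area_m2=1e-12, spacing_m=100e-9)
for k in (3.9, 2.5):
    print(f"k = {k}: delay = {rc_delay(k, **geometry):.3e} s")
```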
Novel organic thin-film transistors require a high-performance dielectric layer that is thin and possesses a high k-value. MLD makes it possible to tune the k-value and dielectric strength by altering the amount and ratio of the organic and inorganic components. Moreover, MLD allows better mechanical properties, in terms of flexibility, to be achieved.
Various hybrid dielectrics have already been developed: zircone hybrids from zirconium tert-butoxide (ZTB) and ethylene glycol (EG), [ 61 ] and Al 2 O 3 -based hybrids such as self-assembled MLD-deposited octenyltrichlorosilane (OTS) layers with Al 2 O 3 linkers. [ 62 ] Additionally, a dielectric Ti-based hybrid from TiCl 4 and fumaric acid has proved applicable in charge memory capacitors. [ 63 ]
MLD has high potential for the deposition of porous hybrid organic-inorganic and purely organic films, such as metal-organic frameworks (MOFs) and covalent organic frameworks (COFs). Thanks to their defined pore structure and chemical tunability, thin films of these novel materials are expected to be incorporated in the next generation of gas sensors and low-k dielectrics. [ 64 ] [ 65 ] Conventionally, thin films of MOFs and COFs are grown via solvent-based routes, which are detrimental in a cleanroom environment and can corrode pre-existing circuitry. [ 64 ] As a cleanroom-compatible technique, MLD presents an attractive alternative, though one not yet fully realized: to date, there are no reports of direct MLD of MOFs or COFs. Scientists are actively developing other solvent-free, all-gas-phase methods towards a true MLD process.
One of the early examples of an MLD-like process is the so-called "MOF-CVD". It was first realized for ZIF-8 using a two-step process: ALD of ZnO followed by exposure to 2-methylimidazole linker vapor. [ 66 ] It was later extended to several other MOFs. [ 67 ] [ 68 ] MOF-CVD is a single-chamber deposition method and the reactions involved exhibit a self-limiting nature, bearing a strong resemblance to a typical MLD process.
An attempt to perform direct MLD of a MOF by sequential reactions of a metal precursor and an organic linker commonly results in a dense, amorphous film. Some of these materials can serve as MOF precursors after a specific gas-phase post-treatment; this two-step process presents an alternative to MOF-CVD. It has been successfully realized for a few prototypical MOFs: IRMOF-8, [ 69 ] MOF-5, [ 70 ] and UiO-66. [ 71 ] Though the post-treatment step is necessary for MOF crystallization, it often requires harsh conditions (high temperature, corrosive vapors) that lead to rough and non-uniform films. Deposition with minimal or no post-treatment is highly desirable for industrial applications.
Conductive and flexible films are crucial for numerous emerging applications, such as displays, wearable devices, photovoltaics, and personal medical devices. For example, a zincone hybrid is closely related to a ZnO film and may therefore combine the conductivity of ZnO with the flexibility of an organic layer. Zincones can be deposited from diethylzinc (DEZ), hydroquinone (HQ) and water to generate a molecular chain of the form (−Zn-O-phenylene-O−) n , which is an electrical conductor. [ 72 ] Measurements of a pure ZnO film showed a conductivity of ~14 S/m, while the MLD zincone showed ~170 S/m, an enhancement of more than one order of magnitude in the hybrid alloy.
One of the main applications of MLD in the battery field is coating battery electrodes with hybrid (organic-inorganic) films. Such coatings can potentially protect the electrodes from the main sources of degradation without breaking: being more flexible than purely inorganic materials, they can cope with the volume expansion that battery electrodes undergo upon charge and discharge.
Atomic/molecular layer deposition (ALD/MLD), as a thin-film deposition technology with high precision and control, makes it possible to produce high-quality hybrid inorganic-organic superlattice structures. Adding organic barrier layers inside the inorganic lattice of a thermoelectric material improves its thermoelectric efficiency. This is the result of the quenching effect the organic barrier layers have on phonons: electrons, which are mainly responsible for electrical transport through the lattice, pass through the organic layers largely unhindered, while phonons, which are responsible for thermal transport, are suppressed to some degree. The resulting films therefore have better thermoelectric efficiency.
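This trade-off is captured by the thermoelectric figure of merit, ZT = S²σT/κ, where S is the Seebeck coefficient, σ the electrical conductivity, T the absolute temperature and κ the thermal conductivity. A minimal sketch with illustrative (not measured) numbers shows how halving κ at a small cost in σ raises ZT:

```python
def figure_of_merit(seebeck_V_per_K, conductivity_S_per_m, kappa_W_per_mK, T_kelvin):
    """Thermoelectric figure of merit ZT = S^2 * sigma * T / kappa."""
    return seebeck_V_per_K**2 * conductivity_S_per_m * T_kelvin / kappa_W_per_mK

# Hypothetical oxide film before and after inserting organic barrier layers:
# kappa is halved while sigma is only slightly reduced.
zt_plain  = figure_of_merit(200e-6, 1.0e4, 4.0, 300.0)
zt_hybrid = figure_of_merit(200e-6, 0.9e4, 2.0, 300.0)
print(f"ZT plain ~ {zt_plain:.3f}, ZT hybrid ~ {zt_hybrid:.3f}")  # 0.030 vs 0.054
```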
It is believed that barrier layers, along with other methods for increasing thermoelectric efficiency, can help produce thermoelectric modules that are non-toxic, flexible, cheap, and stable. One such case is thermoelectric oxides of earth-abundant elements. Compared to other thermoelectric materials, these oxides have lower thermoelectric efficiency due to their higher thermal conductivity; adding barrier layers by means of ALD/MLD is therefore a good way to overcome this drawback.
MLD can also be applied to the design of bioactive and biocompatible surfaces for targeted cell and tissue responses. Bioactive materials include materials for regenerative medicine, tissue engineering (tissue scaffolds), and biosensors. The important factors affecting the cell-surface interaction, as well as the immune response of the system, are surface chemistry (e.g. functional groups, surface charge and wettability) and surface topography. [ 76 ] Understanding these properties is crucial for controlling the attachment and proliferation of cells and the resulting bioactivity of the surfaces. Furthermore, the choice of organic building blocks and the type of biomolecule (e.g. proteins, peptides or polysaccharides) used in forming bioactive surfaces is a key factor in the cellular response to the surface. MLD allows precise bioactive structures to be built by combining such organic molecules with inorganic biocompatible elements like titanium, enabling surface modification and functionalization. The use of MLD for biomedical applications is not yet widely studied and remains a promising field of research.
A study published in 2017 used MLD to create bioactive scaffolds by combining titanium clusters with amino acids such as glycine, L-aspartic acid and L-arginine as organic linkers, to enhance rat conjunctival goblet cell proliferation. [ 77 ] This novel group of organic-inorganic hybrid materials was called titaminates. Bioactive hybrid materials containing titanium and primary nucleobases such as thymine, uracil and adenine also show high (>85%) cell viability and potential application in the field of tissue engineering. [ 78 ] [ 79 ]
Hospital-acquired infections caused by pathogenic microorganisms such as bacteria, viruses, parasites or fungi are a major problem in modern healthcare. [ 80 ] A large number of these microbes have developed the ability to resist common antimicrobial agents (such as antibiotics and antivirals). To overcome the growing problem of antimicrobial resistance, it has become necessary to develop alternative, effective antimicrobial technologies to which pathogens cannot develop resistance.
One possible approach is to cover the surface of medical devices with antimicrobial agents, e.g. photosensitive organic molecules. In the method called antimicrobial photodynamic inactivation [ 81 ] (aPDI), photosensitive organic molecules use light energy to form highly reactive oxygen species that oxidize biomolecules (such as proteins, lipids and nucleic acids), leading to pathogen death. [ 82 ] [ 83 ] Furthermore, aPDI can treat the infected area locally, an advantage for small medical devices like dental implants. MLD is a suitable technique for combining photosensitive organic molecules such as aromatic acids with biocompatible metal clusters (e.g. zirconium or titanium) to create light-activated antimicrobial coatings with controlled thickness and accuracy. Recent studies show that MLD-fabricated surfaces based on 2,6-naphthalenedicarboxylic acid and Zr-O clusters were successfully used against Enterococcus faecalis under UV-A irradiation. [ 84 ]
The main advantage of molecular layer deposition relates to its slow, cyclical approach. While other techniques may yield thicker films in shorter times, molecular layer deposition is known for thickness control with angstrom-level precision. In addition, its cyclical approach yields films with excellent conformality, making it suitable for coating surfaces with complex shapes. The growth of multilayers consisting of different materials is also possible with MLD, and the ratio of organic to inorganic components in hybrid films can easily be controlled and tailored to research needs.
Conversely, the main disadvantage of molecular layer deposition is also related to its slow, cyclical approach. Since both precursors are pulsed sequentially during each cycle, and saturation must be achieved each time, the time required to obtain a sufficiently thick film can easily be on the order of hours, if not days. In addition, before depositing the desired film it is always necessary to test and optimise all parameters for it to yield successful results.
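The time cost is straightforward to estimate: the number of cycles equals the target thickness divided by the growth per cycle, multiplied by the cycle duration. A rough sketch with assumed, typical-order values (not taken from the text):

```python
def deposition_time_hours(target_nm, growth_per_cycle_nm, cycle_seconds):
    """Total process time for a saturating, cyclical MLD process."""
    cycles = target_nm / growth_per_cycle_nm
    return cycles * cycle_seconds / 3600.0

# e.g. a 100 nm film at an assumed 0.3 nm/cycle with 30 s per full cycle
print(f"{deposition_time_hours(100.0, 0.3, 30.0):.1f} h")  # ~2.8 h
```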
Another issue with hybrid films deposited via MLD is their stability. Hybrid organic/inorganic films can degrade or shrink in the presence of H 2 O. However, this can be used to facilitate the chemical transformation of the films, and modifying MLD surface chemistries can increase the stability and mechanical strength of hybrid films.
In terms of cost, regular molecular layer deposition equipment can cost between $200,000 and $800,000. Moreover, the cost of the precursors used needs to be taken into consideration. [ 85 ]
Similar to the atomic layer deposition case, there are some rather strict chemical limitations for precursors to be suitable for molecular layer deposition.
MLD precursors must satisfy a set of strict requirements; [ 86 ] in addition, it is advisable to select precursors with further favourable characteristics. | https://en.wikipedia.org/wiki/Molecular_layer_deposition |
A molecular lesion or point lesion is damage to the structure of a biological molecule such as DNA , RNA , or protein . This damage may result in the reduction or absence of normal function, and in rare cases the gain of a new function. Lesions in DNA may consist of breaks or other changes in chemical structure of the helix, ultimately preventing transcription. Meanwhile, lesions in proteins consist of both broken bonds and improper folding of the amino acid chain . While many nucleic acid lesions are general across DNA and RNA, some are specific to one, such as thymine dimers being found exclusively in DNA. Several cellular repair mechanisms exist, ranging from global to specific, in order to prevent lasting damage resulting from lesions.
There are two broad causes of nucleic acid lesions: endogenous and exogenous factors. Endogenous factors arise from conditions that develop within an organism, in contrast with exogenous factors, which originate outside the organism. DNA and RNA lesions caused by endogenous factors generally occur more frequently than damage caused by exogenous ones. [ 1 ]
Endogenous sources of DNA damage include pathways such as hydrolysis, oxidation, alkylation, mismatch of DNA bases, depurination, depyrimidination, double-strand breaks (DSBs), and cytosine deamination. DNA lesions can also occur naturally through the release of compounds such as reactive oxygen species (ROS), reactive nitrogen species (RNS), reactive carbonyl species (RCS), lipid peroxidation products, adducts, and alkylating agents during metabolic processes. ROS are one of the major endogenous sources of DNA damage, and the most studied oxidative DNA adduct is 8-oxo-dG. Other adducts known to form are etheno-, propano-, and malondialdehyde-derived DNA adducts. The aldehydes formed from lipid peroxidation pose a further threat to DNA. [ 2 ] "Damage-up" proteins (DDPs) can promote endogenous DNA lesions by increasing the amount of reactive oxygen via transmembrane transporters, causing chromosome loss via replisome binding, or stalling replication via transcription factors. [ 3 ] For RNA specifically, the most abundant types of endogenous damage include oxidation, alkylation, and chlorination. [ 4 ] Phagocytic cells produce radical species, including hypochlorous acid (HOCl), nitric oxide (NO•), and peroxynitrite (ONOO−), to fight infections, and many cell types use nitric oxide as a signaling molecule; however, these radical species can also trigger the pathways that form RNA lesions. [ 5 ]
UV light, specifically non-ionizing shorter-wavelength radiation such as UVC and UVB, causes direct DNA damage by inducing bond formation between two adjacent thymine bases, producing a thymine dimer. The resulting dimer is very stable. Although dimers can be removed through excision repair, when UV damage is extensive the entire DNA molecule breaks down and the cell dies. If the damage is not too extensive, precancerous or cancerous cells are created from healthy cells. [ 6 ]
Chemotherapeutics, by design, induce DNA damage and are targeted towards rapidly dividing cancer cells. [ 7 ] However, these drugs cannot distinguish between diseased and healthy cells, resulting in damage to normal cells. [ 8 ]
Alkylating agents are a type of chemotherapeutic drug that keeps cells from undergoing mitosis by damaging their DNA. They work in all phases of the cell cycle. The use of alkylating agents may result in leukemia, since they can target the cells of the bone marrow. [ 8 ]
Carcinogens are known to cause a number of DNA lesions, such as single-strand breaks, double-strand breaks, and covalently bound chemical DNA adducts. Tobacco products are among the most prevalent cancer-causing agents today. [ 9 ] Other DNA-damaging, cancer-causing agents include asbestos, which can damage DNA through physical interaction or by indirectly generating reactive oxygen species; [ 10 ] excessive nickel exposure, which can repress DNA damage-repair pathways; [ 11 ] aflatoxins, which are found in food; [ 9 ] and many more.
Oxidative lesions are an umbrella category of lesions caused by reactive oxygen species (ROS), reactive nitrogen species (RNS), other byproducts of cellular metabolism, and exogenous factors such as ionizing or ultraviolet radiation. [ 12 ] Byproducts of oxidative respiration are the main source of the reactive species that cause a background level of oxidative lesions in the cell. DNA and RNA are both affected, and RNA oxidative lesions have been found to be more abundant than DNA ones in humans, possibly because cytoplasmic RNA lies closer to the electron transport chain. [ 13 ] Numerous oxidative lesions have been characterized in DNA and RNA, although oxidized products are unstable and may resolve quickly. The hydroxyl radical and singlet oxygen are common reactive oxygen species responsible for these lesions. [ 14 ] 8-oxo-guanine (8-oxoG) is the most abundant and best characterized oxidative lesion, found in both RNA and DNA. Accumulation of 8-oxoG can cause severe damage within the mitochondria and is thought to be a key player in the aging process. [ 15 ] RNA oxidation has direct consequences for the production of proteins: mRNA affected by oxidative lesions is still recognized by the ribosome, but the ribosome then stalls and malfunctions. This results in proteins with decreased expression or truncation, leading to aggregation and general dysfunction. [ 16 ]
Single-strand breaks (SSBs) occur when one strand of the DNA double helix is broken at a single nucleotide, accompanied by damaged 5'- and/or 3'-termini at that point. One common source of SSBs is oxidative attack by physiological reactive oxygen species (ROS) such as hydrogen peroxide; H 2 O 2 causes SSBs three times more frequently than double-strand breaks (DSBs). SSBs can also arise through direct disintegration of the oxidized sugar or through DNA base-excision repair (BER) of damaged bases. Additionally, cellular enzymes may act erroneously, leading to SSBs or DSBs by a variety of mechanisms. One such example is the cleavage complex formed by DNA topoisomerase 1 (TOP1), which relaxes DNA during transcription and replication through the transient formation of a nick. While TOP1 normally reseals this nick shortly afterwards, cleavage complexes may collide with RNA or DNA polymerases, or lie proximal to other lesions, leading to TOP1-linked SSBs or TOP1-linked DSBs. [ 20 ]
A DNA adduct is a segment of DNA bound to a chemical carcinogen. Adducts that cause DNA lesions include oxidatively modified bases and propano-, etheno-, and MDA-induced adducts. [ 2 ] 5-Hydroxymethyluracil is an example of an oxidatively modified base, formed by oxidation of the methyl group of thymine. [ 21 ] This adduct interferes with the binding of transcription factors to DNA, which can trigger apoptosis or result in deletion mutations. [ 21 ] Propano adducts are derived from species generated by lipid peroxidation; for example, HNE is a major toxic product of that process [ 22 ] and regulates the expression of genes involved in cell cycle regulation and apoptosis. Some of the aldehydes from lipid peroxidation can be converted to epoxy aldehydes by oxidation, [ 23 ] and these epoxy aldehydes can damage DNA by producing etheno adducts. An increase in this type of DNA lesion reflects conditions of oxidative stress, which is known to be associated with an increased risk of cancer. [ 24 ] Malondialdehyde (MDA) is another highly toxic product of lipid peroxidation and is also formed during prostaglandin synthesis; MDA reacts with DNA to form the M1dG adduct, which causes DNA lesions. [ 2 ]
Many systems are in place to repair DNA and RNA lesions, but it is possible for lesions to escape these measures. This may lead to mutations or large genome abnormalities, which can threaten the ability of the cell or organism to live. Several cancers result from DNA lesions, and even the repair mechanisms meant to heal the damage may end up causing more. Mismatch repair defects, for example, cause instability that predisposes to colorectal and endometrial carcinomas. [ 9 ]
DNA lesions in neurons may lead to neurodegenerative disorders such as Alzheimer's, Huntington's, and Parkinson's diseases. These arise because neurons are generally associated with high mitochondrial respiration and redox species production, which can damage nuclear DNA. Since these cells often cannot be replaced once damaged, the damage done to them has dire consequences. Other disorders stemming from DNA lesions in neurons include, but are not limited to, Fragile X syndrome, Friedreich's ataxia, and the spinocerebellar ataxias. [ 9 ]
During replication, DNA polymerases are usually unable to proceed past a lesioned area; however, some cells are equipped with special polymerases that allow translesion synthesis (TLS). TLS polymerases can replicate DNA past lesions, but at the risk of generating mutations at high frequency. Common mutations arising from this process are point mutations and frameshift mutations. Several diseases result from it, including several cancers and xeroderma pigmentosum. [ 25 ]
Oxidatively damaged RNA is implicated in a number of human diseases and is especially associated with chronic degeneration. This type of damage has been observed in many neurodegenerative diseases, such as amyotrophic lateral sclerosis, [ 9 ] Alzheimer's, Parkinson's, dementia with Lewy bodies, and several prion diseases. [ 26 ] This list is growing rapidly, and data suggest that RNA oxidation occurs early in the development of these diseases rather than as an effect of cellular decay. [ 9 ] RNA and DNA lesions are both associated with the development of diabetes mellitus type 2. [ 9 ]
When DNA is damaged, such as by a lesion, a complex signal transduction pathway known as the DNA damage response (DDR) is activated, responsible for recognizing the damage and instigating the cell's repair response. Compared to the other lesion repair mechanisms, the DDR is the highest level of repair and is employed for the most complex lesions. The DDR consists of various pathways, the most common of which are the DDR kinase signaling cascades. These are controlled by the phosphatidylinositol 3-kinase-related kinases (PIKKs), ranging from DNA-dependent protein kinase (DNA-PKcs) and ataxia-telangiectasia mutated (ATM), which are most involved in repairing DSBs, to the more versatile ataxia telangiectasia and Rad3-related kinase (ATR). ATR is crucial to human cell viability, while ATM mutations cause the severe disorder ataxia-telangiectasia, leading to neurodegeneration, cancer, and immunodeficiency. These three DDR kinases all recognize damage via protein-protein interactions that localize the kinases to the areas of damage. Further protein-protein interactions and posttranslational modifications (PTMs) then complete kinase activation, and a series of phosphorylation events takes place. DDR kinases perform repair regulation at three levels: via PTMs, at the level of chromatin, and at the level of the nucleus. [ 27 ]
Base excision repair (BER) is responsible for removing damaged bases in DNA. This mechanism specifically excises small base lesions that do not distort the DNA double helix, in contrast to the nucleotide excision repair pathway, which corrects more prominent distorting lesions. DNA glycosylases initiate BER by recognizing the faulty or incorrect bases and removing them, forming AP sites lacking any purine or pyrimidine. AP endonuclease then cleaves the AP site, and the resulting single-strand break is processed either by short-patch BER, replacing a single nucleotide, or by long-patch BER, creating 2-10 replacement nucleotides. [ 28 ]
Single-strand breaks (SSBs) can severely threaten genetic stability and cell survival if not quickly and properly repaired, so cells have developed fast and efficient SSB repair (SSBR) mechanisms. While global SSBR systems repair SSBs throughout the genome and during interphase, S-phase-specific SSBR processes work together with homologous recombination at replication forks. [ 29 ]
Double-strand breaks (DSBs) are a threat to all organisms as they can cause cell death and cancer. They can be caused exogenously by radiation, or endogenously by errors in replication or encounters between the replication fork and DNA lesions. [ 30 ] DSB repair proceeds through a variety of pathways and mechanisms in order to correct these errors.
Nucleotide excision repair (NER) is one of the main mechanisms for removing bulky adducts from DNA lesions caused by chemotherapy drugs, environmental mutagens, and, most importantly, UV radiation. [ 9 ] It functions by releasing a short damage-containing oligonucleotide from the DNA, after which the resulting gap is filled in and repaired. [ 9 ] NER recognizes a variety of structurally unrelated DNA lesions thanks to the flexibility of the mechanism itself, being highly sensitive to changes in the DNA helical structure; [ 31 ] bulky adducts appear to trigger it. [ 31 ] The XPC-RAD23-CETN2 heterotrimer involved in NER plays a critical role in DNA lesion recognition. [ 32 ] In addition to other general lesions in the genome, the UV-damaged DNA-binding protein complex (UV-DDB) also has an important role in both recognition and repair of UV-induced DNA photolesions. [ 32 ]
Mismatch repair (MMR) mechanisms within the cell correct base mispairs that occur during replication using a variety of pathways. MMR targets DNA lesions with high affinity and specificity, as alterations in base-pair stacking at DNA lesion sites affect the helical structure. [ 33 ] This is likely one of many signals that trigger MMR. | https://en.wikipedia.org/wiki/Molecular_lesion |
A molecular logic gate is a molecule that performs a logical operation based on one or more physical or chemical inputs and a single output. The field has advanced from simple logic systems based on a single chemical or physical input to molecules capable of combinatorial and sequential operations such as arithmetic (e.g. moleculators and memory storage algorithms). [ 1 ] Molecular logic gates work with input signals based on chemical processes and output signals based on spectroscopic phenomena.
Logic gates are the fundamental building blocks of computers , microcontrollers and other electrical circuits that require one or more logical operations. They can be used to construct digital architectures with varying degrees of complexity by a cascade of a few to several million logic gates, and are essentially physical devices that produce a singular binary output after performing logical operations based on Boolean functions on one or more binary inputs. The concept of molecular logic gates, extending the applicability of logic gates to molecules, aims to convert chemical systems into computational units. [ 2 ] [ 3 ] The field has evolved to realize several practical applications in fields such as molecular electronics , biosensing , DNA computing , nanorobotics , and cell imaging .
For logic gates with a single input, there are four possible output patterns. When the input is 0, the output can be either 0 or 1. When the input is 1, the output can again be 0 or 1. The four output bit patterns correspond to four logic types: PASS 0, YES, NOT, and PASS 1. PASS 0 and PASS 1 always output 0 and 1, respectively, regardless of input. YES outputs 1 when the input is 1, and NOT is the inverse of YES: it outputs 0 when the input is 1.
AND, OR, XOR, NAND, NOR, XNOR, and INH are two-input logic gates. The AND, OR, and XOR gates are fundamental logic gates, and the NAND, NOR, and XNOR gates are their respective complements. An INHIBIT (INH) gate is a special conditional logic gate that includes a prohibitory input: when the prohibitory input is absent, the output depends solely on the other input.
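These input-output conventions map directly onto Boolean functions. The following sketch simply enumerates the truth tables described above, with the INHIBIT gate written so that its second input acts as the prohibitor; it models the abstract logic only, not any particular chemical implementation.

```python
# Single-input gates: the four possible output patterns.
def YES(a):    return a
def NOT(a):    return 1 - a
def PASS_0(a): return 0
def PASS_1(a): return 1

# Two-input gates, including INHIBIT with input b acting as the prohibitor.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b
def INH(a, b): return a & NOT(b)  # output follows a unless prohibitor b is 1

for a in (0, 1):
    print(f"a={a}: YES={YES(a)} NOT={NOT(a)} PASS0={PASS_0(a)} PASS1={PASS_1(a)}")
for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}: AND={AND(a,b)} OR={OR(a,b)} XOR={XOR(a,b)} INH={INH(a,b)}")
```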
One of the earliest ideas for the use of π-conjugated molecules in molecular computation was proposed by Ari Aviram from IBM in 1988. [ 5 ]
The first practical realization of molecular logic came in the seminal work of de Silva et al., who constructed a molecular photoionic AND gate with a fluorescent output. [ 6 ] While a YES molecular logic gate can convert signals from ionic to photonic form, it is a single-input, single-output system. To build more complex molecular logic architectures, two-input gates, namely AND and OR gates, are needed. Some early works made progress in this direction, but they could not realize a complete truth table because their protonated ionic forms could not bind to the substrate in every case. [ 7 ] [ 8 ] De Silva et al. constructed an anthracene-based AND gate made up of a tertiary amine and a benzo-18-crown-6 unit, both known to show photoinduced electron transfer (PET) processes. The two moieties acted as receptors, connected to the anthracene-based fluorophore by alkyl spacers. The PET is quenched upon coordination with protons [ 9 ] and sodium ions, [ 10 ] respectively, for the two receptors, causing the anthracene unit to fluoresce.
An example of a YES logic gate comprises a benzo-crown-ether connected to a cyano-substituted anthracene unit. An output of 1 (fluorescence) is obtained only when sodium ions are present in the solution (indicating an input of 1). Sodium ions are encapsulated by the crown ether , resulting in a quenching of the PET process and causing the anthracene unit to fluoresce. [ 11 ]
This molecular logic gate illustrates the advancement from redox-fluorescent switches to multi-input logic gates with an electrochemical switch, detecting the presence of acids. This two-input AND logic gate incorporates a tertiary amine proton receptor and a tetrathiafulvalene redox donor. These groups, when attached to anthracene, can simultaneously process information concerning the concentration of the acid and oxidizing ability of the solution. [ 12 ]
De Silva et al. constructed an OR molecular logic gate using an aza-crown ether receptor and sodium and potassium ions as the inputs. Either of the two ions could bind to the crown ether, causing the PET to be quenched and the fluorescence to be turned on. Since either of the two ions (input “1”) could cause fluorescence (output “1”), the system resembled an OR logic gate. [ 6 ]
The INH logic gate incorporates a Tb 3+ ion in a chelate complex. This two-input logic gate displays non-commutative behavior with chemical inputs and a phosphorescence output. Whenever dioxygen (input "1") is present, the system is quenched and no phosphorescence is observed (output "0"). For an output of "1", the second input, H + , must be present while dioxygen is absent. [ 13 ]
Parker and Williams constructed a NAND logic gate based on strong emission from a terbium complex of phenanthridine . When acid and oxygen (the two inputs) are absent (input “0”), the terbium center fluoresces (output “1”). [ 14 ]
Akkaya and coworkers demonstrated a molecular NOR gate using a boradiazaindacene system. Fluorescence of the highly emissive boradiazaindacene (output "1") was found to be quenched in the presence of either a zinc salt [Zn(II)] or trifluoroacetic acid (TFA) (input "1"). [ 15 ]
De Silva and McClenaghan designed a proof-of-principle arithmetic device based on molecular logic gates. Compound A is a push-pull olefin whose top receptor contains four carboxylic acid anion groups (with undisclosed counter-cations) capable of binding calcium. The bottom part is a quinoline unit, a receptor for hydrogen ions. The logic gate operates as follows: without any chemical input of Ca 2+ or H + , the chromophore shows an absorbance maximum in UV/VIS spectroscopy at 390 nm. When calcium is introduced, a hypsochromic shift (blue shift) takes place and the absorbance at 390 nm decreases; likewise, the addition of protons causes a bathochromic shift (red shift). When both cations are in water, the net result is absorption at the original 390 nm wavelength. This system represents an XNOR logic gate in absorption and an XOR logic gate in transmittance. [ 16 ]
In another XOR logic gate system, the chemistry is based on a pseudorotaxane. In organic solution, the electron-deficient diazapyrenium salt (rod) and the electron-rich 2,3-dioxynaphthalene units of the crown ether (ring) self-assemble by forming a charge-transfer complex. An added tertiary amine like tributylamine forms a 1:2 adduct with the diazapyrene and the complex dethreads. This process is accompanied by an increase in emission intensity at 343 nm from the freed crown ether. Added trifluoromethanesulfonic acid reacts with the amine and the process is reversed. Excess acid locks the crown ether by protonation and the complex dethreads again. [ 17 ]
In compound B, the bottom section contains a tertiary amino group capable of binding protons. In this system, fluorescence occurs only when both cations are present: binding of both hinders PET, allowing compound B to fluoresce. In the absence of either ion, fluorescence is quenched by PET, which involves electron transfer from the nitrogen atom, the oxygen atoms, or both, to the anthracenyl group. When both receptors are bound to calcium ions and protons, respectively, both PET channels are shut off. The overall result for compound B is AND logic, since an output of "1" (fluorescence) occurs only when both Ca 2+ and H + are present, i.e. both have value "1". With both systems running in parallel, and with transmittance monitored for system A and fluorescence for system B, the result is a half-adder capable of reproducing the equation 1 + 1 = 2. [ 16 ]
In a modification of system B, three chemical inputs are processed simultaneously in an AND logic gate. An enhanced fluorescence signal is observed only in the presence of excess protons, zinc and sodium ions, through interactions with their respective amine, phenyldiaminocarboxylate, and crown ether receptors. The processing mode operates similarly to that discussed above: fluorescence is observed due to the prevention of competing PET reactions from the receptors to the excited anthracene fluorophore. The absence of any ion input results in a low fluorescence output. Each receptor is selective for its specific ion, as an increase in the concentration of the other ions does not yield high fluorescence. The specific concentration threshold of each input must be reached to achieve a fluorescent output in accordance with combinatorial AND logic. [ 18 ]
A molecular logic gate can process modulators much like the setup seen in de Silva's proof-of-principle, [ 16 ] but incorporating different logic gates on the same molecule is challenging. Such a function is called integrated logic and is exemplified by the BODIPY-based half-subtractor logic gate of Coskun, Akkaya, and their colleagues. When monitored at two different wavelengths, 565 and 660 nm, XOR and INH logic gate operations are realized at the respective wavelengths. Optical studies of this compound in tetrahydrofuran reveal an absorbance peak at 565 nm and an emission peak at 660 nm. Addition of an acid results in a hypsochromic shift of both peaks, as protonation of the tertiary amine results in an internal charge transfer; the emission observed is yellow. When a strong base is added, the phenolic hydroxyl group is deprotonated, effecting a PET that renders the molecule non-emissive. When both an acid and a base are added, the molecule gives off a red emission, as the tertiary amine is not protonated while the hydroxyl group remains protonated, resulting in the absence of both PET and intramolecular charge transfer (ICT). Due to the great difference in emission intensity, this single molecule is capable of carrying out subtraction at the nanoscale. [ 19 ]
A full adder system based on fluorescein has also been constructed by Shanzer et al. The system is able to compute 1+1+1=3. [ 1 ]
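In Boolean terms, a half-adder pairs an XOR gate (the sum bit) with an AND gate (the carry bit), and a full adder chains two half-adders. The sketch below reproduces the arithmetic these molecular systems perform; it is purely illustrative and does not model the underlying photochemistry.

```python
def half_adder(a, b):
    """Sum = XOR, carry = AND -- the roles played by the two parallel gates."""
    return a ^ b, a & b

def full_adder(a, b, c):
    """Chain two half-adders; either stage may generate the carry."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, c)
    return s2, c1 | c2   # (sum bit, carry bit)

s, c = half_adder(1, 1)
print(f"1 + 1 = {c}{s} (binary) = {2*c + s}")        # 10 -> 2
s, c = full_adder(1, 1, 1)
print(f"1 + 1 + 1 = {c}{s} (binary) = {2*c + s}")    # 11 -> 3
```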
Over the years, the utility of molecular logic gates has been explored in a wide range of fields such as chemical and biological detection, the pharmaceutical and food industries, and the emerging fields of nanomaterials and chemical computing . [ 20 ] [ 21 ] [ 22 ] [ 23 ] [ 24 ]
Fluoride (F − ) and acetate (CH 3 COO − ) anions are among the most important in the context of human health and well-being. The former, used extensively in health care, is known for its toxicity and corrosiveness; the latter can cause alkalosis and affect metabolic pathways beyond a certain concentration. Hence, it is crucial to develop methods to detect these anions in aqueous media. Bhat et al. constructed an INH gate with receptors that bind selectively to F − and CH 3 COO − anions. The system used changes in absorbance as a colorimetric output to detect the concentration of anions. [ 25 ]
Wen and coworkers designed an INH molecular logic gate with Fe 3+ and EDTA as the inputs and a fluorescent output for the detection of ferric ions in solutions. The fluorescence of the system is quenched if and only if Fe 3+ input is present and EDTA is absent. [ 26 ]
Heavy metal ions are a persistent threat to human health because of their inherent toxicity and low degradability. Several molecular logic gate-based systems have been constructed to detect ions such as Cd 2+ , [ 27 ] Hg 2+ / Pb 2+ , [ 28 ] and Ag + . [ 29 ] In their work, Chen et al . demonstrated that logic gate-based systems could be used to detect Cd 2+ ions in rice samples. [ 27 ]
The effectiveness of methods such as chemotherapy to treat cancer tends to plateau after some time, as the cells undergo molecular changes that render them insensitive to anticancer drugs, [ 30 ] making the early detection of cancerous cells important. The biomarker microRNA (miRNA) is crucial in this detection via its expression patterns. [ 31 ] Zhang et al. demonstrated an INH-OR gate cascade for this purpose, [ 32 ] Yue et al. used an AND gate to construct a system with two miRNA inputs and a quantum dot photoluminescence output, [ 33 ] and Peng et al. constructed an AND gate-based dual-input system for the simultaneous detection of miRNAs from tumor cells. [ 34 ]
Akkaya et al. illustrated the application of a logic gate to photodynamic therapy. A BODIPY dye attached to a crown ether and two pyridyl groups separated by spacers works as an AND logic gate: the molecule acts as a photodynamic agent upon irradiation at 660 nm under conditions of relatively high sodium and proton concentrations, converting triplet oxygen to cytotoxic singlet oxygen. This prototypical example exploits the higher sodium levels and lower pH of tumor tissue compared to normal cells. When these two cancer-related cellular parameters are met, a change is observed in the absorbance spectrum. [ 35 ]
The concept of DNA computing arose from the need for greater storage density as data volumes increase. Theoretically, a gram of single-stranded DNA can store over 400 exabytes of data at a density of two bits per nucleotide. [ 36 ] Leonard Adleman is credited with establishing the field in 1994. [ 37 ] Recently, molecular logic gate systems have been utilized in DNA computing models. [ 38 ]
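The quoted density follows from simple arithmetic: two bits per nucleotide and roughly 330 Da per single-stranded nucleotide (an assumed average, used here for illustration) give on the order of 450 exabytes per gram.

```python
AVOGADRO = 6.022e23
AVG_NT_MASS_DA = 330.0   # assumed average mass of one ssDNA nucleotide, g/mol

def storage_exabytes(grams, bits_per_nt=2.0):
    """Theoretical capacity of ssDNA at two bits per nucleotide."""
    nucleotides = grams / AVG_NT_MASS_DA * AVOGADRO
    bytes_total = nucleotides * bits_per_nt / 8.0
    return bytes_total / 1e18

print(f"{storage_exabytes(1.0):.0f} EB per gram")  # ~456 EB, i.e. "over 400"
```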
Massey et al . constructed photonic DNA molecular logic circuits using cascades of AND, OR, NAND, and NOR molecular logic gates. [ 39 ] They used lanthanide complexes as fluorescent markers, and their luminescent outputs were detected by FRET -based devices at the terminals of DNA strands. Works by Campbell et al. on demonstrating NOT, AND, OR, and XNOR logic systems based on DNA crossover tiles, [ 40 ] Bader et al . on manipulating the DNA G-quadruplex structure to realize YES, AND, and OR logic operations, [ 41 ] and Chatterjee and coworkers on constructing logic gates using reactive DNA hairpins on DNA origami surfaces are some examples of logic gate-based DNA computing. [ 42 ]
Nanorobots have the potential to transform drug delivery processes and biological computing . [ 43 ] Llopis-Lorente et al . developed a nanorobot that can perform logic operations and process information on glucose and urea . [ 44 ] Thubagere et al . designed a DNA molecular nanorobot capable of sorting chemical cargo. The system could work without additional power as the robot was capable of walking across the DNA origami surface on its two feet. It also had an arm to transport cargo. [ 45 ]
Margulies et al. demonstrated molecular sequential logic with a molecular keypad lock resembling the processing capabilities of an electronic security device, effectively incorporating several interconnected AND logic gates in parallel. The molecule mimics the electronic keypad of an automated teller machine. The output signals depend not only on the presence of the inputs but also on their order; i.e., the correct password must be entered. The molecule was designed using pyrene and fluorescein fluorophores connected by a siderophore that binds Fe(III), and the acidity of the solution changes the fluorescence properties of the fluorescein fluorophore. [ 46 ]
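The keypad lock is an instance of sequential logic: the output depends on the order of the inputs, not merely on their presence, like a small finite-state machine. The sketch below captures that abstraction; the input names and password sequence are invented for illustration and do not correspond to the actual chemical inputs.

```python
def keypad_lock(inputs, password=("Fe3+", "base", "UV")):
    """Output 1 (fluorescence, 'unlocked') only if the inputs arrive
    in exactly the password order; same inputs in another order give 0."""
    return int(tuple(inputs) == password)

print(keypad_lock(["Fe3+", "base", "UV"]))   # 1 -- correct order
print(keypad_lock(["UV", "base", "Fe3+"]))   # 0 -- same inputs, wrong order
```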
Molecular logic gate systems can theoretically overcome the problems that arise as semiconductors approach nano-dimensions. Molecular logic gates are more versatile than their silicon counterparts, with phenomena such as superposed logic unavailable to semiconductor electronics. [ 24 ] Dry molecular gates, such as the one demonstrated by Avouris and colleagues, are possible substitutes for semiconductor devices owing to their small size, similar infrastructure, and data processing abilities. Avouris revealed a NOT logic gate composed of a bundle of carbon nanotubes doped differently in adjoining regions, creating two complementary field-effect transistors; the bundle operates as a NOT logic gate only when the required conditions are met. [ 47 ] | https://en.wikipedia.org/wiki/Molecular_logic_gate |
Molecular machines are a class of molecules typically described as an assembly of a discrete number of molecular components intended to produce mechanical movements in response to specific stimuli, mimicking macroscopic devices such as switches and motors. Naturally occurring or biological molecular machines are responsible for vital living processes such as DNA replication and ATP synthesis. Kinesins and ribosomes are examples of molecular machines, and they often take the form of multi-protein complexes. For the last several decades, scientists have attempted, with varying degrees of success, to miniaturize machines found in the macroscopic world. The first example of an artificial molecular machine (AMM) was reported in 1994, featuring a rotaxane with a ring and two different possible binding sites. In 2016 the Nobel Prize in Chemistry was awarded to Jean-Pierre Sauvage, Sir J. Fraser Stoddart, and Bernard L. Feringa for the design and synthesis of molecular machines.
AMMs have diversified rapidly over the past few decades and their design principles, properties, and characterization methods have been outlined more clearly. A major starting point for the design of AMMs is to exploit the existing modes of motion in molecules, such as rotation about single bonds or cis-trans isomerization. Different AMMs are produced by introducing various functionalities, such as the introduction of bistability to create switches. A broad range of AMMs has been designed, featuring different properties and applications; some of these include molecular motors, switches, and logic gates. A wide range of applications has been demonstrated for AMMs, including those integrated into polymeric, liquid crystal, and crystalline systems for varied functions (such as materials research, homogenous catalysis and surface chemistry).
Several definitions describe a "molecular machine" as a class of molecules typically described as an assembly of a discrete number of molecular components intended to produce mechanical movements in response to specific stimuli. The expression is often applied more generally to molecules that simply mimic functions occurring at the macroscopic level. [ 1 ] A few prime requirements for a molecule to be considered a "molecular machine" are: the presence of moving parts, the ability to consume energy, and the ability to perform a task. [ 2 ] Molecular machines differ from other stimuli-responsive compounds that can produce motion (such as cis - trans isomers) in their relatively larger amplitude of movement (potentially due to chemical reactions) and the presence of a clear external stimulus to regulate the movements (as compared to random thermal motion). [ 1 ] Piezoelectric, magnetostrictive, and other materials that produce movement due to external stimuli on a macro-scale are generally not included since, despite the molecular origin of the motion, the effects are not usable on the molecular scale.
This definition generally applies to synthetic molecular machines, which have historically gained inspiration from the naturally occurring biological molecular machines (also referred to as "nanomachines"). Biological machines are considered to be nanoscale devices (such as molecular proteins ) in a living system that convert various forms of energy to mechanical work in order to drive crucial biological processes such as intracellular transport , muscle contractions , ATP generation and cell division . [ 3 ] [ 4 ]
What would be the utility of such machines? Who knows? I cannot see exactly what would happen, but I can hardly doubt that when we have some control of the arrangement of things on a molecular scale we will get an enormously greater range of possible properties that substances can have, and of the different things we can do.
Biological molecular machines have been known and studied for years given their vital role in sustaining life, and have served as inspiration for synthetically designed systems with similar useful functionality. [ 3 ] [ 4 ] The advent of conformational analysis, the study of conformers to analyze complex chemical structures, in the 1950s gave rise to the idea of understanding and controlling relative motion within molecular components for further applications. This led to the design of "proto-molecular machines" featuring conformational changes such as the cog-wheeling of aromatic rings in triptycenes. [ 6 ] By 1980, scientists could achieve desired conformations using external stimuli and utilize this for different applications. A major example is the design of a photoresponsive crown ether containing an azobenzene unit, which could switch between cis and trans isomers on exposure to light and hence tune the cation-binding properties of the ether. [ 7 ] Earlier, in his seminal 1959 lecture There's Plenty of Room at the Bottom, Richard Feynman had alluded to the idea and applications of molecular devices designed artificially by manipulating matter at the atomic level. [ 5 ] This was further substantiated by Eric Drexler during the 1970s, who developed ideas based on molecular nanotechnology such as nanoscale "assemblers", [ 8 ] though their feasibility was disputed. [ 9 ]
Though these events served as inspiration for the field, the actual breakthrough in practical approaches to synthesizing artificial molecular machines (AMMs) came in 1991 with the invention of a "molecular shuttle" by Sir Fraser Stoddart. [ 10 ] Building upon the assembly of mechanically interlocked molecules such as catenanes and rotaxanes developed by Jean-Pierre Sauvage in the early 1980s, [ 11 ] [ 12 ] this shuttle features a rotaxane with a ring that can move across an "axle" between two ends or possible binding sites (hydroquinone units). This design realized well-defined motion of a molecular unit across the length of the molecule for the first time. [ 6 ] In 1994, an improved design allowed the motion of the ring to be controlled by pH variation or electrochemical methods, making it the first example of an AMM. Here the two binding sites are a benzidine and a biphenol unit; the cationic ring typically prefers to stay over the benzidine ring, but moves over to the biphenol group when the benzidine is protonated at low pH or electrochemically oxidized. [ 13 ] In 1998, a study captured the rotary motion of a decacyclene molecule on a copper surface using a scanning tunneling microscope. [ 14 ] Over the following decade, a broad variety of AMMs responding to various stimuli were invented for different applications. [ 15 ] [ 16 ] In 2016, the Nobel Prize in Chemistry was awarded to Sauvage, Stoddart, and Bernard L. Feringa for the design and synthesis of molecular machines. [ 17 ] [ 18 ]
Over the past few decades, AMMs have diversified rapidly and their design principles, [ 2 ] properties, [ 19 ] and characterization methods [ 20 ] have been outlined more clearly. A major starting point for the design of AMMs is to exploit the existing modes of motion in molecules. [ 2 ] For instance, single bonds can be visualized as axes of rotation, [ 21 ] as can metallocene complexes. [ 22 ] Bending or V-like shapes can be achieved by incorporating double bonds that undergo cis-trans isomerization in response to certain stimuli (typically irradiation with a suitable wavelength), as seen in numerous designs consisting of stilbene and azobenzene units. [ 23 ] Similarly, ring-opening and -closing reactions, such as those seen for spiropyran and diarylethene, can also produce curved shapes. [ 24 ] Another common mode of movement is the circumrotation of rings relative to one another, as observed in mechanically interlocked molecules (primarily catenanes). While this type of rotation cannot be accessed beyond the molecule itself (because the rings are confined within one another), rotaxanes can overcome this, as their rings can undergo translational movement along a dumbbell-like axis. [ 25 ] Another line of AMMs consists of biomolecules such as DNA and proteins as part of their design, making use of phenomena like protein folding and unfolding. [ 26 ] [ 27 ]
AMM designs have diversified significantly since the early days of the field. A major route is the introduction of bistability to produce molecular switches, featuring two distinct configurations for the molecule to convert between. This has been perceived as a step forward from the original molecular shuttle, which consisted of two identical sites for the ring to move between without any preference, in a manner analogous to the ring flip in an unsubstituted cyclohexane. If these two sites differ in features like electron density, this can give rise to weak or strong recognition sites as in biological systems; such AMMs have found applications in catalysis and drug delivery. This switching behavior has been further optimized to harness useful work that is otherwise lost when a typical switch returns to its original state.
Inspired by the use of kinetic control to produce work in natural processes, molecular motors are designed to have a continuous energy influx to keep them away from equilibrium to deliver work. [ 2 ] [ 1 ]
Various energy sources are employed to drive molecular machines today, but this was not the case during the early years of AMM development. Though the movements in AMMs were regulated relative to the random thermal motion generally seen in molecules, they could not be controlled or manipulated as desired. This led to the addition of stimuli-responsive moieties in AMM design, so that externally applied non-thermal sources of energy could drive molecular motion and hence allow control over the properties. Chemical energy (or "chemical fuels") was an attractive option at the beginning, given the broad array of reversible chemical reactions (heavily based on acid-base chemistry) for switching molecules between different states. [ 28 ] However, this approach raises the practical problem of regulating the delivery of the chemical fuel and removing the waste generated in order to maintain the efficiency of the machine, as biological systems do. Though some AMMs have found ways to circumvent this, [ 29 ] waste-free reactions such as those based on electron transfer or isomerization have more recently gained attention (for example, redox-responsive viologens). Eventually, several different forms of energy (electric, [ 30 ] magnetic, [ 31 ] optical [ 32 ] and so on) have become the primary energy sources used to power AMMs, even producing autonomous systems such as light-driven motors. [ 33 ]
A broad variety of AMMs, including molecular motors, switches, shuttles and logic gates, has been reported. [ 19 ]
Many macromolecular machines are found within cells, often in the form of multi-protein complexes . [ 78 ] Examples of biological machines include motor proteins such as myosin , which is responsible for muscle contraction, kinesin , which moves cargo inside cells away from the nucleus along microtubules , and dynein , which moves cargo inside cells towards the nucleus and produces the axonemal beating of motile cilia and flagella . "[I]n effect, the [motile cilium] is a nanomachine composed of perhaps over 600 proteins in molecular complexes, many of which also function independently as nanomachines ... Flexible linkers allow the mobile protein domains connected by them to recruit their binding partners and induce long-range allostery via protein domain dynamics ." [ 79 ] Other biological machines are responsible for energy production, for example ATP synthase which harnesses energy from proton gradients across membranes to drive a turbine-like motion used to synthesise ATP , the energy currency of a cell. [ 80 ] Still other machines are responsible for gene expression , including DNA polymerases for replicating DNA, RNA polymerases for producing mRNA , the spliceosome for removing introns , and the ribosome for synthesising proteins . These machines and their nanoscale dynamics are far more complex than any molecular machines that have yet been artificially constructed. [ 81 ]
Biological machines have potential applications in nanomedicine. [ 82 ] For example, they could be used to identify and destroy cancer cells. [ 83 ] [ 84 ] Molecular nanotechnology is a speculative subfield of nanotechnology concerning the possibility of engineering molecular assemblers, biological machines that could re-order matter at a molecular or atomic scale. Nanomedicine would make use of such nanorobots, introduced into the body, to repair or detect damage and infections, but this is considered to be far beyond current capabilities. [ 85 ]
Advances in this area are inhibited by the lack of synthetic methods. [ 86 ] In this context, theoretical modeling has emerged as a pivotal tool to understand the self-assembly or -disassembly processes in these systems. [ 87 ] [ 88 ]
Possible applications have been demonstrated for AMMs, including those integrated into polymeric , [ 89 ] [ 90 ] liquid crystal , [ 91 ] [ 92 ] and crystalline [ 93 ] [ 94 ] systems for varied functions. Homogeneous catalysis is a prominent example, especially in areas like asymmetric synthesis , utilizing noncovalent interactions and biomimetic allosteric catalysis. [ 95 ] [ 96 ] AMMs have been pivotal in the design of several stimuli-responsive smart materials, such as 2D and 3D self-assembled materials and nanoparticle -based systems, for versatile applications ranging from 3D printing to drug delivery. [ 97 ] [ 98 ]
AMMs are gradually moving from conventional solution-phase chemistry to surfaces and interfaces. For instance, AMM-immobilized surfaces (AMMISs) are a novel class of functional materials consisting of AMMs attached to inorganic surfaces, forming features such as self-assembled monolayers; this gives rise to tunable properties such as fluorescence, aggregation and drug-release activity. [ 99 ]
Most of these "applications" remain at the proof-of-concept level. Challenges in streamlining macroscale applications include autonomous operation, the complexity of the machines, stability in the synthesis of the machines and the working conditions. [ 1 ] [ 100 ] | https://en.wikipedia.org/wiki/Molecular_machine |
In molecular biology and other fields, a molecular marker is a molecule , sampled from some source, that gives information about its source. For example, DNA is a molecular marker that gives information about the organism from which it was taken. For another example, some proteins can be molecular markers of Alzheimer's disease in a person from whom they are taken. [ 1 ] Molecular markers may be non-biological. Non-biological markers are often used in environmental studies. [ 2 ]
In genetics, a molecular marker (also known as a genetic marker ) is a fragment of DNA that is associated with a certain location within the genome . Molecular markers are used in molecular biology and biotechnology to identify a particular sequence of DNA in a pool of unknown DNA.
There are many types of genetic markers, each with particular limitations and strengths. Genetic markers fall into three different categories: "First Generation Markers", "Second Generation Markers", and "New Generation Markers". [ 3 ] These types of markers may also identify dominance and co-dominance within the genome. [ 4 ] Identifying dominance and co-dominance with a marker may help distinguish heterozygotes from homozygotes within the organism. Co-dominant markers are more beneficial because they identify more than one allele, making it possible to follow a particular trait through mapping techniques. These markers allow for the amplification of a particular sequence within the genome for comparison and analysis.
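To make the dominant/co-dominant distinction concrete, the following is a minimal sketch of genotype scoring from observed allele bands; the band labels and the logic are purely illustrative, not drawn from any particular genotyping protocol:

```python
def classify_genotype(bands):
    """Classify a co-dominant marker from the allele bands it reveals.

    A co-dominant marker shows both alleles, so heterozygotes are directly
    distinguishable from homozygotes; a dominant marker would collapse
    both genotypes into a single presence/absence signal.
    """
    return "heterozygote" if len(set(bands)) > 1 else "homozygote"

print(classify_genotype(["A1", "A1"]))  # homozygote
print(classify_genotype(["A1", "A2"]))  # heterozygote
```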
Molecular markers are effective because they identify an abundance of genetic linkage between identifiable locations within a chromosome, and the analyses can be repeated for verification. They can identify small changes within the mapping population, enabling distinctions to be made between mapping species and allowing for segregation of traits and identity. They identify particular locations on a chromosome, allowing physical maps to be created. Lastly, they can identify how many alleles an organism has for a particular trait (biallelic or polyallelic). [ 5 ]
Genomic markers, as mentioned, have particular strengths and weaknesses, so consideration and knowledge of the markers is necessary before use. For instance, a RAPD marker is dominant (identifying only one band of distinction) and its results may not be reproducible, typically owing to the conditions under which they were produced. RAPDs are also used under the assumption that two samples share the same locus when a band of the same size is produced. [ 4 ] Different markers may also require different amounts of DNA: RAPDs may need only 0.02 µg of DNA, while an RFLP marker may require 10 µg to produce identifiable results. [ 6 ] Currently, SNP markers have emerged as a promising tool in breeding programs for several crops. [ 7 ]
Molecular mapping aids in identifying the location of particular markers within the genome. Two types of maps may be created for analysis of genetic material. The first is a physical map, which identifies the chromosome a marker lies on and its position along that chromosome. The second is a linkage map, which identifies how particular genes are linked to other genes on a chromosome; it expresses distances between genes in centimorgans (cM). Co-dominant markers can be used in mapping to identify particular locations within a genome and can represent differences in phenotype. [ 8 ] Linkage of markers can help identify particular polymorphisms within the genome. These polymorphisms indicate slight changes within the genome that may represent nucleotide substitutions or rearrangements of sequence. [ 9 ] When developing a map, it is beneficial to identify several polymorphic distinctions between two species as well as similar sequences shared by them.
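As an aside on the centimorgan unit mentioned above, a standard way to turn an observed recombination fraction into a map distance is a mapping function such as Haldane's; the sketch below assumes the Haldane function, with the example value chosen arbitrarily:

```python
import math

def haldane_cM(r):
    """Map distance in centimorgans from recombination fraction r (0 <= r < 0.5)."""
    return -50.0 * math.log(1.0 - 2.0 * r)

print(round(haldane_cM(0.10), 1))  # a 10% recombination fraction is ~11.2 cM
```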
When using molecular markers to study the genetics of a particular crop, it must be remembered that markers have limitations. The genetic variability within the organism being studied should be assessed first, along with how identifiable particular genomic sequences are, near or within candidate genes. Maps can then be created to determine distances between genes and differentiation between species. [ 10 ]
Genetic markers can aid in the development of novel traits that can be put into mass production. These novel traits can be identified using molecular markers and maps. Particular traits, such as color, may be controlled by just a few genes. Qualitative traits (controlled by only one or a few genes), such as color, can be identified using marker-assisted selection (MAS). Once a desired marker is found, it can be followed across different filial generations. An identifiable marker may help track particular traits of interest when crossing between different genera or species, with the hope of transferring particular traits to the offspring.
One example of using molecular markers to identify a particular trait within a plant is Fusarium head blight in wheat. Fusarium head blight can be a devastating disease in cereal crops, but certain varieties or their offspring may be resistant to the disease. This resistance is conferred by a particular gene that can be followed using MAS (marker-assisted selection) and QTL (quantitative trait loci) analysis. [ 11 ] QTLs identify particular variants within phenotypes or traits and typically indicate where the GOI (gene of interest) is located. Once the cross has been made, offspring can be sampled and evaluated to determine which of them inherited the trait and which did not. This type of selection is becoming more beneficial to breeders and farmers because it reduces the amount of herbicides, fungicides and insecticides that need to be applied to crops. [ 11 ] Another way to insert a GOI is through mechanical or bacterial transmission. This is more difficult but may save time and money.
Biochemical markers are generally protein markers, based on changes in the sequence of amino acids in a protein molecule. The most important protein markers are alloenzymes , variant forms of an enzyme that are coded by different alleles at the same locus. Because alloenzymes differ from species to species, they are used to detect variation. These markers are type I markers.
| https://en.wikipedia.org/wiki/Molecular_marker |
Molecular mechanics uses classical mechanics to model molecular systems. The Born–Oppenheimer approximation is assumed valid and the potential energy of all systems is calculated as a function of the nuclear coordinates using force fields . Molecular mechanics can be used to study molecular systems ranging in size and complexity from small to large biological systems or material assemblies with many thousands to millions of atoms.
All-atomistic molecular mechanics methods have the following properties: each atom is simulated as a single particle; each particle is assigned a radius (typically the van der Waals radius), a polarizability, and a constant net charge (generally derived from quantum calculations and/or experiment); and bonded interactions are treated as springs with equilibrium distances equal to experimental or calculated bond lengths.
Variants on this theme are possible. For example, many simulations have historically used a united-atom representation in which each terminal methyl group or intermediate methylene unit was considered one particle, and large protein systems are commonly simulated using a bead model that assigns two to four particles per amino acid .
The following functional abstraction, termed an interatomic potential function or force field in chemistry, calculates the molecular system's potential energy (E) in a given conformation as a sum of individual energy terms:

E = E_covalent + E_noncovalent
where the components of the covalent and noncovalent contributions are given by the following summations:

E_covalent = E_bond + E_angle + E_dihedral

E_noncovalent = E_electrostatic + E_vanderWaals
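As an illustration of how such a sum of energy terms can be assembled, the sketch below uses the functional forms discussed in the following paragraphs (harmonic bonds and angles, periodic torsions, 6–12 Lennard-Jones plus Coulomb). The constants, units and tuple layouts are assumptions for illustration, not those of any published force field:

```python
import numpy as np

def harmonic(x, x0, k):
    # Generic harmonic term; some force fields include a factor of 1/2.
    return k * (x - x0) ** 2

def total_energy(bonds, angles, dihedrals, pairs):
    """Sum covalent (bond, angle, dihedral) and noncovalent (LJ + Coulomb) terms.

    Each argument is a list of precomputed per-term tuples; the constants
    and units here are illustrative, not those of any particular force field.
    """
    e_bond = sum(harmonic(r, r0, kb) for r, r0, kb in bonds)
    e_angle = sum(harmonic(th, th0, ka) for th, th0, ka in angles)
    # Periodic torsion term: k * (1 + cos(n*phi - phase))
    e_dihedral = sum(k * (1 + np.cos(n * phi - phase))
                     for phi, n, phase, k in dihedrals)
    # Nonbonded: 6-12 Lennard-Jones plus Coulomb (kcal/mol, charges in e, r in Å)
    e_nonbonded = sum(4 * eps * ((sig / r) ** 12 - (sig / r) ** 6)
                      + 332.06 * qi * qj / r
                      for r, eps, sig, qi, qj in pairs)
    return (e_bond + e_angle + e_dihedral) + e_nonbonded
```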
The exact functional form of the potential function , or force field, depends on the particular simulation program being used. Generally the bond and angle terms are modeled as harmonic potentials centered around equilibrium bond-length values derived from experiment or from theoretical calculations of electronic structure performed with ab initio quantum chemistry software such as Gaussian . For accurate reproduction of vibrational spectra, the Morse potential can be used instead, at additional computational cost. The dihedral or torsional terms typically have multiple minima and thus cannot be modeled as harmonic oscillators, though their specific functional form varies with the implementation. This class of terms may include improper dihedral terms, which function as correction factors for out-of-plane deviations (for example, they can be used to keep benzene rings planar, or to correct the geometry and chirality of tetrahedral atoms in a united-atom representation).
The non-bonded terms are much more computationally costly to calculate in full, since a typical atom is bonded to only a few of its neighbors but interacts with every other atom in the molecule. Fortunately the van der Waals term falls off rapidly. It is typically modeled using a 6–12 Lennard-Jones potential , which means that attractive forces fall off with distance as r −6 and repulsive forces as r −12 , where r represents the distance between two atoms. The r −12 repulsive part is, however, unphysical, because true exchange repulsion rises exponentially with decreasing distance. Description of van der Waals forces by the Lennard-Jones 6–12 potential therefore introduces inaccuracies, which become significant at short distances. [ 1 ] Generally a cutoff radius is used to speed up the calculation, so that atom pairs whose distances are greater than the cutoff have a van der Waals interaction energy of zero.
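A minimal sketch of the 6–12 Lennard-Jones term with the hard cutoff just described; parameter names are illustrative:

```python
def lj_energy(r, epsilon, sigma, r_cut):
    """6-12 Lennard-Jones energy with a hard cutoff.

    Pairs beyond r_cut contribute zero; note that this bare truncation
    leaves a small discontinuity in the energy at r_cut.
    """
    if r >= r_cut:
        return 0.0
    sr6 = (sigma / r) ** 6                    # attractive r**-6 building block
    return 4.0 * epsilon * (sr6 * sr6 - sr6)  # r**-12 repulsion minus r**-6 attraction
```

In this form the potential minimum lies at r = 2^(1/6)·σ with depth −ε, which is a convenient check when testing parameters.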
The electrostatic terms are notoriously difficult to calculate well because they do not fall off rapidly with distance, and long-range electrostatic interactions are often important features of the system under study (especially for proteins ). The basic functional form is the Coulomb potential , which falls off only as r −1 . A variety of methods are used to address this problem, the simplest being a cutoff radius similar to that used for the van der Waals terms. However, this introduces a sharp discontinuity between atoms inside and atoms outside the radius. Switching or scaling functions that modulate the apparent electrostatic energy are somewhat more accurate methods; they multiply the calculated energy by a smoothly varying scaling factor that goes from 1 at the inner cutoff radius to 0 at the outer one. Other more sophisticated but computationally intensive methods are particle mesh Ewald (PME) and the fast multipole method .
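A sketch of a switched Coulomb term; the smoothstep polynomial below is one common choice of smoothly varying scaling function, assumed here for illustration rather than taken from any specific simulation package:

```python
def switch(r, r_on, r_off):
    """Scaling factor that goes smoothly from 1 at r_on to 0 at r_off."""
    if r <= r_on:
        return 1.0
    if r >= r_off:
        return 0.0
    x = (r_off - r) / (r_off - r_on)   # 1 at r_on, 0 at r_off
    return x * x * (3.0 - 2.0 * x)     # smoothstep polynomial, C1-continuous

def coulomb_switched(qi, qj, r, r_on, r_off, k=332.06):
    # Coulomb term (kcal/mol with charges in e, distances in Å) times the switch.
    return k * qi * qj / r * switch(r, r_on, r_off)
```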
In addition to the functional form of each energy term, a useful energy function must be assigned parameters for force constants, van der Waals multipliers, and other constant terms. These terms, together with the equilibrium bond, angle, and dihedral values, partial charge values, atomic masses and radii, and energy function definitions, are collectively termed a force field . Parameterization is typically done through agreement with experimental values and theoretical calculation results. Norman L. Allinger 's force field, in its latest MM4 version, calculates heats of formation for hydrocarbons with an RMS error of 0.35 kcal/mol, vibrational spectra with an RMS error of 24 cm −1 , rotational barriers with an RMS error of 2.2°, C−C bond lengths within 0.004 Å and C−C−C angles within 1°. [ 2 ] Later MM4 versions also cover compounds with heteroatoms such as aliphatic amines. [ 3 ]
Each force field is parameterized to be internally consistent, but the parameters are generally not transferable from one force field to another.
The main use of molecular mechanics is in the field of molecular dynamics . This uses the force field to calculate the forces acting on each particle and a suitable integrator to model the dynamics of the particles and predict trajectories. Given enough sampling and subject to the ergodic hypothesis , molecular dynamics trajectories can be used to estimate thermodynamic parameters of a system or probe kinetic properties, such as reaction rates and mechanisms.
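A minimal velocity Verlet integrator of the kind used to propagate such trajectories; `force(x)` is assumed to return the negative gradient of the potential, and the sketch omits thermostats, constraints and neighbor lists:

```python
import numpy as np

def velocity_verlet(x, v, force, mass, dt, n_steps):
    """Propagate positions and velocities with the velocity Verlet scheme.

    force(x) must return the negative gradient of the potential energy;
    units and step-size control are left to the caller.
    """
    f = force(x)
    trajectory = [x.copy()]
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * (f / mass) * dt ** 2    # position update
        f_new = force(x)
        v = v + 0.5 * (f + f_new) / mass * dt          # velocity update
        f = f_new
        trajectory.append(x.copy())
    return np.array(trajectory)
```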
Molecular mechanics is also used within QM/MM , which allows the study of proteins and enzyme kinetics. The system is divided into two regions: one is treated with quantum mechanics (QM), allowing the breaking and formation of bonds, while the rest of the protein is modeled using molecular mechanics (MM). MM alone does not allow the study of enzyme mechanisms, which QM does. QM also produces more exact energy calculations of the system, although it is much more computationally expensive.
Another application of molecular mechanics is energy minimization, whereby the force field is used as an optimization criterion. This method uses an appropriate algorithm (e.g. steepest descent ) to find the molecular structure of a local energy minimum. These minima correspond to stable conformers of the molecule (in the chosen force field) and molecular motion can be modelled as vibrations around and interconversions between these stable conformers. It is thus common to find local energy minimization methods combined with global energy optimization, to find the global energy minimum (and other low energy states). At finite temperature, the molecule spends most of its time in these low-lying states, which thus dominate the molecular properties. Global optimization can be accomplished using simulated annealing , the Metropolis algorithm and other Monte Carlo methods , or using different deterministic methods of discrete or continuous optimization. While the force field represents only the enthalpic component of free energy (and only this component is included during energy minimization), it is possible to include the entropic component through the use of additional methods, such as normal mode analysis.
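A bare-bones steepest descent minimizer illustrating the idea; the fixed step size is a deliberate simplification (production minimizers use line searches or adaptive steps):

```python
import numpy as np

def steepest_descent(x, grad, step=1e-3, tol=1e-6, max_iter=100_000):
    """Follow the negative energy gradient until the force is nearly zero.

    grad(x) is the gradient of the force-field energy with respect to
    the coordinates x; convergence diagnostics are omitted.
    """
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # near-zero gradient: a local minimum
            break
        x = x - step * g
    return x
```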
Molecular mechanics potential energy functions have been used to calculate binding constants, [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] protein folding kinetics, [ 9 ] protonation equilibria, [ 10 ] active site coordinates , [ 6 ] [ 11 ] and to design binding sites . [ 12 ]
In molecular mechanics, several ways exist to define the environment surrounding a molecule or molecules of interest. A system can be simulated in vacuum (termed a gas-phase simulation) with no surrounding environment, but this is usually undesirable because it introduces artifacts in the molecular geometry, especially in charged molecules. Surface charges that would ordinarily interact with solvent molecules instead interact with each other, producing molecular conformations that are unlikely to be present in any other environment. The most accurate way to solvate a system is to place explicit water molecules in the simulation box with the molecules of interest and treat the water molecules as interacting particles like those in the other molecule(s). A variety of water models exist with increasing levels of complexity, representing water as a simple hard sphere (a united-atom model), as three separate particles with fixed bond angle, or even as four or five separate interaction centers to account for unpaired electrons on the oxygen atom. As water models grow more complex, related simulations grow more computationally intensive. A compromise method has been found in implicit solvation , which replaces the explicitly represented water molecules with a mathematical expression that reproduces the average behavior of water molecules (or other solvents such as lipids). This method is useful to prevent artifacts that arise from vacuum simulations and reproduces bulk solvent properties well, but cannot reproduce situations in which individual water molecules create specific interactions with a solute that are not well captured by the solvent model, such as water molecules that are part of the hydrogen bond network within a protein. [ 13 ]
A wide range of software packages implement these methods; any list of them is necessarily incomplete. | https://en.wikipedia.org/wiki/Molecular_mechanics |
Molecular medicine is a broad field, where physical, chemical, biological, bioinformatics and medical techniques are used to describe molecular structures and mechanisms, identify fundamental molecular and genetic errors of disease, and to develop molecular interventions to correct them. [ 1 ] The molecular medicine perspective emphasizes cellular and molecular phenomena and interventions rather than the previous conceptual and observational focus on patients and their organs. [ 2 ]
In November 1949, with the seminal paper, " Sickle Cell Anemia, a Molecular Disease ", [ 3 ] in Science magazine, Linus Pauling , Harvey Itano and their collaborators laid the groundwork for establishing the field of molecular medicine. [ 4 ] In 1956, Roger J. Williams wrote Biochemical Individuality , [ 5 ] a prescient book about genetics, prevention and treatment of disease on a molecular basis, and nutrition which is now variously referred to as individualized medicine [ 6 ] and orthomolecular medicine . [ 7 ] Another paper in Science by Pauling in 1968, [ 8 ] introduced and defined this view of molecular medicine that focuses on natural and nutritional substances used for treatment and prevention.
Published research and progress was slow until the 1970s' "biological revolution" that introduced many new techniques and commercial applications. [ 9 ]
Some researchers treat molecular surgery as a distinct branch of molecular medicine. [ 10 ]
Molecular medicine is a new scientific discipline in European universities . [ 11 ] Combining contemporary medical studies with the field of biochemistry , it offers a bridge between the two subjects. At present only a handful of universities offer the course to undergraduates . With a degree in this discipline, the graduate is able to pursue a career in medical sciences, scientific research, laboratory work, and postgraduate medical degrees.
Core subjects are similar to biochemistry courses and typically include gene expression , research methods, proteins , cancer research, immunology , biotechnology and many more. In some universities molecular medicine is combined with another discipline such as chemistry , functioning as an additional study to enrich the undergraduate program. | https://en.wikipedia.org/wiki/Molecular_medicine |
Molecular memory is a term for data storage technologies that use molecular species as the data storage element, rather than e.g. circuits , magnetics , inorganic materials or physical shapes. [ 1 ] The molecular component can be described as a molecular switch , and may perform this function by any of several mechanisms, including charge storage, photochromism , or changes in capacitance . In a perfect molecular memory device, each individual molecule contains a bit of data, leading to massive data capacity. However, practical devices are more likely to use large numbers of molecules for each bit, in the manner of 3D optical data storage (many examples of which can be considered molecular memory devices). The term "molecular memory" is most often used to mean very fast, electronically addressed solid-state data storage, as is the term computer memory . At present, molecular memories are still found only in laboratories.
One approach to molecular memories is based on special compounds such as porphyrin -based polymers which are capable of storing electric charge . Once a certain voltage threshold is achieved, the material oxidizes , releasing an electric charge. The process is reversible, in effect creating an electric capacitor . The properties of the material allow for a much greater capacitance per unit area than with conventional DRAM memory, thus potentially leading to smaller and cheaper integrated circuits .
Several universities and a number of companies ( Hewlett-Packard , ZettaCore ) have announced work on molecular memories, which some hope will supplant DRAM memory as the lowest cost technology for high-speed computer memory . NASA is also supporting research on non-volatile molecular memories. [ 2 ]
In 2018, researchers from the University of Jyväskylä in Finland developed a molecular memory which, at extremely low temperatures, can retain the direction of a magnetic field for long periods after the field is switched off; this could aid in enhancing the storage capacity of hard disk drives without enlarging their physical size. [ 3 ] | https://en.wikipedia.org/wiki/Molecular_memory |
Molecular mimicry is the theoretical possibility that sequence similarities between foreign and self-peptides are enough to result in the cross-activation of autoreactive T or B cells by pathogen-derived peptides . Despite the prevalence of several peptide sequences which can be both foreign and self in nature, just a few crucial residues can activate a single antibody or TCR ( T cell receptor ). This highlights the importance of structural homology in the theory of molecular mimicry. Upon activation, these "peptide mimic" specific T or B cells can cross-react with self-epitopes, thus leading to tissue pathology ( autoimmunity ). [ 1 ] Molecular mimicry is one of several ways in which autoimmunity can be evoked. A molecular mimicking event is more than an epiphenomenon despite its low probability, and these events have serious implications in the onset of many human autoimmune disorders.
One possible cause of autoimmunity, the failure to recognize self antigens as "self", is a loss of immunological tolerance , the ability of the immune system to discriminate between self and non-self. Other possible causes include mutations governing programmed cell death or environmental products that injure target tissues, thus causing a release of immunostimulatory alarm signals. [ 2 ] [ 3 ] Growth in the field of autoimmunity has resulted in more frequent diagnosis of autoimmune diseases. The resulting data show that autoimmune diseases affect approximately 1 in 31 people within the general population. [ 4 ] Growth has also led to a greater characterization of what autoimmunity is and how it can be studied and treated. With more research comes growth in the study of the several different ways in which autoimmunity can occur, one of which is molecular mimicry. How pathogens come to share amino acid sequences or homologous three-dimensional structures with immunodominant epitopes remains a mystery.
Tolerance is a fundamental property of the immune system . Tolerance involves non-self discrimination which is the ability of the normal immune system to recognize and respond to foreign antigens, but not self antigens. Autoimmunity is evoked when this tolerance to self antigen is broken. [ 5 ] Tolerance within an individual is normally evoked as a fetus . This is known as maternal-fetal tolerance where B cells expressing receptors specific for a particular antigen enter the circulation of the developing fetus via the placenta. [ 6 ]
After pre-T cells leave the bone marrow where they are synthesized, they move to the thymus , where the maturation of T cells occurs. It is here that the first wave of T cell tolerance arises. Within the thymus, pre-T cells encounter various self and foreign antigens that enter the thymus from peripheral sites via the circulatory system . There, pre-T cells undergo a selection process in which they must be positively selected and must avoid negative selection. T cells that bind with low avidity to self-MHC receptors are positively selected for maturation; those that do not die by apoptosis . Cells that survive positive selection but bind strongly to self-antigens are negatively selected by active induction of apoptosis. This negative selection is known as clonal deletion , one of the mechanisms of T cell tolerance. Approximately 99 percent of pre-T cells within the thymus are negatively selected; only approximately 1 percent are positively selected for maturity. [ 7 ]
However, there is only a limited repertoire of antigens that T cells can encounter within the thymus. T cell tolerance must therefore also occur within the periphery, after the induction of T cell tolerance within the thymus, as a more diverse group of antigens can be encountered in peripheral tissues. This same positive and negative selection mechanism, but in peripheral tissues, is known as clonal anergy . The mechanism of clonal anergy is important to maintain tolerance to many autologous antigens. Active suppression is the other known mechanism of T cell tolerance. Active suppression involves the injection of large amounts of foreign antigen in the absence of an adjuvant , which leads to a state of unresponsiveness. This unresponsive state is then transferred from the injected donor to a naïve recipient to induce a state of tolerance within the recipient. [ 8 ]
Tolerance is also produced in B cells, through various processes. Just as in T cells, clonal deletion and clonal anergy can physically eliminate autoreactive B cell clones. Receptor editing is another mechanism for B cell tolerance. This involves the reactivation or maintenance of V(D)J recombination in the cell, which leads to the expression of novel receptor specificities through V region gene rearrangements that create variation in the heavy and light immunoglobulin (Ig) chains. [ 8 ]
Autoimmunity can thus be defined simply as exceptions to the tolerance "rules." By doing this, an immune response is generated against self-tissue and cells. These mechanisms are known by many to be intrinsic. However, there are pathogenic mechanisms for the generation of autoimmune disease. Pathogens can induce autoimmunity by polyclonal activation of B or T cells, or increased expression of major histocompatibility complex (MHC) class I or II molecules. There are several ways in which a pathogen can cause an autoimmune response. A pathogen may contain a protein that acts as a mitogen to encourage cell division, thus causing more B or T cell clones to be produced. Similarly, a pathogenic protein may act as a superantigen which causes rapid polyclonal activation of B or T cells. Pathogens can also cause the release of cytokines resulting in the activation of B or T cells, or they can alter macrophage function. Finally, pathogens may also expose B or T cells to cryptic determinants, which are self antigen determinants that have not been processed and presented sufficiently to tolerize the developing T cells in the thymus and are presented at the periphery where the infection occurs. [ 9 ]
Molecular mimicry has been characterized as recently as the 1970s as another mechanism by which a pathogen can generate autoimmunity. Molecular mimicry is defined as similar structures shared by molecules from dissimilar genes or by their protein products. Either the linear amino acid sequence or the conformational fit of the immunodominant epitope may be shared between the pathogen and host. This is also known as " cross-reactivity " between self antigen of the host and immunodominant epitopes of the pathogen. An autoimmune response is then generated against the epitope. Due to similar sequence homology in the epitope between the pathogen and the host, cells and tissues of the host associated with the protein are destroyed as a result of the autoimmune response. [ 9 ]
The prerequisite for molecular mimicry to occur is thus the sharing of the immunodominant epitope between the pathogen and the immunodominant self sequence that is generated by a cell or tissue. However, given the amino acid variation between different proteins, molecular mimicry should be improbable from a statistical standpoint. Assuming five to six amino acid residues are needed to induce a monoclonal antibody response, the probability that two proteins are identical over six consecutive residues, with 20 possible amino acids at each position, is 1 in 20^6, or 1 in 64,000,000. Nevertheless, many molecular mimicry events have been documented. [ 10 ] This is because similarity is also assessed between whole human and pathogen proteomes, rather than between individually selected human and pathogen proteins.
To determine which epitopes are shared between pathogen and human, large protein databases are used. The largest protein database in the world, the UniProt database (formerly SwissProt), has shown reports of molecular mimicry becoming more common as the database expands. The database currently contains 86.6 × 10^9 residues ( https://www.ebi.ac.uk/uniprot/TrEMBLstats , August 2, 2023). Assuming 20 different amino acids are randomly present at every position, the probability of finding a perfect match to a five-amino-acid-long motif is 1 in 3.2 × 10^6. In other words, within two proteomes containing a total of 3.2 × 10^6 amino acids of random sequence, each five-amino-acid motif is expected to occur once on average. The distribution of matches, however, forms a bell curve peaked at a single match per motif: while most motifs occur about once, some motifs are not shared at all, and others are shared more than once, with progressively fewer motifs as the number of shares increases. Assuming the SwissProt database has the same structure as in this example, an average motif is expected to match 86.6 × 10^9 / (3.2 × 10^6) ≈ 27 × 10^3 times. This number of matches is huge and will increase as the database expands; the expectation was only five when the database contained 1.5 × 10^7 residues. As a result, there are overrepresented sequence motifs in the database. For example, the QKRAA sequence is an amino acid motif in the third hypervariable region of HLA-DRB1*04:01. This motif is also expressed on numerous proteins of other organisms, such as gp110 of the Epstein-Barr virus and in E. coli , and it occurs 37 times in the database. [ 11 ] This suggests that the linear amino acid sequence may not be the sole underlying cause of molecular mimicry, since such sequences can be found numerous times within the database. The possibility exists, then, for similarity in three-dimensional structure between two peptides to be recognized by T cell clones even when the amino acid sequences vary. This, therefore, uncovers a flaw of such large databases: they may hint at relationships between epitopes, but the important three-dimensional structure cannot yet be searched for in such a database. [ 12 ]
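The expectation arithmetic above can be reproduced directly; the numbers are those quoted in the text:

```python
# Numbers as quoted in the text.
n_residues = 86.6e9            # residues currently in the database
n_motifs = 20 ** 5             # distinct 5-residue motifs = 3.2e6
print(n_residues / n_motifs)   # ~27,000 expected matches per motif
print(1.5e7 / n_motifs)        # ~5 expected matches for the older, smaller database
```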
Despite the absence of obvious amino acid sequence similarity between pathogen and host factors, structural studies have revealed that mimicry can still occur at the structural level. In some cases, pathogenic mimics can possess a structural architecture that differs markedly from that of the functional homologues. Therefore, proteins of dissimilar sequence may have a common structure which elicits an autoimmune response. It has been hypothesized that these virulent proteins display their mimicry through molecular surfaces that mimic host protein surfaces (protein fold or three-dimensional conformation), which have been obtained by convergent evolution . It has also been theorized that these similar protein folds have been obtained by horizontal gene transfer , most likely from a eukaryotic host. This further supports the theory that microbial organisms have evolved a mechanism of concealment similar to that of higher organisms such as the African praying mantis or chameleon, which camouflage themselves to mimic their background so as not to be recognized by others. [ 13 ]
Despite the lack of sequence homology between self and foreign peptides, weak electrostatic interactions between a foreign peptide and the MHC can also mimic a self peptide and elicit an autoimmune response within the host. For example, charged residues can explain the enhanced on-rate and reduced off-rate of a particular antigen, or can contribute to a higher affinity and activity for a particular antigen that perhaps mimics that of the host. Similarly, prominent ridges on the floor of peptide-binding grooves can create C-terminal bulges in particular peptides that greatly increase the interaction between foreign and self peptide on the MHC. [ 14 ] There has likewise been evidence that even gross features such as acidic/basic and hydrophobic/hydrophilic interactions have allowed foreign peptides to interact with an antibody or with the MHC and TCR. It is now apparent that sequence similarity considerations may not be sufficient when evaluating potential mimic epitopes and the underlying mechanisms of molecular mimicry. Molecular mimicry, from these examples, has therefore been shown to occur even in the absence of any true sequence homology. [ 1 ]
There has been increasing evidence for mimicking events caused not only by amino acid similarities but also by similarities in binding motifs to the MHC. Molecular mimicry can thus occur between two recognized peptides that have similar antigenic surfaces in the absence of primary sequence homology. For example, specific single amino acid residues such as cysteine (which creates disulfide bonds) or arginine and lysine (which form multiple hydrogen bonds) can be essential for T cell cross-reactivity. These single residues may be the only residues conserved between self and foreign antigen that allow structurally similar but sequence-nonspecific peptides to bind to the MHC. [ 15 ]
Epitope spreading, also known as determinant spreading, is another common way in which autoimmunity can occur which uses the molecular mimicry mechanism. Autoreactive T cells are activated de novo by self epitopes released secondary to pathogen-specific T cell-mediated bystander damage. [ 16 ] T cell responses to progressively less dominant epitopes are activated as a consequence of the release of other antigens secondary to the destruction of the pathogen with a homologous immunodominant sequence. Thus, inflammatory responses induced by specific pathogens that trigger pro-inflammatory T h 1 responses have the ability to persist in genetically susceptible hosts. This may lead to organ-specific autoimmune disease. [ 17 ] Conversely, epitope spreading could be due to target antigens being physically linked intracellularly as members of a complex to self antigen. The result of this is an autoimmune response that is triggered by exogenous antigen that progresses to a truly autoimmune response against mimicked self antigen and other antigens. [ 18 ] From these examples, it is clear that the search for candidate mimic epitopes must extend beyond the immunodominant epitopes of a given autoimmune response. [ 1 ]
The HIV-1 virus has been shown to cause diseases of the central nervous system (CNS) in humans through molecular mimicry. HIV-1 gp41 is used to bind chemokines on the cell surface of the host so that the virion may gain entry into the host. Astrocytes are cells of the CNS that regulate the concentrations of K + and neurotransmitter entering the cerebrospinal fluid (CSF), contributing to the blood brain barrier . A twelve amino acid sequence (Leu-Gly-Ile-Trp-Gly-Cys-Ser-Gly-Lys-Leu-Ile-Cys) on gp41 of the HIV-1 virus (the immunodominant region) shows sequence homology with a twelve amino acid stretch of a protein on the surface of human astrocytes. Antibodies are produced against the HIV-1 gp41 protein. These antibodies can cross-react with astrocytes within human CNS tissue and act as autoantibodies . This contributes to many of the CNS complications found in AIDS patients. [ 19 ]
Theiler's murine encephalomyelitis virus (TMEV) leads to the development in mice of a progressive CD4 + T cell-mediated response after these cells have infiltrated the CNS. This virus has been shown to cause CNS disease in mice that resembles multiple sclerosis, an autoimmune disease in humans that results in the gradual destruction of the myelin sheath coating axons of the CNS. The TMEV mouse virus shares a thirteen amino acid sequence (His-Cys-Leu-Gly-Lys-Trp-Leu-Gly-His-Pro-Asp-Lys-Phe) (PLP ( proteolipid protein ) 139-151 epitope) with that of a human myelin-specific epitope. Bystander myelin damage is caused by virus specific T h 1 cells that cross react with this self epitope. To test the efficacy in which TMEV uses molecular mimicry to its advantage, a sequence of the human myelin-specific epitope was inserted into a non-pathogenic TMEV variant. As a result, there was a CD4 + T cell response and autoimmune demyelination was initiated by infection with a TMEV peptide ligand. [ 20 ] In humans, it has recently been shown that there are other possible targets for molecular mimicry in patients with multiple sclerosis. These involve the hepatitis B virus mimicking the human proteolipid protein (myelin protein) and the Epstein-Barr virus mimicking anti- myelin oligodendrocyte glycoprotein (contributes to a ring of myelin around blood vessels) [ 21 ] or the glial cell adhesion protein (GlialCAM) found in the CNS. [ 22 ]
Myasthenia gravis is another common autoimmune disease. This disease causes fluctuating muscle weakness and fatigue. The disease occurs due to detectable antibodies produced against the human acetylcholine receptor . The receptor contains a seven amino acid sequence (Trp-Thr-Tyr-Asp-Gly-Thr-Lys) [ 21 ] in the α-subunit that demonstrates immunological cross-reactivity with a shared immunodominant domain of gpD of the herpes simplex virus (HSV). Similar to HIV-1, gpD also aids in binding to chemokines on the cell surface of the host to gain entry into the host. Cross-reactivity of the self epitope (α-subunit of the receptor) with antibodies produced against HSV suggests that the virus is associated with the initiation of myasthenia gravis. Not only does HSV cause immunologic cross-reactivity, but the gpD peptide also competitively inhibits the binding of antibody made against the α-subunit to its corresponding peptide on the α-subunit. Despite this, an autoimmune response still occurs. This further shows an immunologically significant sequence homology to the biologically active site of the human acetylcholine receptor. [ 23 ]
There are ways in which autoimmunity caused by molecular mimicry can be avoided. Control of the initiating factor (pathogen) via vaccination seems to be the most common method to avoid autoimmunity. Inducing tolerance to the host autoantigen in this way may also be the most stable factor. The development of a downregulating immune response to the shared epitope between pathogen and host may be the best way of treating an autoimmune disease caused by molecular mimicry. [ 24 ] Alternatively, treatment with immunosuppressive drugs such as ciclosporin and azathioprine has also been used as a possible solution. However, in many cases this has been shown to be ineffective because cells and tissues have already been destroyed at the onset of the infection. [ 5 ]
The concept of molecular mimicry is a useful tool in understanding the etiology , pathogenesis , treatment, and prevention of autoimmune disorders. Molecular mimicry is, however, only one mechanism by which an autoimmune disease can occur in association with a pathogen. Understanding the mechanisms of molecular mimicry may allow future research to be directed toward uncovering the initiating infectious agent as well as recognizing the self determinant. This way, future research may be able to design strategies for treatment and prevention of autoimmune disorders. The use of transgenic models such as those used for discovery of the mimicry events leading to diseases of the CNS and muscle disorders has helped evaluate the sequence of events leading to molecular mimicry.
| https://en.wikipedia.org/wiki/Molecular_mimicry |
A molecular model is a physical model of an atomistic system that represents molecules and their processes. They play an important role in understanding chemistry and generating and testing hypotheses . The creation of mathematical models of molecular properties and behavior is referred to as molecular modeling , and their graphical depiction is referred to as molecular graphics .
The term "molecular model" refers to systems that contain one or more explicit atoms (although solvent atoms may be represented implicitly) and where nuclear structure is neglected. The electronic structure is often also omitted unless it is necessary to illustrate the function of the molecule being modeled.
Molecular models may be created for several reasons – as pedagogic tools for students or those unfamiliar with atomistic structures; as objects to generate or test theories (e.g., the structure of DNA); as analogue computers (e.g., for measuring distances and angles in flexible systems); or as aesthetically pleasing objects on the boundary of art and science.
The construction of physical models is often a creative act, and many bespoke examples have been carefully created in the workshops of science departments. There is a very wide range of approaches to physical modeling, ranging from ball-and-stick models available for purchase commercially to molecular models created using 3D printers . The main alternative strategy has been the two-dimensional depiction of molecules, initially in textbooks and research articles and more recently on computers. Molecular graphics has made the visualization of molecular models on computer hardware easier, more accessible, and less expensive, although physical models are still widely used to enhance the tactile and visual message being portrayed.
In the 1600s, Johannes Kepler speculated on the symmetry of snowflakes and the close packing of spherical objects such as fruit. [ 1 ] The symmetrical arrangement of closely packed spheres informed theories of molecular structure in the late 1800s, and many theories of crystallography and solid state inorganic structure used collections of equal and unequal spheres to simulate packing and predict structure.
John Dalton represented compounds as aggregations of circular atoms, and although Johann Josef Loschmidt did not create physical models, his diagrams based on circles are two-dimensional analogues of later models. [ 2 ] August Wilhelm von Hofmann is credited with the first physical molecular model, around 1860. [ 3 ] In Hofmann's model the carbon appears smaller than the hydrogen. The importance of stereochemistry was not then recognised, and the model is essentially topological (methane should be a three-dimensional tetrahedron ).
Jacobus Henricus van 't Hoff and Joseph Le Bel introduced the concept of chemistry in three dimensions of space, that is, stereochemistry. Van 't Hoff built tetrahedral molecules representing the three-dimensional properties of carbon . [ citation needed ]
Repeating units help to show how easily and clearly molecules can be represented through balls that stand for atoms.
The binary compounds sodium chloride (NaCl) and caesium chloride (CsCl) have cubic structures but have different space groups. This can be rationalised in terms of close packing of spheres of different sizes. For example, NaCl can be described as close-packed chloride ions (in a face-centered cubic lattice) with sodium ions in the octahedral holes. After the development of X-ray crystallography as a tool for determining crystal structures, many laboratories built models based on spheres. With the development of plastic or polystyrene balls it is now easy to create such models.
The concept of the chemical bond as a direct link between atoms can be modelled by linking balls (atoms) with sticks/rods (bonds). This has been extremely popular and is still widely used today. Initially atoms were made of spherical wooden balls with specially drilled holes for rods. Thus carbon can be represented as a sphere with four holes at the tetrahedral angles cos −1 (− 1 ⁄ 3 ) ≈ 109.47°.
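The quoted tetrahedral angle follows directly from the arccosine, as a quick check shows:

```python
import math
# The tetrahedral angle quoted above: arccos(-1/3)
print(math.degrees(math.acos(-1.0 / 3.0)))  # 109.4712...
```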
A problem with rigid bonds and holes is that systems with arbitrary angles could not be built. This can be overcome with flexible bonds, originally helical springs but now usually plastic. This also allows double and triple bonds to be approximated by multiple single bonds.
The model shown to the left represents a ball-and-stick model of proline . The balls have colours: black represents carbon (C); red , oxygen (O); blue , nitrogen (N); and white, hydrogen (H). Each ball is drilled with as many holes as its conventional valence (C: 4; N: 3; O: 2; H: 1) directed towards the vertices of a tetrahedron. Single bonds are represented by (fairly) rigid grey rods. Double and triple bonds use two longer flexible bonds which restrict rotation and support conventional cis / trans stereochemistry.
However, most molecules require holes at other angles and specialist companies manufacture kits and bespoke models. Besides tetrahedral, trigonal and octahedral holes, there were all-purpose balls with 24 holes. These models allowed rotation about the single rod bonds, which could be both an advantage (showing molecular flexibility) and a disadvantage (models are floppy). The approximate scale was 5 cm per ångström (0.5 m/nm or 500,000,000:1), but was not consistent over all elements.
Arnold Beevers in Edinburgh created small models using PMMA balls and stainless steel rods. By using individually drilled balls with precise bond angles and bond lengths, these models allowed large crystal structures to be accurately created in a light and rigid form. Figure 4 shows a unit cell of ruby in this style.
Crick and Watson's DNA model and the protein -building kits of Kendrew were among the first skeletal models. These were based on atomic components where the valences were represented by rods; the atoms were points at the intersections. Bonds were created by linking components with tubular connectors with locking screws.
André Dreiding introduced a molecular modelling kit in the late 1950s which dispensed with the connectors. A given atom would have solid and hollow valence spikes. The solid rods clicked into the tubes forming a bond, usually with free rotation. These were and are very widely used in organic chemistry departments and were made so accurately that interatomic measurements could be made by ruler.
More recently, inexpensive plastic models (such as Orbit) use a similar principle. A small plastic sphere has protuberances onto which plastic tubes can be fitted. The flexibility of the plastic means that distorted geometries can be made.
Many inorganic solids consist of atoms surrounded by a coordination sphere of electronegative atoms (e.g. PO 4 tetrahedra, TiO 6 octahedra). Structures can be modelled by gluing together polyhedra made of paper or plastic.
A good example of composite models is the Nicholson approach, widely used from the late 1970s for building models of biological macromolecules . The components are primarily amino acids and nucleic acids with preformed residues representing groups of atoms. Many of these atoms are directly moulded into the template, and fit together by pushing plastic stubs into small holes. The plastic grips well and makes bonds difficult to rotate, so that arbitrary torsion angles can be set and retain their value. The conformations of the backbone and side chains are determined by pre-computing the torsion angles and then adjusting the model with a protractor .
The plastic is white and can be painted to distinguish between O and N atoms. Hydrogen atoms are normally implicit and modelled by snipping off the spokes. A model of a typical protein with approximately 300 residues could take a month to build. It was common for laboratories to build a model for each protein solved. By 2005, so many protein structures were being determined that relatively few models were made.
With the development of computer-based physical modelling, it is now possible to create complete single-piece models by feeding the coordinates of a surface into the computer. Figure 6 shows models of anthrax toxin, left (at a scale of approximately 20 Å/cm or 1:5,000,000) and green fluorescent protein , right (5 cm high, at a scale of about 4 Å/cm or 1:25,000,000) from 3D Molecular Design. Models are made of plaster or starch, using a rapid prototyping process.
It has also recently become possible to create accurate molecular models inside glass blocks using a technique known as subsurface laser engraving . The image at right shows the 3D structure of an E. coli protein (DNA polymerase beta-subunit, PDB code 1MMI) etched inside a block of glass by British company Luminorum Ltd.
Computers can also model molecules mathematically. Programs such as Avogadro can run on typical desktops and can predict bond lengths and angles, molecular polarity and charge distribution, and even quantum mechanical properties such as absorption and emission spectra. However, these sorts of programs struggle as more atoms are added, because the number of calculations grows quadratically with the number of atoms involved; if four times as many atoms are used in a molecule, the calculations take 16 times as long. For most practical purposes, such as drug design or protein folding, the calculations of a model require supercomputing or cannot be done on classical computers at all in a reasonable amount of time. Quantum computers can model molecules with fewer calculations because the type of calculation performed in each cycle by a quantum computer is well-suited to molecular modelling.
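The quadratic growth comes from the number of unique atom pairs, as a quick count illustrates (toy numbers):

```python
def n_pairs(n_atoms):
    # Unique atom pairs, the source of the quadratic cost.
    return n_atoms * (n_atoms - 1) // 2

print(n_pairs(100), n_pairs(400))  # 4950 vs 79800: ~16x the work for 4x the atoms
```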
Some of the most common colors used in molecular models are as follows: white for hydrogen, black or grey for carbon, blue for nitrogen, red for oxygen, yellow for sulfur, orange for phosphorus, and green for chlorine. [ 4 ] [ better source needed ]
This table is an incomplete chronology of events where physical molecular models provided major scientific insights. | https://en.wikipedia.org/wiki/Molecular_model |
Molecular modeling on GPU is the technique of using a graphics processing unit (GPU) for molecular simulations . [ 1 ]
In 2007, Nvidia introduced video cards that could be used not only to show graphics but also for scientific calculations. These cards include many arithmetic units (as of 2016, up to 3,584 in the Tesla P100) working in parallel. Long before this event, the computational power of video cards was used purely to accelerate graphics calculations. The new features of these cards made it possible to develop parallel programs in a high-level application programming interface (API) named CUDA . This technology substantially simplified programming by enabling programs to be written in C / C++ . More recently, OpenCL allows cross-platform GPU acceleration.
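As an illustration of the kind of data-parallel workload these cards accelerate, the sketch below sums a Lennard-Jones energy over all atom pairs on the GPU. It assumes the CuPy library and a CUDA-capable device, and is a toy example rather than a production kernel:

```python
import cupy as cp  # assumes the CuPy library and a CUDA-capable GPU

def lj_total_gpu(coords, epsilon=1.0, sigma=1.0):
    """Total 6-12 Lennard-Jones energy over all atom pairs, computed on the GPU.

    The O(N^2) pair matrix is exactly the data-parallel workload GPUs
    accelerate; parameters and units are illustrative.
    """
    x = cp.asarray(coords)                   # (N, 3) coordinates on the device
    diff = x[:, None, :] - x[None, :, :]     # (N, N, 3) displacement matrix
    r2 = (diff ** 2).sum(axis=-1)
    i, j = cp.triu_indices(x.shape[0], k=1)  # unique pairs only
    sr6 = (sigma ** 2 / r2[i, j]) ** 3
    return float((4.0 * epsilon * (sr6 ** 2 - sr6)).sum())
```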
Quantum chemistry calculations [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] and molecular mechanics simulations [ 8 ] [ 9 ] [ 10 ] ( molecular modeling in terms of classical mechanics ) are among the beneficial applications of this technology. The video cards can accelerate the calculations tens of times, so a PC with such a card has power similar to that of a cluster of workstations based on common processors. | https://en.wikipedia.org/wiki/Molecular_modeling_on_GPUs |
Molecular modelling encompasses all methods, theoretical and computational, used to model or mimic the behaviour of molecules . [ 1 ] The methods are used in the fields of computational chemistry , drug design , computational biology and materials science to study molecular systems ranging from small chemical systems to large biological molecules and material assemblies. The simplest calculations can be performed by hand, but inevitably computers are required to perform molecular modelling of any reasonably sized system. The common feature of molecular modelling methods is the atomistic level description of the molecular systems. This may include treating atoms as the smallest individual unit (a molecular mechanics approach), or explicitly modelling the electrons of each atom (a quantum chemistry approach).
Molecular mechanics is one aspect of molecular modelling, as it involves the use of classical mechanics ( Newtonian mechanics ) to describe the physical basis behind the models. Molecular models typically describe atoms (nucleus and electrons collectively) as point charges with an associated mass. The interactions between neighbouring atoms are described by spring-like interactions (representing chemical bonds ) and Van der Waals forces . The Lennard-Jones potential is commonly used to describe the latter. The electrostatic interactions are computed based on Coulomb's law . Atoms are assigned coordinates in Cartesian space or in internal coordinates , and can also be assigned velocities in dynamical simulations. The atomic velocities are related to the temperature of the system, a macroscopic quantity. The collective mathematical expression is termed a potential function and is related to the system internal energy (U), a thermodynamic quantity equal to the sum of potential and kinetic energies. Methods which minimize the potential energy are termed energy minimization methods (e.g., steepest descent and conjugate gradient ), while methods that model the behaviour of the system with propagation of time are termed molecular dynamics .
This function, referred to as a potential function , computes the molecular potential energy as a sum of energy terms that describe the deviation of bond lengths, bond angles and torsion angles away from equilibrium values, plus terms for non-bonded pairs of atoms describing van der Waals and electrostatic interactions. The set of parameters consisting of equilibrium bond lengths, bond angles, partial charge values, force constants and van der Waals parameters are collectively termed a force field . Different implementations of molecular mechanics use different mathematical expressions and different parameters for the potential function . [ 2 ] The common force fields in use today have been developed by using chemical theory, experimental reference data, and high level quantum calculations. The method, termed energy minimization, is used to find positions of zero gradient for all atoms, in other words, a local energy minimum. Lower energy states are more stable and are commonly investigated because of their role in chemical and biological processes. A molecular dynamics simulation, on the other hand, computes the behaviour of a system as a function of time. It involves solving Newton's laws of motion, principally the second law, F = ma . Integration of Newton's laws of motion, using different integration algorithms, leads to atomic trajectories in space and time. The force on an atom is defined as the negative gradient of the potential energy function. The energy minimization method is useful to obtain a static picture for comparing between states of similar systems, while molecular dynamics provides information about the dynamic processes with the intrinsic inclusion of temperature effects.
Molecules can be modelled either in vacuum, or in the presence of a solvent such as water. Simulations of systems in vacuum are referred to as gas-phase simulations, while those that include the presence of solvent molecules are referred to as explicit solvent simulations. In another type of simulation, the effect of solvent is estimated using an empirical mathematical expression; these are termed implicit solvation simulations.
Most force fields are distance-dependent, making Cartesian coordinates the most convenient expression for them. Yet the comparatively rigid nature of bonds between specific atoms, which in essence defines what is meant by the designation molecule , makes an internal coordinate system the most logical representation. In some fields the IC representation (bond length, angle between bonds, and twist angle of the bond as shown in the figure) is termed the Z-matrix or torsion angle representation. Unfortunately, continuous motions in Cartesian space often require discontinuous angular branches in internal coordinates, making it relatively hard to work with force fields in the internal coordinate representation; conversely, a simple displacement of an atom in Cartesian space may not be a straight line trajectory due to the prohibitions of the interconnected bonds. Thus, it is very common for computational optimizing programs to flip back and forth between representations during their iterations. This can dominate the calculation time of the potential itself and, in long chain molecules, introduce cumulative numerical inaccuracy. While all conversion algorithms produce mathematically identical results, they differ in speed and numerical accuracy. [ 3 ] Currently, the fastest and most accurate torsion to Cartesian conversion is the Natural Extension Reference Frame (NERF) method. [ 3 ]
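To make the torsion-to-Cartesian step concrete, the following sketch places one atom from a bond length, bond angle and torsion angle relative to three previously placed atoms, in the general spirit of the NERF construction; it is an illustrative implementation, not the reference algorithm:

```python
import numpy as np

def place_atom(a, b, c, bond, angle, torsion):
    """Place atom d from internal coordinates (bond, angle, torsion in radians)
    relative to previously placed atoms a, b, c."""
    bc = (c - b) / np.linalg.norm(c - b)   # unit vector along the previous bond
    n = np.cross(b - a, bc)
    n /= np.linalg.norm(n)                 # normal to the a-b-c plane
    m = np.cross(n, bc)                    # completes the local frame (bc, m, n)
    # Displacement of d expressed in the local frame
    d_local = bond * np.array([-np.cos(angle),
                               np.sin(angle) * np.cos(torsion),
                               np.sin(angle) * np.sin(torsion)])
    return c + d_local[0] * bc + d_local[1] * m + d_local[2] * n
```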
Molecular modelling methods are used routinely to investigate the structure, dynamics, surface properties, and thermodynamics of inorganic, biological, and polymeric systems. A large number of molecular models of force field are today readily available in databases. [ 4 ] [ 5 ] The types of biological activity that have been investigated using molecular modelling include protein folding , enzyme catalysis , protein stability, conformational changes associated with biomolecular function, and molecular recognition of proteins, DNA , and membrane complexes. [ 6 ] | https://en.wikipedia.org/wiki/Molecular_modelling |